Sample records for statistical model schroedinger

  1. Initial study of Schroedinger eigenmaps for spectral target detection

    NASA Astrophysics Data System (ADS)

    Dorado-Munoz, Leidy P.; Messinger, David W.

    2016-08-01

    Spectral target detection refers to the process of searching for a specific material with a known spectrum over a large area containing materials with different spectral signatures. Traditional target detection methods in hyperspectral imagery (HSI) require assuming that the data fit some statistical or geometric model and, based on that model, estimating parameters to define a hypothesis test in which one class (the target class) is chosen over the other (the background class). Nonlinear manifold learning methods such as Laplacian eigenmaps (LE) have extensively shown their potential in HSI processing, specifically in classification and segmentation. Recently, Schroedinger eigenmaps (SE), which is built upon LE, has been introduced as a semisupervised classification method. In SE, the Laplacian operator is replaced by the Schroedinger operator, which includes by definition a potential term V that steers the transformation in certain directions, improving the separability between classes. In this regard, we propose a methodology for target detection that is not based on the traditional schemes and that does not need the estimation of statistical or geometric parameters. The method is based on SE, where the potential term V encodes prior knowledge about the target class and steers the transformation in directions where the target location in the new space is known and the separability between target and background is increased. An initial study of how SE can be used in a target detection scheme for HSI is shown here. In-scene pixel and spectral signature detection approaches are presented. The HSI data used comprise various target panels for testing simultaneous detection of multiple objects with different complexities.
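
    The Schroedinger operator at the heart of SE lends itself to a compact numerical sketch. The following is a minimal, hypothetical implementation assuming a heat-kernel k-NN graph and a diagonal potential on labeled target pixels; the function name, the parameters (k, sigma, alpha, dim), and the generalized-eigenproblem form are illustrative choices, not details taken from the paper.

```python
# Minimal sketch of a Schroedinger eigenmaps (SE) embedding.
# Assumptions: heat-kernel k-NN graph, diagonal potential on labeled targets.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh
from sklearn.neighbors import kneighbors_graph

def schroedinger_eigenmaps(X, target_idx, k=10, sigma=1.0, alpha=100.0, dim=10):
    """X: (n_pixels, n_bands) spectra; target_idx: indices of known target pixels."""
    W = kneighbors_graph(X, k, mode='distance')
    W.data = np.exp(-W.data**2 / sigma**2)   # heat-kernel edge weights
    W = 0.5 * (W + W.T)                      # symmetrize the graph
    d = np.asarray(W.sum(axis=1)).ravel()
    L = diags(d) - W                         # unnormalized graph Laplacian
    v = np.zeros(X.shape[0])
    v[target_idx] = 1.0                      # potential encodes the labeled targets
    S = L + alpha * diags(v)                 # Schroedinger operator S = L + alpha*V
    # Smallest eigenpairs of the generalized problem S f = lambda D f
    vals, vecs = eigsh(S, k=dim + 1, M=diags(d), sigma=0, which='LM')
    return vecs[:, 1:]                       # discard the lowest mode, as in LE
```

    With a potential of this form, the low-order eigenvectors pull the labeled target pixels toward a predictable region of the embedding, which is what the detection scheme sketched in the abstract exploits.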

  2. Hidden Statistics of Schroedinger Equation

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2011-01-01

    Work was carried out to determine the mathematical origin of randomness in quantum mechanics and to create a hidden statistics of the Schroedinger equation; i.e., to expose the transitional stochastic process as a "bridge" to the quantum world. The governing equations of hidden statistics would preserve such properties of quantum physics as superposition, entanglement, and direct-product decomposability while allowing one to measure its state variables using classical methods.

  3. Detecting Moving Targets by Use of Soliton Resonances

    NASA Technical Reports Server (NTRS)

    Zak, Michael; Kulikov, Igor

    2003-01-01

    A proposed method of detecting moving targets in scenes that include cluttered or noisy backgrounds is based on a soliton-resonance mathematical model. The model is derived from asymptotic solutions of the cubic Schroedinger equation for a one-dimensional system excited by a position-and-time-dependent externally applied potential. The cubic Schroedinger equation has general significance for time-dependent dispersive waves. It has been used to approximate several phenomena in classical as well as quantum physics, including modulated beams in nonlinear optics, and superfluids (in particular, Bose-Einstein condensates). In the proposed method, one would take advantage of resonant interactions between (1) a soliton excited by the position-and-time-dependent potential associated with a moving target and (2) eigen-solitons, which represent dispersive waves and are solutions of the cubic Schroedinger equation for a time-independent potential.

  4. The thermal-wave model: A Schroedinger-like equation for charged particle beam dynamics

    NASA Technical Reports Server (NTRS)

    Fedele, Renato; Miele, G.

    1994-01-01

    We review some results on longitudinal beam dynamics obtained in the framework of the Thermal Wave Model (TWM). In this model, which has recently shown the capability to describe both longitudinal and transverse dynamics of charged particle beams, the beam dynamics is ruled by Schroedinger-like equations for the beam wave functions, whose squared modulus is proportional to the beam density profile. Remarkably, the role of the Planck constant is played by a diffractive constant epsilon, the emittance, which has a thermal nature.

  5. Newtonian semiclassical gravity in three ontological quantum theories that solve the measurement problem: Formalisms and empirical predictions

    NASA Astrophysics Data System (ADS)

    Derakhshani, Maaneli

    In this thesis, we consider the implications of solving the quantum measurement problem for the Newtonian description of semiclassical gravity. First we review the formalism of the Newtonian description of semiclassical gravity based on standard quantum mechanics---the Schroedinger-Newton theory---and two well-established predictions that come out of it, namely, gravitational 'cat states' and gravitationally-induced wavepacket collapse. Then we review three quantum theories with 'primitive ontologies' that are well known to solve the measurement problem---Schroedinger's many worlds theory, the GRW collapse theory with matter density ontology, and Nelson's stochastic mechanics. We extend the formalisms of these three quantum theories to Newtonian models of semiclassical gravity and evaluate their implications for gravitational cat states and gravitational wavepacket collapse. We find that (1) Newtonian semiclassical gravity based on Schroedinger's many worlds theory is mathematically equivalent to the Schroedinger-Newton theory and makes the same predictions; (2) Newtonian semiclassical gravity based on the GRW theory differs from Schroedinger-Newton only in the use of a stochastic collapse law, but this law allows it to suppress gravitational cat states so as not to be in contradiction with experiment, while allowing for gravitational wavepacket collapse to happen as well; (3) Newtonian semiclassical gravity based on Nelson's stochastic mechanics differs significantly from Schroedinger-Newton, and predicts neither gravitational cat states nor gravitational wavepacket collapse. Considering that gravitational cat states are experimentally ruled out, but gravitational wavepacket collapse is testable in the near future, this implies that only the latter two are viable theories of Newtonian semiclassical gravity and that they can be experimentally tested against each other in future molecular interferometry experiments that are anticipated to be capable of testing the gravitational wavepacket collapse prediction.

  6. On the relationship between the classical Dicke-Jaynes-Cummings-Gaudin model and the nonlinear Schroedinger equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Du, Dianlou; Geng, Xue

    2013-05-15

    In this paper, the relationship between the classical Dicke-Jaynes-Cummings-Gaudin (DJCG) model and the nonlinear Schroedinger (NLS) equation is studied. It is shown that the classical DJCG model is equivalent to a stationary NLS equation. Moreover, the standard NLS equation can be solved by the classical DJCG model and a suitably chosen higher order flow. Further, it is also shown that the classical DJCG model can be transformed into the classical Gaudin spin model in an external magnetic field through a deformation of the Lax matrix. Finally, the separated variables are constructed on the common level sets of Casimir functions and the generalized action-angle coordinates are introduced via the Hamilton-Jacobi equation.

  7. Stochasticity in numerical solutions of the nonlinear Schroedinger equation

    NASA Technical Reports Server (NTRS)

    Shen, Mei-Mei; Nicholson, D. R.

    1987-01-01

    The cubically nonlinear Schroedinger equation is an important model of nonlinear phenomena in fluids and plasmas. Numerical solutions in a spatially periodic system commonly involve truncation to a finite number of Fourier modes. These solutions are found to be stochastic in the sense that the largest Liapunov exponent is positive. As the number of modes is increased, the size of this exponent appears to converge to zero, in agreement with the recent demonstration of the integrability of the spatially periodic case.
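
    For readers who want to reproduce this kind of experiment, a minimal split-step Fourier integrator for the periodic cubic Schroedinger equation is sketched below. The normalization i u_t + u_xx + 2|u|^2 u = 0, the step size, and the mode count are illustrative assumptions; the largest Liapunov exponent can then be estimated by co-evolving a slightly perturbed copy of the field and tracking the exponential growth rate of the separation.

```python
# Split-step Fourier integrator for the periodic cubic NLS
#   i u_t + u_xx + 2|u|^2 u = 0,
# truncated to the n Fourier modes carried by the grid.
import numpy as np

def ssfm_cubic_nls(u0, L=2*np.pi, dt=1e-4, steps=10000):
    n = u0.size
    k = 2*np.pi*np.fft.fftfreq(n, d=L/n)        # wavenumbers of the truncated basis
    lin = np.exp(-1j * k**2 * dt)               # exact linear (dispersive) step
    u = u0.astype(complex)
    for _ in range(steps):
        u = np.fft.ifft(lin * np.fft.fft(u))    # dispersion in Fourier space
        u = u * np.exp(2j * np.abs(u)**2 * dt)  # exact pointwise nonlinear step
    return u
```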

  8. Construction of stable explicit finite-difference schemes for Schroedinger type differential equations

    NASA Technical Reports Server (NTRS)

    Mickens, Ronald E.

    1989-01-01

    A family of conditionally stable, forward Euler finite difference equations can be constructed for the simplest equation of Schroedinger type, namely u_t = iu_xx. Generalization of this result to physically realistic Schroedinger type equations is presented.
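
    Context for why such constructions are nontrivial: the naive forward-Euler discretization of u_t = iu_xx is unconditionally unstable, as a standard von Neumann analysis shows, so conditional stability requires a nonstandard modification of the scheme (which the paper supplies and which is not reproduced here). A sketch of the instability argument, with r = Δt/Δx²:

```latex
u_j^{n+1} = u_j^n + i\,r\,\bigl(u_{j+1}^n - 2u_j^n + u_{j-1}^n\bigr), \qquad r = \frac{\Delta t}{\Delta x^2}.
\quad\text{Substituting } u_j^n = g^n e^{ij\theta}\text{ gives}\quad
g(\theta) = 1 - 2ir(1-\cos\theta), \qquad
|g(\theta)|^2 = 1 + 4r^2(1-\cos\theta)^2 \;\ge\; 1,
```

    so every nonconstant Fourier mode grows for any r > 0, and stability must come from altering the discretization itself.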

  9. Linear canonical transformations of coherent and squeezed states in the Wigner phase space. III - Two-mode states

    NASA Technical Reports Server (NTRS)

    Han, D.; Kim, Y. S.; Noz, Marilyn E.

    1990-01-01

    It is shown that the basic symmetry of two-mode squeezed states is governed by the group SP(4) in the Wigner phase space which is locally isomorphic to the (3 + 2)-dimensional Lorentz group. This symmetry, in the Schroedinger picture, appears as Dirac's two-oscillator representation of O(3,2). It is shown that the SU(2) and SU(1,1) interferometers exhibit the symmetry of this higher-dimensional Lorentz group. The mathematics of two-mode squeezed states is shown to be applicable to other branches of physics including thermally excited states in statistical mechanics and relativistic extended hadrons in the quark model.

  10. A new fundamental model of moving particle for reinterpreting Schroedinger equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Umar, Muhamad Darwis

    2012-06-20

    The study of the Schroedinger equation based on a hypothesis that every particle must move randomly in a quantum-sized volume has been done. In addition to random motion, every particle can undergo relative motion through the movement of its quantum-sized volume; these motions can also coincide. In this proposed model, the random motion is an intrinsic property of the particle. Every change in the speed of the intrinsic random motion and/or in the velocity of the translational motion of a quantum-sized volume represents a transition between two states, and the change in the speed of the intrinsic random motion generates a diffusion-process, or Brownian-motion, perspective. The diffusion process can take place as backward and forward processes and represents a dissipative system. To derive the Schroedinger equation from our hypothesis we use the time operator introduced by Nelson. From a fundamental analysis, we find that, naturally, Newton's law F = ma should be viewed not as an external force, but as describing both the presence of intrinsic random motion and the change of the particle energy.

  11. Topics in strong Langmuir turbulence

    NASA Technical Reports Server (NTRS)

    Nicholson, D. R.

    1983-01-01

    Progress in two approaches to the study of strong Langmuir turbulence is reported. In two spatial dimensions, numerical solution of the Zakharov equations yields a steady state involving linear growth, linear damping, and a collection of coherent, long-lived entities which might loosely be called solitons. In one spatial dimension, a statistical theory is applied to the cubically nonlinear Schroedinger equation and is solved analytically in a special case.

  12. Topics in strong Langmuir turbulence

    NASA Technical Reports Server (NTRS)

    Nicholson, D. R.

    1982-01-01

    Progress in two approaches to the study of strong Langmuir turbulence is reported. In two spatial dimensions, numerical solution of the Zakharov equations yields a steady state involving linear growth, linear damping, and a collection of coherent, long-lived entities which might loosely be called solitons. In one spatial dimension, a statistical theory is applied to the cubically nonlinear Schroedinger equation and is solved analytically in a special case.

  13. Course 4: Anyons

    NASA Astrophysics Data System (ADS)

    Myrheim, J.

    Contents:
    1 Introduction: 1.1 The concept of particle statistics; 1.2 Statistical mechanics and the many-body problem; 1.3 Experimental physics in two dimensions; 1.4 The algebraic approach: Heisenberg quantization; 1.5 More general quantizations
    2 The configuration space: 2.1 The Euclidean relative space for two particles; 2.2 Dimensions d=1,2,3; 2.3 Homotopy; 2.4 The braid group
    3 Schroedinger quantization in one dimension
    4 Heisenberg quantization in one dimension: 4.1 The coordinate representation
    5 Schroedinger quantization in dimension d ≥ 2: 5.1 Scalar wave functions; 5.2 Homotopy; 5.3 Interchange phases; 5.4 The statistics vector potential; 5.5 The N-particle case; 5.6 Chern-Simons theory
    6 The Feynman path integral for anyons: 6.1 Eigenstates for position and momentum; 6.2 The path integral; 6.3 Conjugation classes in S_N; 6.4 The non-interacting case; 6.5 Duality of Feynman and Schroedinger quantization
    7 The harmonic oscillator: 7.1 The two-dimensional harmonic oscillator; 7.2 Two anyons in a harmonic oscillator potential; 7.3 More than two anyons; 7.4 The three-anyon problem
    8 The anyon gas: 8.1 The cluster and virial expansions; 8.2 First and second order perturbative results; 8.3 Regularization by periodic boundary conditions; 8.4 Regularization by a harmonic oscillator potential; 8.5 Bosons and fermions; 8.6 Two anyons; 8.7 Three anyons; 8.8 The Monte Carlo method; 8.9 The path integral representation of the coefficients G_P; 8.10 Exact and approximate polynomials; 8.11 The fourth virial coefficient of anyons; 8.12 Two polynomial theorems
    9 Charged particles in a constant magnetic field: 9.1 One particle in a magnetic field; 9.2 Two anyons in a magnetic field; 9.3 The anyon gas in a magnetic field
    10 Interchange phases and geometric phases: 10.1 Introduction to geometric phases; 10.2 One particle in a magnetic field; 10.3 Two particles in a magnetic field; 10.4 Interchange of two anyons in potential wells; 10.5 Laughlin's theory of the fractional quantum Hall effect

  14. The harmonic oscillator and the position dependent mass Schroedinger equation: isospectral partners and factorization operators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morales, J.; Ovando, G.; Pena, J. J.

    2010-12-23

    One of the most important scientific contributions of Professor Marcos Moshinsky has been his study of the harmonic oscillator in quantum theory vis a vis the standard Schroedinger equation with constant mass [1]. However, a simple description of the motion of a particle interacting with an external environment, such as happens in compositionally graded alloys, consists of replacing the mass by the so-called effective mass, which is in general variable and dependent on position. Therefore, honoring in memoriam Marcos Moshinsky, in this work we consider the position-dependent mass Schroedinger equations (PDMSE) for the harmonic oscillator potential model, as the former potential as well as with equi-spaced spectrum solutions, i.e., harmonic oscillator isospectral partners. To that purpose, the point canonical transformation method to convert a general second order differential equation (DE), of Sturm-Liouville type, into a Schroedinger-like standard equation is applied to the PDMSE. In that case, the former potential associated to the PDMSE and the potential involved in the Schroedinger-like standard equation are related through a Riccati-type relationship that includes the equivalent of the Witten superpotential to determine the exactly solvable position-dependent mass distribution (PDMD) m(x). Even though the proposed approach is exemplified with the harmonic oscillator potential, the procedure is general and can be straightforwardly applied to other DEs.
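
    For orientation, one common ordering of the position-dependent mass Schroedinger equation (the BenDaniel-Duke form) is shown below; the paper works from a general Sturm-Liouville equation, so taking this particular ordering is an illustrative assumption:

```latex
-\frac{\hbar^2}{2}\,\frac{d}{dx}\!\left(\frac{1}{m(x)}\,\frac{d\psi}{dx}\right) + V(x)\,\psi(x) = E\,\psi(x).
```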

  15. Hunting for Snarks in Quantum Mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hestenes, David

    2009-12-08

    A long-standing debate over the interpretation of quantum mechanics has centered on the meaning of Schroedinger's wave function {psi} for an electron. Broadly speaking, there are two major opposing schools. On the one side, the Copenhagen school (led by Bohr, Heisenberg and Pauli) holds that {psi} provides a complete description of a single electron state; hence the probability interpretation of {psi}{psi}* expresses an irreducible uncertainty in electron behavior that is intrinsic in nature. On the other side, the realist school (led by Einstein, de Broglie, Bohm and Jaynes) holds that {psi} represents a statistical ensemble of possible electron states; hence it is an incomplete description of a single electron state. I contend that the debaters have overlooked crucial facts about the electron revealed by Dirac theory. In particular, analysis of electron zitterbewegung (first noticed by Schroedinger) opens a window to particle substructure in quantum mechanics that explains the physical significance of the complex phase factor in {psi}. This led to a testable model for particle substructure with surprising support by recent experimental evidence. If the explanation is upheld by further research, it will resolve the debate in favor of the realist school. I give details. The perils of research on the foundations of quantum mechanics have been foreseen by Lewis Carroll in The Hunting of the Snark!

  16. Spectral Target Detection using Schroedinger Eigenmaps

    NASA Astrophysics Data System (ADS)

    Dorado-Munoz, Leidy P.

    Applications of optical remote sensing include environmental monitoring, military monitoring, meteorology, mapping, surveillance, etc. Many of these tasks include the detection of specific objects or materials, usually few or small, which are surrounded by other materials that clutter the scene and hide the relevant information. This target detection process has been boosted lately by the use of hyperspectral imagery (HSI), since its high spectral dimension provides detailed spectral information that is desirable in data exploitation. Typical spectral target detectors rely on statistical or geometric models to characterize the spectral variability of the data. However, in many cases these parametric models do not fit HSI data well, which impacts the detection performance. On the other hand, non-linear transformation methods, mainly based on manifold learning algorithms, have shown potential in HSI transformation, dimensionality reduction and classification. In target detection, non-linear transformation algorithms are used as preprocessing techniques that transform the data to a more suitable lower dimensional space, where the statistical or geometric detectors are applied. One of these non-linear manifold methods is the Schroedinger Eigenmaps (SE) algorithm, which has been introduced as a technique for semi-supervised classification. The core tool of the SE algorithm is the Schroedinger operator, which includes a potential term that encodes prior information about the materials present in a scene and enables the embedding to be steered in convenient directions in order to cluster similar pixels together. A novel target detection methodology based on the SE algorithm is proposed in this thesis. The proposed methodology includes not just the transformation of the data to a lower dimensional space but also the definition of a detector that capitalizes on the theory behind SE. The fact that target pixels and similar pixels are clustered in a predictable region of the low-dimensional representation is used to define a decision rule that allows one to identify target pixels over the rest of the pixels in a given image. In addition, a knowledge propagation scheme is used to combine spectral and spatial information as a means to propagate the "potential constraints" to nearby points. The propagation scheme is introduced to reinforce weak connections and improve the separability between most of the target pixels and the background. Experiments using different HSI data sets are carried out in order to test the proposed methodology. The assessment is performed from quantitative and qualitative points of view, by comparing the SE-based methodology against two other detection methodologies that use linear/non-linear algorithms as transformations and the well-known Adaptive Coherence/Cosine Estimator (ACE) detector. Overall results show that the SE-based detector outperforms the other two detection methodologies, which indicates the usefulness of the SE transformation in spectral target detection problems.

  17. Schroedinger's Wave Structure of Matter (WSM)

    NASA Astrophysics Data System (ADS)

    Wolff, Milo; Haselhurst, Geoff

    2009-10-01

    The puzzle of the electron is due to the belief that it is a discrete particle. Einstein deduced that this structure was impossible, since Nature does not allow the discrete particle. Clifford (1876) rejected discrete matter and suggested structures in 'space'. Schroedinger (1937) also eliminated discrete particles, writing: What we observe as material bodies and forces are nothing but shapes and variations in the structure of space. Particles are just schaumkommen (appearances). He rejected wave-particle duality. Schroedinger's concept was developed by Milo Wolff and Geoff Haselhurst (SpaceAndMotion.com) using the Scalar Wave Equation to find spherical wave solutions in a 3D quantum space. This WSM, the origin of all the Natural Laws, contains all the electron's properties, including the Schroedinger Equation. The origin of Newton's Law F=ma is no longer a puzzle; it originates from Mach's principle of inertia (1883), which depends on the space medium and the WSM. Carver Mead (1999) at CalTech used the WSM to design Intel micro-chips, correcting errors of Maxwell's magnetic Equations. Applications of the WSM also describe matter at molecular dimensions: alloys, catalysts, biology and medicine, molecular computers and memories. See ``Schroedinger's Universe'' at Amazon.com

  18. The Universe according to Schroedinger and Milo

    NASA Astrophysics Data System (ADS)

    Wolff, Milo

    2009-10-01

    The puzzle of the electron is due to the belief that it is a discrete particle. Schroedinger (1937) eliminated discrete particles, writing: What we observe as material bodies and forces are nothing but shapes and variations in the structure of space. Particles are just schaumkommen (appearances). Thus he rejected wave-particle duality. Schroedinger's concept was developed by Milo Wolff using a Scalar Wave Equation in 3D quantum space to find wave solutions. The resulting Wave Structure of Matter (WSM) contains all the electron's properties, including the Schroedinger Equation. Further, Newton's Law F=ma is no longer a puzzle; it originates from Mach's principle of inertia (1883), which depends on the space medium and the WSM. These are the origin of all the Natural Laws. Carver Mead (1999) at CalTech used the WSM to design Intel micro-chips and to correct errors of Maxwell's Equations. Applications of the WSM describe matter at molecular dimensions: industrial alloys, catalysts, biology and medicine, molecular computers and memories. See the book ``Schroedinger's Universe'' at Amazon.com. The number of WSM pioneers is growing rapidly. Some are: SpaceAndMotion.com, QuantumMatter.com, treeincarnation.com/audio/milowolff.htm, daugerresearch.com/orbitals/index.shtml, glafreniere.com/matter.html (A new Universe).

  19. Kinetic effects on Alfven wave nonlinearity. II - The modified nonlinear wave equation

    NASA Technical Reports Server (NTRS)

    Spangler, Steven R.

    1990-01-01

    A previously developed Vlasov theory is used here to study the role of resonant particle and other kinetic effects on Alfven wave nonlinearity. A hybrid fluid-Vlasov equation approach is used to obtain a modified version of the derivative nonlinear Schroedinger equation. The differences between a scalar model for the plasma pressure and a tensor model are discussed. The susceptibility of the modified nonlinear wave equation to modulational instability is studied. The modulational instability normally associated with the derivative nonlinear Schroedinger equation will, under most circumstances, be restricted to left circularly polarized waves. The nonlocal term in the modified nonlinear wave equation engenders a new modulational instability that is independent of beta and the sense of circular polarization. This new instability may explain the occurrence of wave packet steepening for all values of the plasma beta in the vicinity of the earth's bow shock.
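
    For reference, the unmodified derivative nonlinear Schroedinger equation for the complex transverse magnetic field b = b_y + ib_z of a parallel-propagating Alfven wave is commonly written in the form below; the coefficients alpha and mu depend on the normalization, so this is an orienting sketch rather than the modified equation derived in the paper:

```latex
\frac{\partial b}{\partial t} \;+\; \alpha\,\frac{\partial}{\partial x}\bigl(|b|^{2}\,b\bigr) \;+\; i\,\mu\,\frac{\partial^{2} b}{\partial x^{2}} \;=\; 0.
```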

  20. Niels Bohr's discussions with Albert Einstein, Werner Heisenberg, and Erwin Schroedinger: the origins of the principles of uncertainty and complementarity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehra, J.

    1987-05-01

    In this paper, the main outlines of the discussions of Niels Bohr with Albert Einstein, Werner Heisenberg, and Erwin Schroedinger during 1920-1927 are treated. From the formulation of quantum mechanics in 1925-1926 and wave mechanics in 1926, there emerged Born's statistical interpretation of the wave function in summer 1926, and on the basis of the quantum mechanical transformation theory - formulated in fall 1926 by Dirac, London, and Jordan - Heisenberg formulated the uncertainty principle in early 1927. At the Volta Conference in Como in September 1927 and at the fifth Solvay Conference in Brussels the following month, Bohr publicly enunciated his complementarity principle, which had been developing in his mind for several years. The Bohr-Einstein discussions about the consistency and completeness of quantum mechanics and of physical theory as such - formally begun in October 1927 at the fifth Solvay Conference and carried on at the sixth Solvay Conference in October 1930 - were continued during the next decades. All these aspects are briefly summarized.

  1. Explicit blow-up solutions to the Schroedinger maps from R{sup 2} to the hyperbolic 2-space H{sup 2}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding Qing

    2009-10-15

    In this article, we prove that the equation of the Schroedinger maps from R{sup 2} to the hyperbolic 2-space H{sup 2} is SU(1,1)-gauge equivalent to the following 1+2 dimensional nonlinear Schroedinger-type system of three unknown complex functions p, q, r, and a real function u: iq{sub t}+q{sub zz}-2uq+2(pq){sub z}-2pq{sub z}-4|p|{sup 2}q=0, ir{sub t}-r{sub zz}+2ur+2(pr){sub z}-2pr{sub z}+4|p|{sup 2}r=0, ip{sub t}+(qr){sub z}-u{sub z}=0, p{sub z}+p{sub z}=-|q|{sup 2}+|r|{sup 2}, -r{sub z}+q{sub z}=-2(pr+pq), where z is a complex coordinate of the plane R{sup 2} and z̄ is the complex conjugate of z. Although this nonlinear Schroedinger-type system looks complicated, it admits a class of explicit smooth blow-up solutions: p=0, q=(e{sup i(bzz/2(a+bt))}/(a+bt)){alpha}z, r=(e{sup -i(bzz/2(a+bt))}/(a+bt)){alpha}z, u=2{alpha}{sup 2}zz/(a+bt){sup 2}, where a and b are real numbers with ab<0 and {alpha} satisfies {alpha}{sup 2}=b{sup 2}/16. From these facts, we explicitly construct smooth solutions to the Schroedinger maps from R{sup 2} to the hyperbolic 2-space H{sup 2} by using the gauge transformations such that the absolute values of their gradients blow up in finite time. This reveals some blow-up phenomenon of Schroedinger maps.

  2. AKNS hierarchy, Darboux transformation and conservation laws of the 1D nonautonomous nonlinear Schroedinger equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao Dun; Center for Interdisciplinary Studies, Lanzhou University, Lanzhou 730000; Zhang Yujuan

    2011-04-15

    By constructing nonisospectral Ablowitz-Kaup-Newell-Segur (AKNS) hierarchy, we investigate the nonautonomous nonlinear Schroedinger (NLS) equations which have been used to describe the Feshbach resonance management in matter-wave solitons in Bose-Einstein condensate and the dispersion and nonlinearity managements for optical solitons. It is found that these equations are some special cases of a new integrable model of nonlocal nonautonomous NLS equations. Based on the Lax pairs, the Darboux transformation and conservation laws are explored. It is shown that the local external potentials would break down the classical infinite number of conservation laws. The result indicates that the integrability of the nonautonomous NLS systems may be nontrivial in comparison to the conventional concept of integrability in the canonical case.

  3. Capillary waves in the subcritical nonlinear Schroedinger equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kozyreff, G.

    2010-01-15

    We expand recent results on the nonlinear Schroedinger equation with cubic-quintic nonlinearity to show that some solutions are described by the Bernoulli equation in the presence of surface tension. As a consequence, capillary waves are predicted and found numerically at the interface between regions of large and low amplitude.

  4. Stochastic Models for Laser Propagation in Atmospheric Turbulence.

    NASA Astrophysics Data System (ADS)

    Leland, Robert Patton

    In this dissertation, stochastic models for laser propagation in atmospheric turbulence are considered. A review of the existing literature on laser propagation in the atmosphere and white noise theory is presented, with a view toward relating the white noise integral and Ito integral approaches. The laser beam intensity is considered as the solution to a random Schroedinger equation, or forward scattering equation. This model is formulated in a Hilbert space context as an abstract bilinear system with a multiplicative white noise input, as in the literature. The model is also modeled in the Banach space of Fresnel class functions to allow the plane wave case and the application of path integrals. Approximate solutions to the Schroedinger equation of the Trotter-Kato product form are shown to converge for each white noise sample path. The product forms are shown to be physical random variables, allowing an Ito integral representation. The corresponding Ito integrals are shown to converge in mean square, providing a white noise basis for the Stratonovich correction term associated with this equation. Product form solutions for Ornstein -Uhlenbeck process inputs were shown to converge in mean square as the input bandwidth was expanded. A digital simulation of laser propagation in strong turbulence was used to study properties of the beam. Empirical distributions for the irradiance function were estimated from simulated data, and the log-normal and Rice-Nakagami distributions predicted by the classical perturbation methods were seen to be inadequate. A gamma distribution fit the simulated irradiance distribution well in the vicinity of the boresight. Statistics of the beam were seen to converge rapidly as the bandwidth of an Ornstein-Uhlenbeck process was expanded to its white noise limit. Individual trajectories of the beam were presented to illustrate the distortion and bending of the beam due to turbulence. Feynman path integrals were used to calculate an approximate expression for the mean of the beam intensity without using the Markov, or white noise, assumption, and to relate local variations in the turbulence field to the behavior of the beam by means of two approximations.
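
    The numerical analogue of this random Schroedinger (forward-scattering) model is split-step beam propagation through random phase screens. The sketch below alternates an exact paraxial vacuum propagator with multiplicative phase screens; the white-noise screen statistics are an illustrative stand-in for a Kolmogorov spectrum, and all parameter names are hypothetical.

```python
# Split-step propagation of a 2-D complex field through random phase screens,
# a standard numerical model for the random (forward-scattering) Schroedinger
# equation of beam propagation in turbulence.
import numpy as np

def propagate(u0, wavelength, dx, dz, n_steps, screen_std=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = u0.shape[0]
    k0 = 2*np.pi / wavelength
    fx = 2*np.pi*np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(fx, fx)
    H = np.exp(-1j * (kx**2 + ky**2) * dz / (2*k0))    # paraxial vacuum step
    u = u0.astype(complex)
    for _ in range(n_steps):
        u = np.fft.ifft2(H * np.fft.fft2(u))           # diffraction half of the split
        screen = rng.normal(0.0, screen_std, (n, n))   # white-noise phase screen
        u = u * np.exp(1j * screen)                    # refraction by turbulence
    return u
```

    Irradiance statistics of the kind studied above are then estimated by averaging |u|^2 over many independent screen realizations.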

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paavola, Janika; Hall, Michael J. W.; Paris, Matteo G. A.

    The transition from quantum to classical, in the case of a quantum harmonic oscillator, is typically identified with the transition from a quantum superposition of macroscopically distinguishable states, such as the Schroedinger-cat state, into the corresponding statistical mixture. This transition is commonly characterized by the asymptotic loss of the interference term in the Wigner representation of the cat state. In this paper we show that the quantum-to-classical transition has different dynamical features depending on the measure for nonclassicality used. Measures based on an operatorial definition have well-defined physical meaning and allow a deeper understanding of the quantum-to-classical transition. Our analysis shows that, for most nonclassicality measures, the Schroedinger-cat state becomes classical after a finite time. Moreover, our results challenge the prevailing idea that more macroscopic states are more susceptible to decoherence in the sense that the transition from quantum to classical occurs faster. Since nonclassicality is a prerequisite for entanglement generation our results also bridge the gap between decoherence, which is lost only asymptotically, and entanglement, which may show a "sudden death". In fact, whereas the loss of coherences still remains asymptotic, we emphasize that the transition from quantum to classical can indeed occur at a finite time.

  6. Vibrational Schroedinger Cats

    NASA Technical Reports Server (NTRS)

    Kis, Z.; Janszky, J.; Vinogradov, An. V.; Kobayashi, T.

    1996-01-01

    The optical Schroedinger cat states are simple realizations of quantum states having nonclassical features. It is shown that vibrational analogues of such states can be realized in a double-pulse excitation experiment of vibronic transitions. To track the evolution of the vibrational wave packet we derive a non-unitary time evolution operator, so that calculations are made in a quasi-Heisenberg picture.

  7. Schroedinger operators with the q-ladder symmetry algebras

    NASA Technical Reports Server (NTRS)

    Skorik, Sergei; Spiridonov, Vyacheslav

    1994-01-01

    A class of the one-dimensional Schroedinger operators L with the symmetry algebra L B^(+/-) = q^(+/-2) B^(+/-) L, [B^(+), B^(-)] = P_N(L), is described. Here B^(+/-) are the 'q-ladder' operators and P_N(L) is a polynomial of order N. Peculiarities of the coherent states of this algebra are briefly discussed.

  8. Schroedinger's Wave Structure of Matter (WSM)

    NASA Astrophysics Data System (ADS)

    Wolff, Milo

    2009-05-01

    The puzzle of the electron is due to the belief that it is a discrete particle. Einstein deduced that this structure was impossible, since Nature does not allow the discrete particle. Clifford (1876) rejected discrete matter and suggested structures in 'space'. Schroedinger (1937) also eliminated discrete particles, writing: What we observe as material bodies and forces are nothing but shapes and variations in the structure of space. Particles are just schaumkommen (appearances). He rejected wave-particle duality. Schroedinger's concept was developed by Milo Wolff and Geoff Haselhurst (http://www.SpaceAndMotion.com) using the Scalar Wave Equation to find spherical wave solutions in a 3D quantum space. This WSM is the origin of all the Natural Laws; thus it contains all the electron's properties, including the Schroedinger Equation. The origin of Newton's Law F=ma is no longer a puzzle; it is shown to originate from Mach's principle of inertia (1883), which depends on the space medium. Carver Mead (1999) applied the WSM to design Intel micro-chips, correcting errors of Maxwell's magnetic Equations. Applications of the WSM describe matter at molecular dimensions: alloys, catalysts, the mechanisms of biology and medicine, molecular computers and memories. See http://www.amazon.com/Schro at Amazon.com.

  9. "Schroedinger's Cat" Molecules Give Rise to Exquisitely Detailed Movies

    ScienceCinema

    None

    2018-01-16

    One of the most famous mind-twisters of the quantum world is the thought experiment known as “Schroedinger’s Cat,” in which a cat placed in a box and potentially exposed to poison is simultaneously dead and alive until someone opens the box and peeks inside. Scientists have known for a long time that an atom or molecule can also be in two different states at once. Now researchers at the Stanford PULSE Institute and the Department of Energy’s SLAC National Accelerator Laboratory have exploited this Schroedinger’s Cat behavior to create X-ray movies of atomic motion with much more detail than ever before.

  10. Intermittency and solitons in the driven dissipative nonlinear Schroedinger equation

    NASA Technical Reports Server (NTRS)

    Moon, H. T.; Goldman, M. V.

    1984-01-01

    The cubic nonlinear Schroedinger equation, in the presence of driving and Landau damping, is studied numerically. As the pump intensity is increased, the system exhibits a transition from intermittency to a two-torus to chaos. The laminar phase of the intermittency is also a two-torus motion which corresponds in physical space to two identical solitons of amplitude determined by a power-balance equation.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Podoshvedov, S. A., E-mail: podoshvedov@mail.ru

    A method to generate Schroedinger cat states in free propagating optical fields based on the use of displaced states (or displacement operators) is developed. Some optical schemes with photon-added coherent states are studied. The schemes are modifications of the general method based on a sequence of displacements and photon additions or subtractions adjusted to generate Schroedinger cat states of a larger size. The effects of detection inefficiency are taken into account.
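
    For context, the even and odd Schroedinger cat states that such displacement-and-photon-addition schemes aim at are superpositions of two coherent states of opposite phase; the standard definition is

```latex
|\mathrm{cat}_{\pm}\rangle = N_{\pm}\bigl(|\alpha\rangle \pm |{-\alpha}\rangle\bigr), \qquad
N_{\pm} = \bigl[\,2\bigl(1 \pm e^{-2|\alpha|^{2}}\bigr)\bigr]^{-1/2},
```

    with the "size" of the cat conventionally measured by |alpha|^2, the quantity the sequence of operations described above is designed to enlarge.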

  12. Atomic Schroedinger cat-like states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enriquez-Flores, Marco; Rosas-Ortiz, Oscar; Departamento de Fisica, Cinvestav, A.P. 14-740, Mexico D.F. 07000

    2010-10-11

    After a short overview of the basic mathematical structure of quantum mechanics we analyze the Schroedinger's antinomic example of a living and dead cat mixed in equal parts. Superpositions of Glauber kets are shown to approximate such macroscopic states. Then, two-level atomic states are used to construct mesoscopic kittens as appropriate linear combinations of angular momentum eigenkets for j = 1/2. Some general comments close the present contribution.

  13. Quantized expected returns in terms of dividend yield at the money

    NASA Astrophysics Data System (ADS)

    Dieng, Lamine

    2011-03-01

    We use the Bachelier (additive) model and the Black-Scholes (multiplicative) model for the stock price movement for an investor who has entered into an American call option contract. We assume the investor pays a certain dividend yield on the expected rate of returns from buying stocks. In this work, we also assume the stock price to be initially in the out-of-the-money state and to eventually move up through the at-the-money state to the deep in-the-money state, where the expected future payoffs and returns are positive for the stock holder. We call at the money a singularity point because the expected payoff vanishes there. Then, using martingale, supermartingale and Markov theories, we obtain the Bachelier-type Black-Scholes equation and the Black-Scholes equation, which we hedge in the limit where the change of the expected payoff of the call option is extremely small. Hence, by comparison we obtain the time-independent Schroedinger equation of Quantum Mechanics. We solve the time-independent Schroedinger equation completely for both models to obtain the expected rate of returns and the expected payoffs for the stock holder at the money. We find the expected rate of returns to be quantized in terms of the dividend yield.
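
    The correspondence invoked here follows a standard route: after a log-price substitution and an exponential rescaling (with constants alpha, beta chosen to remove drift and discounting), the Black-Scholes equation with dividend yield q becomes a constant-coefficient diffusion equation, whose formal continuation to imaginary time is a free Schroedinger equation. The textbook chain is sketched below for orientation; the paper's own Bachelier-type derivation differs in detail:

```latex
\frac{\partial V}{\partial t} + \tfrac{1}{2}\sigma^{2}S^{2}\,\frac{\partial^{2}V}{\partial S^{2}}
 + (r-q)\,S\,\frac{\partial V}{\partial S} - rV = 0
\;\xrightarrow[\;V = e^{\alpha x + \beta\tau}\,u\;]{\;x = \ln S,\ \tau = T - t\;}\;
\frac{\partial u}{\partial \tau} = \tfrac{1}{2}\sigma^{2}\,\frac{\partial^{2}u}{\partial x^{2}}
\;\xrightarrow{\;\tau \to i\tau\;}\;
i\,\frac{\partial u}{\partial \tau} = -\tfrac{1}{2}\sigma^{2}\,\frac{\partial^{2}u}{\partial x^{2}}.
```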

  14. Blow-up profile to the solutions of two-coupled Schroedinger equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Jianqing; Guo Boling

    2009-02-15

    The model of the following two-coupled Schroedinger equations, iu{sub t}+(1/2){delta}u=(g{sub 11}|u|{sup 2p}+g|u|{sup p-1}|v|{sup p+1})u, (t,x)(set-membership sign)R{sub +}xR{sup N}, and iv{sub t}+(1/2){delta}v=(g|u|{sup p+1}|v|{sup p-1}+g{sub 22}|v|{sup 2p})v, (t,x)(set-membership sign)R{sub +}xR{sup N}, is proposed in the study of Bose-Einstein condensates [Mitchell et al., ''Self-trapping of partially spatially incoherent light,'' Phys. Rev. Lett. 77, 490 (1996)]. We prove that for suitable initial data and p the solution blows up exactly like a {delta} function. As a by-product, we prove that a similar phenomenon occurs for the critical two-coupled Schroedinger equations with harmonic potential [Perez-Garcia, V. M. and Beitia, T. B., ''Symbiotic solitons in heteronuclear multicomponent Bose-Einstein condensates,'' Phys. Rev. A 72, 033620 (2005)], iu{sub t}+(1/2){delta}u=({omega}/2)|x|{sup 2}u+(g{sub 11}|u|{sup 2p}+g|u|{sup p-1}|v|{sup p+1})u, x(set-membership sign)R{sup N}, and iv{sub t}+(1/2){delta}v=({omega}/2)|x|{sup 2}v+(g|u|{sup p+1}|v|{sup p-1}+g{sub 22}|v|{sup 2p})v, x(set-membership sign)R{sup N}.

  15. Quantum theory of rotational isomerism and Hill equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ugulava, A.; Toklikishvili, Z.; Chkhaidze, S.

    2012-06-15

    The process of rotational isomerism of linear triatomic molecules is described by a potential with two minima of different depths and one barrier between them. The corresponding quantum-mechanical equation is represented in a form that is a special case of the Hill equation. It is shown that the Hill-Schroedinger equation has a Klein quadratic group symmetry which, in turn, contains three invariant subgroups. The presence of these subgroups makes it possible to create a picture of the energy spectrum which depends on a parameter and has many merging and branch points. The parameter-dependent energy spectrum of the Hill-Schroedinger equation, like the Mathieu characteristics, contains branch points to the left and to the right of the demarcation line. However, compared to the Mathieu characteristics, in the Hill-Schroedinger equation spectrum the 'right' points are moved away even further, by a distance that increases with the depth of the shallower well. The asymptotic wave functions of the Hill-Schroedinger equation for energy values near the potential minimum contain two isolated sharp peaks, indicating the possible presence of two stable isomers. At high energy values near the potential maximum, the height of the two peaks decreases, and chaotic oscillations appear between them. This form of the wave functions corresponds to the process of isomerization.

  16. A general formula for Rayleigh-Schroedinger perturbation energy utilizing a power series expansion of the quantum mechanical Hamiltonian

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herbert, J.M.

    1997-02-01

    Perturbation theory has long been utilized by quantum chemists as a method for approximating solutions to the Schroedinger equation. Perturbation treatments represent a system's energy as a power series in which each additional term further corrects the total energy; it is therefore convenient to have an explicit formula for the nth-order energy correction term. If all perturbations are collected into a single Hamiltonian operator, such a closed-form expression for the nth-order energy correction is well known; however, use of a single perturbed Hamiltonian often leads to divergent energy series, while superior convergence behavior is obtained by expanding the perturbed Hamiltonian in a power series. This report presents a closed-form expression for the nth-order energy correction obtained using Rayleigh-Schroedinger perturbation theory and a power series expansion of the Hamiltonian.
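
    For orientation, in the single-perturbation special case H = H_0 + λV that the report generalizes, the familiar low-order Rayleigh-Schroedinger corrections are

```latex
E^{(1)} = \langle \psi_0 | V | \psi_0 \rangle, \qquad
E^{(2)} = \sum_{n \neq 0} \frac{\bigl|\langle \psi_n | V | \psi_0 \rangle\bigr|^{2}}{E_0^{(0)} - E_n^{(0)}};
```

    when the Hamiltonian itself is expanded as H = H_0 + λH_1 + λ²H_2 + ..., each order of the energy instead collects contributions from several of the H_k, which is the case the report's closed-form expression covers.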

  17. Direct perturbation theory for the dark soliton solution to the nonlinear Schroedinger equation with normal dispersion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu Jialu; Yang Chunnuan; Cai Hao

    2007-04-15

    After finding the basic solutions of the linearized nonlinear Schroedinger equation by the method of separation of variables, the perturbation theory for the dark soliton solution is constructed by linear Green's function theory. In application to the self-induced Raman scattering, the adiabatic corrections to the soliton's parameters are obtained and the remaining correction term is given as a pure integral with respect to the continuous spectral parameter.

  18. Scattering transform for nonstationary Schroedinger equation with bidimensionally perturbed N-soliton potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boiti, M.; Pempinelli, F.; Pogrebkov, A. K.

    2006-12-15

    In the framework of the extended resolvent approach the direct and inverse scattering problems for the nonstationary Schroedinger equation with a potential being a perturbation of the N-soliton potential by means of a generic bidimensional smooth function decaying at large spaces are introduced and investigated. The initial value problem of the Kadomtsev-Petviashvili I equation for a solution describing N wave solitons on a generic smooth decaying background is then linearized, giving the time evolution of the spectral data.

  19. A Heuristic Fast Method to Solve the Nonlinear Schroedinger Equation in Fiber Bragg Gratings with Arbitrary Shape Input Pulse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emami, F.; Hatami, M.; Keshavarz, A. R.

    2009-08-13

    Using a combination of the Runge-Kutta and Jacobi iterative methods, we solve the nonlinear Schroedinger equation describing pulse propagation in fiber Bragg gratings (FBGs). By decomposing the electric field into forward and backward components in the fiber Bragg grating and utilizing the Fourier series analysis technique, the boundary value problem for the set of coupled equations governing pulse propagation in the FBG changes to an initial-value problem for coupled equations, which can be solved by a simple Runge-Kutta method.
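
    As a point of reference, the sketch below integrates the stationary linear coupled-mode equations of a uniform grating with a classical RK4 stepper. These equations, the sign conventions, and the parameter names (delta for detuning, kappa for coupling) are the standard textbook ones, not the paper's nonlinear system; shooting on the unknown reflected amplitude then recovers the two-point boundary conditions (forward wave given at z = 0, backward wave vanishing at z = L) of the kind the paper handles with Jacobi iteration.

```python
# RK4 integration of the stationary linear coupled-mode equations of a
# uniform grating:  dAp/dz =  i*(delta*Ap + kappa*Am)
#                   dAm/dz = -i*(delta*Am + kappa*Ap)
import numpy as np

def rhs(y, delta, kappa):
    Ap, Am = y
    return np.array([ 1j*(delta*Ap + kappa*Am),
                     -1j*(delta*Am + kappa*Ap)])

def rk4(y0, z0, z1, n, delta, kappa):
    h = (z1 - z0) / n
    y = np.asarray(y0, dtype=complex)
    for _ in range(n):
        k1 = rhs(y, delta, kappa)
        k2 = rhs(y + 0.5*h*k1, delta, kappa)
        k3 = rhs(y + 0.5*h*k2, delta, kappa)
        k4 = rhs(y + h*k3, delta, kappa)
        y = y + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)
    return y
```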

  20. Hidden Statistics Approach to Quantum Simulations

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2010-01-01

    Recent advances in quantum information theory have inspired an explosion of interest in new quantum algorithms for solving hard computational (quantum and non-quantum) problems. The basic principle of quantum computation is that quantum properties can be used to represent structured data, and that quantum mechanisms can be devised and built to perform operations with these data. Three basic non-classical properties of quantum mechanics (superposition, entanglement, and direct-product decomposability) were the main reasons for optimism about the capabilities of quantum computers, which promised simultaneous processing of large arrays of highly correlated data. Unfortunately, these advantages of quantum mechanics came with a high price. One major problem is keeping the components of the computer in a coherent state, as the slightest interaction with the external world would cause the system to decohere. That is why the hardware implementation of a quantum computer is still unsolved. The basic idea of this work is to create a new kind of dynamical system that would preserve the main three properties of quantum physics (superposition, entanglement, and direct-product decomposability) while allowing one to measure its state variables using classical methods. In other words, such a system would reinforce the advantages and minimize the limitations of both quantum and classical aspects. Based upon a concept of hidden statistics, a new kind of dynamical system for simulation of the Schroedinger equation is proposed. The system represents a modified Madelung version of the Schroedinger equation. It preserves superposition, entanglement, and direct-product decomposability while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for simulating quantum systems. The model includes a transitional component of quantum potential (that has been overlooked in previous treatments of the Madelung equation). The role of the transitional potential is to provide a jump from a deterministic state to a random state with prescribed probability density. This jump is triggered by blowup instability due to violation of the Lipschitz condition generated by the quantum potential. As a result, the dynamics attains quantum properties on a classical scale. The model can be implemented physically as an analog VLSI-based (very-large-scale integration-based) computer, or numerically on a digital computer. This work opens a way of developing fundamentally new algorithms for quantum simulations of exponentially complex problems that expand NASA capabilities in conducting space activities. It has been illustrated that the complexity of simulations of particle interaction can be reduced from exponential to polynomial.
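
    The Madelung form referred to here is obtained by writing the wave function in polar form, which splits the Schroedinger equation into a continuity equation and a Hamilton-Jacobi equation carrying the quantum potential Q. The unmodified textbook version is reproduced below for orientation; the work above adds a transitional component to this potential:

```latex
\psi = \sqrt{\rho}\;e^{iS/\hbar}: \qquad
\frac{\partial \rho}{\partial t} + \nabla\!\cdot\!\Bigl(\rho\,\frac{\nabla S}{m}\Bigr) = 0, \qquad
\frac{\partial S}{\partial t} + \frac{|\nabla S|^{2}}{2m} + U + Q = 0, \qquad
Q = -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2}\sqrt{\rho}}{\sqrt{\rho}}.
```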

  1. Solving the Schroedinger Equation of Atoms and Molecules without Analytical Integration Based on the Free Iterative-Complement-Interaction Wave Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakatsuji, H.; Nakashima, H.; Department of Synthetic Chemistry and Biological Chemistry, Graduate School of Engineering, Kyoto University, Nishikyo-ku, Kyoto 615-8510

    2007-12-14

    A local Schroedinger equation (LSE) method is proposed for solving the Schroedinger equation (SE) of general atoms and molecules without doing analytic integrations over the complement functions of the free ICI (iterative-complement-interaction) wave functions. Since the free ICI wave function is potentially exact, we can assume a flatness of its local energy. The variational principle is not applicable because the analytic integrations over the free ICI complement functions are very difficult for general atoms and molecules. The LSE method is applied to several 2 to 5 electron atoms and molecules, giving an accuracy of 10{sup -5} Hartree in total energy. The potential energy curves of H{sub 2} and LiH molecules are calculated precisely with the free ICI LSE method. The results show the high potentiality of the free ICI LSE method for developing accurate predictive quantum chemistry with the solutions of the SE.

  2. Derivation of an applied nonlinear Schroedinger equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pitts, Todd Alan; Laine, Mark Richard; Schwarz, Jens

    We derive from first principles a mathematical physics model useful for understanding nonlinear optical propagation (including filamentation). All assumptions necessary for the development are clearly explained. We include the Kerr effect, Raman scattering, and ionization (as well as linear and nonlinear shock, diffraction and dispersion). We explain the phenomenological sub-models and each assumption required to arrive at a complete and consistent theoretical description. The development includes the relationship between shock and ionization and demonstrates why inclusion of Drude model impedance effects alters the nature of the shock operator.

  3. Quantum theory and chemistry: Two propositions

    NASA Technical Reports Server (NTRS)

    Aronowitz, S.

    1980-01-01

    Two propositions concerning quantum chemistry are proposed. First, it is proposed that the nonrelativistic Schroedinger equation, where the Hamiltonian operator is associated with an assemblage of nuclei and electrons, can never be arranged to yield specific molecules in the chemists' sense. It is argued that this result is a necessary condition if the Schroedinger equation is to have relevance to chemistry. Second, once a system is in a particular state with regard to interactions among its components (the assemblage of nuclei and electrons), it cannot spontaneously eliminate any of those interactions. This leads to a subtle form of irreversibility.

  4. The 'hard problem' and the quantum physicists. Part 1: the first generation.

    PubMed

    Smith, C U M

    2006-07-01

    All four of the most important figures in the early twentieth-century development of quantum physics (Niels Bohr, Erwin Schroedinger, Werner Heisenberg and Wolfgang Pauli) had strong interests in the traditional mind-brain, or 'hard,' problem. This paper reviews their approach to this problem, showing the influence of Bohr's complementarity thesis, the significance of Schroedinger's small book 'What is life?,' the updated Platonism of Heisenberg and, perhaps most interesting of all, the interaction of Carl Jung and Wolfgang Pauli in the latter's search for a unification of mind and matter.

  5. Some rules for polydimensional squeezing

    NASA Technical Reports Server (NTRS)

    Manko, Vladimir I.

    1994-01-01

    A review of the following results is presented: For mixed-state light of an N-mode electromagnetic field described by a Wigner function of generic Gaussian form, the photon distribution function is obtained and expressed explicitly in terms of Hermite polynomials of 2N variables. The moments of this distribution are calculated and expressed as functions of matrix invariants of the dispersion matrix. The role of a new uncertainty relation depending on the photon-state mixing parameter is elucidated. New sum rules for Hermite polynomials of several variables are found. The photon statistics of polymode even and odd coherent light and squeezed polymode Schroedinger cat light are given explicitly. The photon distribution for polymode squeezed number states, expressed in terms of multivariable Hermite polynomials, is discussed.

  6. Shock Waves in a Bose-Einstein Condensate

    NASA Technical Reports Server (NTRS)

    Kulikov, Igor; Zak, Michail

    2005-01-01

    A paper presents a theoretical study of shock waves in a trapped Bose-Einstein condensate (BEC). The mathematical model of the BEC in this study is a nonlinear Schroedinger equation (NLSE) in which (1) the role of the wave function of a single particle in the traditional Schroedinger equation is played by a space- and time-dependent complex order parameter ψ(x,t) proportional to the square root of the density of atoms and (2) the atoms engage in a repulsive interaction characterized by a potential proportional to |ψ(x,t)|^2. Equations that describe macroscopic perturbations of the BEC at zero temperature are derived from the NLSE and simplifying assumptions are made, leading to equations for the propagation of sound waves and the transformation of sound waves into shock waves. Equations for the speeds of shock waves and the relationships between jumps of velocity and density across shock fronts are derived. Similarities and differences between this theory and the classical theory of sound waves and shocks in ordinary gases are noted. The present theory is illustrated by solving the equations for the example of a shock wave propagating in a cigar-shaped BEC.
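
    The NLSE described here is the Gross-Pitaevskii equation. Its standard form, and the Bogoliubov speed of sound with which small perturbations propagate, are reproduced below for orientation (a is the s-wave scattering length and n the atom density):

```latex
i\hbar\,\frac{\partial \psi}{\partial t} =
\Bigl[-\frac{\hbar^{2}}{2m}\nabla^{2} + V_{\mathrm{ext}}(\mathbf{x}) + g\,|\psi|^{2}\Bigr]\psi,
\qquad g = \frac{4\pi\hbar^{2}a}{m},
\qquad c_{s} = \sqrt{\frac{g\,n}{m}}.
```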

  7. A full vectorial generalized discontinuous Galerkin beam propagation method (GDG-BPM) for nonsmooth electromagnetic fields in waveguides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan Kai; Cai Wei; Ji Xia

    2008-07-20

    In this paper, we propose a new full vectorial generalized discontinuous Galerkin beam propagation method (GDG-BPM) to accurately handle the discontinuities in electromagnetic fields associated with wave propagations in inhomogeneous optical waveguides. The numerical method is a combination of the traditional beam propagation method (BPM) with a newly developed generalized discontinuous Galerkin (GDG) method [K. Fan, W. Cai, X. Ji, A generalized discontinuous Galerkin method (GDG) for Schroedinger equations with nonsmooth solutions, J. Comput. Phys. 227 (2008) 2387-2410]. The GDG method is based on a reformulation, using distributional variables to account for solution jumps across material interfaces, of Schroedinger equations resulting from paraxial approximations of vector Helmholtz equations. Four versions of the GDG-BPM are obtained for either the electric or magnetic field components. Modeling of wave propagations in various optical fibers using the full vectorial GDG-BPM is included. Numerical results validate the high order accuracy and the flexibility of the method for various types of interface jump conditions.

  8. On the heat trace of Schroedinger operators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banuelos, R.; Sa Barreto, A.

    1995-12-31

    Trace formulae for heat kernels of Schroedinger operators have been widely studied in connection with spectral and scattering theory. They have been used to obtain information about a potential from its spectrum, or from its scattering data, and vice versa. Using elementary Fourier transform methods we obtain a formula for the general coefficient in the asymptotic expansion of the trace of the heat kernel of the Schroedinger operator -Δ + V, as t ↓ 0, with V ∈ S(R^n), the class of functions with rapid decay at infinity. In dimension n = 1 a recurrent formula for the general coefficient in the expansion is obtained in [6]. However the KdV methods used there do not seem to generalize to higher dimension. Using the formula of [6] and the symmetry of some integrals, Y. Colin de Verdiere has computed the first four coefficients for potentials in three space dimensions. Also in [1] a different method is used to compute heat coefficients for differential operators on manifolds. 14 refs.
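
    The expansion in question has the leading behavior sketched below, stated under the paper's assumption V ∈ S(R^n); the first two integrated coefficients are the standard ones, while higher coefficients involve derivatives of V:

```latex
\operatorname{Tr}\Bigl(e^{-t(-\Delta+V)} - e^{t\Delta}\Bigr)
\;\sim\; (4\pi t)^{-n/2}\Bigl(-t\!\int_{\mathbb{R}^{n}} V\,dx \;+\; \tfrac{t^{2}}{2}\!\int_{\mathbb{R}^{n}} V^{2}\,dx \;+\; O(t^{3})\Bigr), \qquad t \downarrow 0.
```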

  9. Solving the electron and electron-nuclear Schroedinger equations for the excited states of helium atom with the free iterative-complement-interaction method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakashima, Hiroyuki; Hijikata, Yuh; Nakatsuji, Hiroshi

    2008-04-21

    Very accurate variational calculations with the free iterative-complement-interaction (ICI) method for solving the Schroedinger equation were performed for the 1sNs singlet and triplet excited states of the helium atom up to N=24. This is the first extensive application of the free ICI method to the calculation of excited states up to very high levels. We performed the calculations with the fixed-nucleus Hamiltonian and the moving-nucleus Hamiltonian. The latter case is the Schroedinger equation for the electron-nuclear Hamiltonian and includes the quantum effect of nuclear motion. This solution corresponds to the nonrelativistic limit and reproduced the experimental values to five decimal figures. The small differences from the experimental values are not theoretical errors but represent physical effects not included in the present calculations, such as relativistic effects, quantum electrodynamic effects, and even experimental errors. The present calculations constitute a small step toward accurately predictive quantum chemistry.

  10. General method of solving the Schroedinger equation of atoms and molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakatsuji, Hiroshi

    2005-12-15

    We propose a general method of solving the Schroedinger equation of atoms and molecules. We first construct the wave function having the exact structure, using the ICI (iterative configuration or complement interaction) method, and then optimize the variables involved by the variational principle. Based on the scaled Schroedinger equation and related principles, we can avoid the singularity problem of atoms and molecules and formulate a general method of calculating the exact wave functions in an analytical expansion form. We choose an initial function ψ₀ and a scaling function g, and then the ICI method automatically generates the wave function that has the exact structure by using the Hamiltonian of the system. The Hamiltonian contains all the information of the system. The free ICI method provides a flexible and variationally favorable procedure for constructing the exact wave function. We explain the computational procedure of the analytical ICI method routinely performed in our laboratory. Simple examples are given using the hydrogen atom for the nuclear singularity case, Hooke's atom for the electron singularity case, and the helium atom for both cases.
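
    In its simplest form (a sketch; the scaling functions used in practice are discussed in the paper), the scaled Schroedinger equation and the ICI recursion read

      \[
        g\,(H-E)\,\psi=0,\qquad
        \psi_{n+1}=\bigl[1+C_{n}\,g\,(H-E_{n})\bigr]\psi_{n},\qquad
        E_{n}=\frac{\langle\psi_{n}|H|\psi_{n}\rangle}{\langle\psi_{n}|\psi_{n}\rangle},
      \]

    where g is chosen so that g(H-E) remains finite at the Coulomb singularities; in the free ICI variant, each independent function generated by the recursion receives its own variational coefficient.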

  11. Positive phase space distributions and uncertainty relations

    NASA Technical Reports Server (NTRS)

    Kruger, Jan

    1993-01-01

    In contrast to a widespread belief, Wigner's theorem allows the construction of true joint probabilities in phase space for distributions describing the object system as well as for distributions depending on the measurement apparatus. The fundamental role of Heisenberg's uncertainty relations in Schroedinger form (including correlations) is pointed out for these two possible interpretations of joint probability distributions. Hence, in order that a multivariate normal probability distribution in phase space may correspond to a Wigner distribution of a pure or a mixed state, it is necessary and sufficient that Heisenberg's uncertainty relation in Schroedinger form should be satisfied.
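
    The Schroedinger form of the uncertainty relation referred to here is the correlation-including inequality

      \[
        \sigma_x^{2}\,\sigma_p^{2}-\sigma_{xp}^{2}\;\ge\;\frac{\hbar^{2}}{4},\qquad
        \sigma_{xp}\equiv\tfrac{1}{2}\langle\hat{x}\hat{p}+\hat{p}\hat{x}\rangle-\langle\hat{x}\rangle\langle\hat{p}\rangle,
      \]

    which reduces to the usual Heisenberg relation when the correlation term σ_xp vanishes.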

  12. Exponential Methods for the Time Integration of Schroedinger Equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cano, B.; Gonzalez-Pachon, A.

    2010-09-30

    We consider exponential methods of second order in time for integrating the cubic nonlinear Schroedinger equation. We are interested in taking advantage of the special structure of this equation, so we examine the symmetry, symplecticity, and invariant-approximation properties of the proposed methods, which allow integration over long times with reasonable accuracy. Computational efficiency is also our aim; we therefore perform numerical computations to compare the methods considered, and conclude that explicit Lawson schemes projected on the norm of the solution are an efficient tool for integrating this equation.
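
    A minimal sketch of one such integrator, assuming the focusing cubic NLS in the normalization i u_t = -u_xx - |u|^2 u on a periodic grid: a second-order Lawson (exponential midpoint) step followed by projection on the discrete L2 norm. The names and normalization are illustrative, not taken from the paper.

      import numpy as np

      def nls_lawson_step(u, dt, k):
          # One Lawson-RK2 (exponential midpoint) step for
          # i u_t = -u_xx - |u|^2 u, then rescaling to conserve the norm.
          E = np.exp(-1j * k**2 * dt / 2)          # half-step linear propagator
          N = lambda v: 1j * np.abs(v)**2 * v      # nonlinear term (physical space)
          norm0 = np.linalg.norm(u)
          uh = np.fft.fft(u)
          mid = np.fft.ifft(E * (uh + 0.5 * dt * np.fft.fft(N(u))))
          unew = np.fft.ifft(E * E * uh + dt * E * np.fft.fft(N(mid)))
          return unew * (norm0 / np.linalg.norm(unew))   # norm projection

      # usage: evolve a sech-shaped pulse on a periodic box
      x = np.linspace(-20, 20, 512, endpoint=False)
      k = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
      u = 1 / np.cosh(x)
      for _ in range(1000):
          u = nls_lawson_step(u, dt=1e-3, k=k)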

  13. A new perspective on Quantum Finance using the Black-Scholes pricing model

    NASA Astrophysics Data System (ADS)

    Dieng, Lamine

    2007-03-01

    Options are divided into two types, calls and puts, which are offered to stock holders in order to hedge their positions against risky fluctuations of the stock price. Due to these fluctuations, an option can at times be deep in the money, at the money, or out of the money: an option is deep in the money when its holder has a positive expected payoff, at the money when the expected payoff is zero, and out of the money when the payoff is negative. In this work, we assume the stock price to be described by the well known Black-Scholes model, sometimes called the multiplicative model. Using Ito calculus and martingale and supermartingale theory, we investigated the Black-Scholes pricing equation at the money (stock price X = strike price K), where the expected payoff of the option holder is zero. We also hedged the Black-Scholes pricing equation in the limit of vanishing delta to obtain the nonrelativistic time-independent Schroedinger equation of quantum mechanics. Comparing the two equations, we found the diffusion constant to be a function of the stock price, in contrast to the Bachelier model we worked on earlier. We solved the Schroedinger equation and found a dependence between the interest rate, the volatility, and the strike price at the money.
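
    For context, the pricing equation in question is the Black-Scholes PDE

      \[
        \frac{\partial C}{\partial t}+\frac{1}{2}\sigma^{2}S^{2}\frac{\partial^{2}C}{\partial S^{2}}+rS\frac{\partial C}{\partial S}-rC=0,
      \]

    and one standard (hedged) way to read the correspondence claimed above is that the substitution S = e^y, together with a Wick-type rotation of the time variable, brings this multiplicative-diffusion equation into a Schroedinger-like form; the paper's own reduction proceeds through the at-the-money and vanishing-delta limits.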

  14. Observation of Quasi-Two-Dimensional Nonlinear Interactions in a Drift-Wave Streamer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamada, T.; Nagashima, Y.; Itoh, S.-I.

    2010-11-26

    A streamer, which is a bunching of drift-wave fluctuations, and its mediator, which generates the streamer by coupling with other fluctuations, have been observed in a cylindrical magnetized plasma. Their radial structures were investigated in detail by using the biphase analysis. Their quasi-two-dimensional structures were revealed to be equivalent to a pair of fast and slow modes predicted by a nonlinear Schroedinger equation based on the Hasegawa-Mima model.

  15. Some Exact Results for the Schroedinger Wave Equation with a Time Dependent Potential

    NASA Technical Reports Server (NTRS)

    Campbell, Joel

    2009-01-01

    The time dependent Schroedinger equation with a time dependent delta function potential is solved exactly for many special cases. In all other cases the problem can be reduced to an integral equation of the Volterra type. It is shown that by knowing the wave function at the origin, one may derive the wave function everywhere. Thus, the problem is reduced from a PDE in two variables to an integral equation in one. These results are used to compare adiabatic versus sudden changes in the potential. It is shown that adiabatic changes in the potential lead to conservation of the normalization of the probability density.
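
    The reduction described can be sketched via Duhamel's principle: for a potential λ(t)δ(x), and with notation assumed here,

      \[
        \psi(x,t)=\psi_{0}(x,t)-\frac{i}{\hbar}\int_{0}^{t}G_{0}(x,t;0,t')\,\lambda(t')\,\psi(0,t')\,dt',
      \]

    where ψ₀ is the freely evolved initial state and G₀ the free propagator; setting x = 0 gives a Volterra integral equation of the second kind for ψ(0,t), and once ψ(0,t) is known the same formula reconstructs ψ everywhere.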

  16. Analysis of a semiclassical model for rotational transition probabilities. [in highly nonequilibrium flow of diatomic molecules

    NASA Technical Reports Server (NTRS)

    Deiwert, G. S.; Yoshikawa, K. K.

    1975-01-01

    A semiclassical model proposed by Pearson and Hansen (1974) for computing collision-induced transition probabilities in diatomic molecules is tested by the direct-simulation Monte Carlo method. Specifically, this model is described by point centers of repulsion for collision dynamics, and the resulting classical trajectories are used in conjunction with the Schroedinger equation for a rigid-rotator harmonic oscillator to compute the rotational energy transition probabilities necessary to evaluate the rotation-translation exchange phenomena. It is assumed that a single, average energy spacing exists between the initial state and possible final states for a given collision.

  17. Conformal mapping and bound states in bent waveguides

    NASA Astrophysics Data System (ADS)

    Sadurní, E.; Schleich, W. P.

    2010-12-01

    Is it possible to trap a quantum particle in an open geometry? In this work we deal with the boundary value problem of the stationary Schroedinger (or Helmholtz) equation within a waveguide with straight segments and a rectangular bending. The problem can be reduced to a one-dimensional matrix Schroedinger equation using two descriptions: oblique modes and conformal coordinates. We use a corner-corrected WKB formalism to find the energies of the one-dimensional problem. It is shown that the presence of bound states is an effect due to the boundary alone, with no classical counterpart for this geometry. The conformal description proves to be simpler, as the coupling of transversal modes is not essential in this case.

  18. Rogue waves in terms of multi-point statistics and nonequilibrium thermodynamics

    NASA Astrophysics Data System (ADS)

    Hadjihosseini, Ali; Lind, Pedro; Mori, Nobuhito; Hoffmann, Norbert P.; Peinke, Joachim

    2017-04-01

    Ocean waves that lead to rogue waves are investigated against the background of complex systems. In contrast to deterministic approaches based on the nonlinear Schroedinger equation or focusing effects, we analyze this system in terms of a noisy stochastic system. In particular we present a statistical method that maps the complexity of multi-point data into the statistics of hierarchically ordered height increments for different time scales. We show that the stochastic cascade process with Markov properties is governed by a Fokker-Planck equation. Conditional probabilities, as well as the Fokker-Planck equation itself, can be estimated directly from the available observational data. This stochastic description enables us to show several new aspects of wave states. Surrogate data sets can in turn be generated, allowing us to work out different statistical features of the complex sea state in general and of extreme rogue wave events in particular. The results also open up new perspectives for forecasting the occurrence probability of extreme rogue wave events, and even for forecasting the occurrence of individual rogue waves based on precursory dynamics. As a new outlook, ocean wave states will be considered in terms of nonequilibrium thermodynamics, for which the entropy production of different wave heights will be considered. We show evidence that rogue waves are characterized by negative entropy production. The statistics of the entropy production can be used to distinguish different wave states.
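
    To make the estimation step concrete, here is a minimal sketch of drift and diffusion (Kramers-Moyal) estimation from a single scalar time series, the generic version of the data-driven Fokker-Planck construction described above; the binning scheme, thresholds, and all names are illustrative rather than taken from the paper.

      import numpy as np

      def kramers_moyal(x, dt, nbins=50):
          # Estimate drift D1 and diffusion D2 of a Markov process from a
          # time series x sampled at interval dt, via conditional moments
          # of the increments within bins of the state variable.
          inc = x[1:] - x[:-1]
          bins = np.linspace(x.min(), x.max(), nbins + 1)
          idx = np.digitize(x[:-1], bins) - 1
          centers = 0.5 * (bins[:-1] + bins[1:])
          D1 = np.full(nbins, np.nan)
          D2 = np.full(nbins, np.nan)
          for b in range(nbins):
              sel = inc[idx == b]
              if sel.size > 10:              # require enough samples per bin
                  D1[b] = sel.mean() / dt
                  D2[b] = (sel**2).mean() / (2 * dt)
          return centers, D1, D2

      # usage: Ornstein-Uhlenbeck test data, dx = -x dt + sqrt(2) dW
      rng = np.random.default_rng(0)
      dt, n = 1e-3, 100_000
      x = np.empty(n); x[0] = 0.0
      for i in range(n - 1):
          x[i + 1] = x[i] - x[i] * dt + np.sqrt(2 * dt) * rng.standard_normal()
      centers, D1, D2 = kramers_moyal(x, dt)   # expect D1 ~ -x, D2 ~ 1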

  19. Solving the Schroedinger equation for helium atom and its isoelectronic ions with the free iterative complement interaction (ICI) method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakashima, Hiroyuki; Nakatsuji, Hiroshi

    2007-12-14

    The Schroedinger equation was solved very accurately for the helium atom and its isoelectronic ions (Z=1-10) with the free iterative complement interaction (ICI) method followed by the variational principle. We obtained highly accurate wave functions and energies of the helium atom and its isoelectronic ions. For helium, the calculated energy was -2.903 724 377 034 119 598 311 159 245 194 404 446 696 905 37 a.u., correct to over 40 digits, and for H⁻ it was -0.527 751 016 544 377 196 590 814 566 747 511 383 045 02 a.u. These results prove numerically that with the free ICI method we can calculate the solutions of the Schroedinger equation as accurately as one desires. We examined several types of scaling function g and initial function ψ₀ of the free ICI method. The performance was good when logarithm functions were used in the initial function, because the logarithm function is physically essential in the three-particle collision region. The best performance was obtained when we introduced a new logarithm function containing not only r₁ and r₂ but also r₁₂ in the same logarithm function.

  20. Ab initio calculation of proton-coupled electron transfer rates using the external-potential representation: A ubiquinol complex in solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamamoto, Takeshi; Kato, Shigeki

    2007-06-14

    In quantum-mechanical/molecular-mechanical (QM/MM) treatment of chemical reactions in condensed phases, one solves the electronic Schroedinger equation for the solute (or an active site) under the electrostatic field from the environment. This Schroedinger equation depends parametrically on the solute nuclear coordinates R and the external electrostatic potential V. This fact suggests that one may use R and V as natural collective coordinates for describing the entire system, where V plays the role of collective solvent variables. In this paper such an (R,V) representation of the QM/MM canonical ensemble is described, with particular focus on how to treat charge transfer processes in this representation. As an example, the above method is applied to the proton-coupled electron transfer of a ubiquinol analog with phenoxyl radical in acetonitrile solvent. Ab initio free-energy surfaces are calculated as functions of R and V using the reference interaction site model self-consistent field method, the equilibrium points and the minimum free-energy crossing point are located in the (R,V) space, and then the kinetic isotope effects (KIEs) are evaluated approximately. The results suggest that a stiffer proton potential at the transition state may be responsible for unusual KIEs observed experimentally for related systems.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubrovsky, V. G.; Topovsky, A. V.

    New exact solutions, nonstationary and stationary, of the Veselov-Novikov (VN) equation in the forms of simple nonlinear and linear superpositions of an arbitrary number N of exact special solutions u^(n), n = 1, …, N, are constructed via the Zakharov-Manakov ∂̄-dressing method. Simple nonlinear superpositions are represented up to a constant by the sums of solutions u^(n) and calculated by ∂̄-dressing on a nonzero energy level of the first auxiliary linear problem, i.e., the 2D stationary Schroedinger equation. It is remarkable that in the zero-energy limit simple nonlinear superpositions convert to linear ones in the form of the sums of special solutions u^(n). It is shown that the sums u = u^(k₁) + … + u^(k_m), 1 ≤ k₁ < k₂ < … < k_m ≤ N, of arbitrary subsets of these solutions are also exact solutions of the VN equation. The presented exact solutions include superpositions of special line solitons and also superpositions of plane-wave-type singular periodic solutions. By construction these exact solutions represent new exact transparent potentials of the 2D stationary Schroedinger equation and can serve as model potentials for electrons in planar structures of modern electronics.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jentschura, Ulrich D.; National Institute of Standards and Technology, Gaithersburg, Maryland 20899-8401; Mohr, Peter J.

    We describe the calculation of hydrogenic (one-loop) Bethe logarithms for all states with principal quantum numbers n ≤ 200. While, in principle, the calculation of the Bethe logarithm is a rather easy computational problem involving only the nonrelativistic (Schroedinger) theory of the hydrogen atom, certain calculational difficulties affect highly excited states, and in particular states for which the principal quantum number is much larger than the orbital angular momentum quantum number. Two evaluation methods are contrasted. One of these is based on the calculation of the principal value of a specific integral over a virtual photon energy. The other method relies directly on the spectral representation of the Schroedinger-Coulomb propagator. Selected numerical results are presented. The full set of values is available at arXiv.org/quant-ph/0504002.

  3. A comparative study of Laplacians and Schroedinger- Lichnerowicz-Weitzenboeck identities in Riemannian and antisymplectic geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Batalin, Igor A.; I.E. Tamm Theory Division, P.N. Lebedev Physics Institute, Russian Academy of Sciences, 53 Leninsky Prospect, Moscow 119991; Bering, Klaus

    2009-07-15

    We introduce an antisymplectic Dirac operator and antisymplectic gamma matrices. We explore similarities between, on one hand, the Schroedinger-Lichnerowicz formula for spinor bundles in Riemannian spin geometry, which contains a zeroth-order term proportional to the Levi-Civita scalar curvature, and, on the other hand, the nilpotent, Grassmann-odd, second-order Δ operator in antisymplectic geometry, which, in general, has a zeroth-order term proportional to the odd scalar curvature of an arbitrary antisymplectic and torsion-free connection that is compatible with the measure density. Finally, we discuss the close relationship with the two-loop scalar curvature term in the quantum Hamiltonian for a particle in a curved Riemannian space.

  4. Fractional analysis for nonlinear electrical transmission line and nonlinear Schroedinger equations with incomplete sub-equation

    NASA Astrophysics Data System (ADS)

    Fendzi-Donfack, Emmanuel; Nguenang, Jean Pierre; Nana, Laurent

    2018-02-01

    We use the fractional complex transform with the modified Riemann-Liouville derivative operator to establish exact and generalized solutions of two fractional partial differential equations. We determine the solutions of fractional nonlinear electrical transmission lines (NETL) and of the perturbed nonlinear Schroedinger (NLS) equation with the Kerr-law nonlinearity term. The solutions are obtained for derivative-operator parameters in the range 0 < α ≤ 1, and we recover the traditional solutions in the limiting case α = 1. We show that, according to the modified Riemann-Liouville derivative, the solutions found can describe physical systems with memory effects, transient effects in electrical systems and nonlinear transmission lines, and other systems such as optical fibers.

  5. Quantum spatial propagation of squeezed light in a degenerate parametric amplifier

    NASA Technical Reports Server (NTRS)

    Deutsch, Ivan H.; Garrison, John C.

    1992-01-01

    Differential equations which describe the steady-state spatial evolution of nonclassical light are established using standard quantum field theoretic techniques. A Schroedinger equation for the state vector of the optical field is derived using the quantum analog of the slowly varying envelope approximation (SVEA). The steady-state solutions are those that satisfy the time-independent Schroedinger equation. The resulting eigenvalue problem then leads to the spatial propagation equations. For the degenerate parametric amplifier this method shows that the squeezing parameters obey nonlinear differential equations coupled by the amplifier gain and phase mismatch. The solution to these differential equations is equivalent to one obtained from the classical three-wave-mixing steady-state solution of the parametric amplifier with a nondepleted pump.

  6. Numerical analysis of spectral properties of coupled oscillator Schroedinger operators. I - Single and double well anharmonic oscillators

    NASA Technical Reports Server (NTRS)

    Isaacson, D.; Isaacson, E. L.; Paes-Leme, P. J.; Marchesin, D.

    1981-01-01

    Several methods for computing many eigenvalues and eigenfunctions of a single anharmonic oscillator Schroedinger operator whose potential may have one or two minima are described. One of the methods requires the solution of an ill-conditioned generalized eigenvalue problem. This method has the virtue of using a bounded amount of work to achieve a given accuracy in both the single and double well regions. Rigorous bounds are given, and it is proved that the approximations converge faster than any inverse power of the size of the matrices needed to compute them. The results of computations for the g:phi(4):1 theory are presented. These results indicate that the methods actually converge exponentially fast.

  7. Symbolic derivation of high-order Rayleigh-Schroedinger perturbation energies using computer algebra: Application to vibrational-rotational analysis of diatomic molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herbert, John M.

    1997-01-01

    Rayleigh-Schroedinger perturbation theory is an effective and popular tool for describing low-lying vibrational and rotational states of molecules. This method, in conjunction with ab initio techniques for computation of electronic potential energy surfaces, can be used to calculate first-principles molecular vibrational-rotational energies to successive orders of approximation. Because of mathematical complexities, however, such perturbation calculations are rarely extended beyond the second order of approximation, although recent work by Herbert has provided a formula for the nth-order energy correction. This report extends that work and furnishes the remaining theoretical details (including a general formula for the Rayleigh-Schroedinger expansion coefficients) necessary for calculation of energy corrections to arbitrary order. The commercial computer algebra software Mathematica is employed to perform the prohibitively tedious symbolic manipulations necessary for derivation of generalized energy formulae in terms of universal constants, molecular constants, and quantum numbers. As a pedagogical example, a Hamiltonian operator tailored specifically to diatomic molecules is derived, and the perturbation formulae obtained from this Hamiltonian are evaluated for a number of such molecules. This work provides a foundation for future analyses of polyatomic molecules, since it demonstrates that arbitrary-order perturbation theory can successfully be applied with the aid of commercially available computer algebra software.

  8. Lattice Simulations in MOM v.s. Schroedinger Functional Scheme and Triality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furui, Sadataka

    The QCD beta function extracted from polarized electron-proton scattering data obtained at JLab and from lattice simulations in the MOM scheme suggests that the critical flavor number for the presence of an IR fixed point is about three. In analyses in the Schroedinger functional scheme, however, the critical flavor number for the presence of an IR fixed point and conformality is larger than nine. In the QCD analysis, when quarks are expressed in a quaternion basis, products of quaternions are expressed by octonions, and the octonion possesses the triality symmetry. Since triality has the effect of multiplying the flavor number, it could explain the apparently large critical flavor number in the Schroedinger functional scheme. In this scheme, larger degrees of freedom in adjusting data of different scales on the boundary are necessary than in the MOM scheme. In weak interactions, there is no clear lepton-flavor violation except in neutrino oscillation. If triality is assigned to the lepton flavors (e, μ and τ) and assumed to be an exact symmetry, or if the electromagnetic interaction preserves triality but the strong interaction is triality blind, there is a possibility of explaining neutrino oscillation through triality mixing of the matter field. The self-energy of gluons, ghosts and gauge bosons due to self-dual gauge fields, and leptonic decays of B, D and D_s mesons, are discussed.

  9. Absorbing boundaries in numerical solutions of the time-dependent Schroedinger equation on a grid using exterior complex scaling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, F.; Ruiz, C.; Becker, A.

    We study the suppression of reflections in the numerical simulation of the time-dependent Schroedinger equation for strong-field problems on a grid using exterior complex scaling (ECS) as an absorbing boundary condition. It is shown that the ECS method can be applied in both the length and the velocity gauge as long as appropriate approximations are applied in the ECS transformation of the electron-field coupling. It is found that the ECS method improves the suppression of reflection as compared to the conventional masking function technique in typical simulations of atoms exposed to an intense laser pulse. Finally, we demonstrate the advantage of the ECS technique in avoiding unphysical artifacts in the evaluation of high harmonic spectra.

  10. Analytical solutions of the Schroedinger equation for a two-dimensional exciton in magnetic field of arbitrary strength

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoang-Do, Ngoc-Tram; Hoang, Van-Hung; Le, Van-Hoang

    2013-05-15

    The Feranchuk-Komarov operator method is developed by combining it with the Levi-Civita transformation in order to construct analytical solutions of the Schroedinger equation for a two-dimensional exciton in a uniform magnetic field of arbitrary strength. As a result, analytical expressions for the energy of the ground and excited states are obtained with a very high precision of up to four decimal places. Notably, the precision is uniformly stable for the whole range of the magnetic field. This advantage appears due to the consideration of the asymptotic behaviour of the wave functions in a strong magnetic field. The results could be used for various physical analyses, and the method used here could also be applied to other atomic systems.

  11. Formation of quasiparallel Alfven solitons

    NASA Technical Reports Server (NTRS)

    Hamilton, R. L.; Kennel, C. F.; Mjolhus, E.

    1992-01-01

    The formation of quasi-parallel Alfven solitons is investigated through the inverse scattering transformation (IST) for the derivative nonlinear Schroedinger (DNLS) equation. The DNLS has a rich complement of soliton solutions consisting of a two-parameter soliton family and a one-parameter bright/dark soliton family. In this paper, the physical roles and origins of these soliton families are inferred through an analytic study of the scattering data generated by the IST for a set of initial profiles. The DNLS equation has as limiting forms the nonlinear Schroedinger (NLS), Korteweg-de-Vries (KdV) and modified Korteweg-de-Vries (MKdV) equations. Each of these limits is briefly reviewed in the physical context of quasi-parallel Alfven waves. The existence of these limiting forms serves as a natural framework for discussing the formation of Alfven solitons.

  12. Question 1: origin of life and the living state.

    PubMed

    Kauffman, Stuart

    2007-10-01

    The aim of this article is to discuss four topics. First, the origin of molecular reproduction. Second, the origin of agency - the capacity of a system to act on its own behalf; agency is a stunning feature of human life and of some wider range of life. Third, a still poorly articulated feature of life noticed by the philosopher Immanuel Kant over 200 years ago: a self-propagating organization of process; we have no theory for this aspect of life, yet it is central to life. Fourth, constraints, as in Schroedinger's aperiodic crystal (Schroedinger E, What is life? The physical aspect of the living cell, 1944), as information, part of the total non-equilibrium union of matter, energy, work, work cycles, constraints, and information that appear to comprise the living state.

  13. Linear and nonlinear propagation of water wave groups

    NASA Technical Reports Server (NTRS)

    Pierson, W. J., Jr.; Donelan, M. A.; Hui, W. H.

    1992-01-01

    Results are presented from a study of the evolution of waveforms with known analytical group shapes, in the form of both transient wave groups and the cnoidal (cn) and dnoidal (dn) wave trains as derived from the nonlinear Schroedinger equation. The waveforms were generated in a long wind-wave tank of the Canada Centre for Inland Waters. It was found that the low-amplitude transients behaved as predicted by the linear theory and that the cn and dn wave trains of moderate steepness behaved almost as predicted by the nonlinear Schroedinger equation. Some of the results did not fit into any of the available theories for waves on water, but they provide important insight on how actual groups of waves propagate and on higher-order effects for a transient waveform.

  14. Physical realization of quantum teleportation for a nonmaximal entangled state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanaka, Yoshiharu; Asano, Masanari; Ohya, Masanori

    2010-08-15

    Recently, Kossakowski and Ohya (K-O) proposed a new teleportation scheme which enables perfect teleportation even for a nonmaximal entangled state [A. Kossakowski and M. Ohya, Infinite Dimensional Analysis Quantum Probability and Related Topics 10, 411 (2007)]. To discuss a physical realization of the K-O scheme, we propose a model based on quantum optics. In our model, we take a superposition of Schroedinger's cat states as the input state sent from Alice to Bob, and their entangled state is generated by a photon number state through a beam splitter. When the average photon number for our input states is equal to half the number of photons into the beam splitter, our model has high fidelity.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shore, B.W.; Knight, P.L.

    The Jaynes-Cummings Model (JCM), a soluble fully quantum mechanical model of an atom in a field, was first used (in 1963) to examine the classical aspects of spontaneous emission and to reveal the existence of Rabi oscillations in atomic excitation probability for fields with sharply defined energy (or photon number). For fields having a statistical distribution of photon numbers the oscillations collapse to an expected steady value. In 1980 it was discovered that with appropriate initial conditions (e.g., a near-classical field), the Rabi oscillations would eventually revive - only to collapse and revive repeatedly in a complicated pattern. The existence of these revivals, present in the analytic solutions of the JCM, provided direct evidence for the discreteness of field excitation (photons) and hence for the truly quantum nature of radiation. Subsequent study revealed further nonclassical properties of the JCM field, such as a tendency of the photons to antibunch. Within the last two years it has been found that during the quiescent intervals of collapsed Rabi oscillations the atom and field exist in a macroscopic superposition state (a Schroedinger cat). This discovery offers the opportunity to use the JCM to elucidate the basic properties of quantum correlation (entanglement) and to explore still further the relationship between classical and quantum physics. In tribute to E. D. Jaynes, who first recognized the importance of the JCM for clarifying the differences and similarities between quantum and classical physics, we here present an overview of the theory of the JCM and some of the many remarkable discoveries about it.

  16. Mathematical nonlinear optics

    NASA Astrophysics Data System (ADS)

    McLaughlin, David W.

    1995-08-01

    The principal investigator, together with post-doctoral fellows Tetsuji Ueda and Xiao Wang, several graduate students, and colleagues, applied the modern mathematical theory of nonlinear waves to problems in nonlinear optics and to equations directly relevant to nonlinear optics. Projects included the interaction of laser light with nematic liquid crystals and chaotic, homoclinic, small-dispersive, and random behavior of solutions of the nonlinear Schroedinger equation. In project 1, the extremely strong nonlinear response of a continuous wave laser beam in a nematic liquid crystal medium produced striking undulation and filamentation of the laser beam, which was observed experimentally and explained theoretically. In project 2, qualitative properties of the nonlinear Schroedinger equation (which is the fundamental equation for nonlinear optics) were identified and studied. These properties include optical shocking behavior in the limit of very small dispersion, chaotic and homoclinic behavior in discretizations of the partial differential equation, and random behavior.

  17. Operator based integration of information in multimodal radiological search mission with applications to anomaly detection

    NASA Astrophysics Data System (ADS)

    Benedetto, J.; Cloninger, A.; Czaja, W.; Doster, T.; Kochersberger, K.; Manning, B.; McCullough, T.; McLane, M.

    2014-05-01

    Successful performance of a radiological search mission depends on effective utilization of a mixture of signals. Examples of modalities include EO imagery and gamma radiation data, or radiation data collected during multiple events. In addition, elevation data or spatial proximity can be used to enhance the performance of acquisition systems. State-of-the-art techniques in processing and exploitation of complex information manifolds rely on diffusion operators. Our approach involves machine learning techniques based on analysis of joint data-dependent graphs and their associated diffusion kernels. The significant eigenvectors of the derived fused graph Laplace and Schroedinger operators then form the new representation, which provides integrated features from the heterogeneous input data. The families of data-dependent Laplace and Schroedinger operators on joint data graphs are integrated by means of appropriately designed fusion metrics. These fused representations are used for target and anomaly detection.
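
    As a compact sketch of this type of operator construction, the following assumes a Gaussian-kernel affinity graph and a diagonal, barrier-type potential encoding side information; the function name, the potential convention, and all parameters are illustrative, not the paper's fused design.

      import numpy as np
      from scipy.linalg import eigh
      from scipy.spatial.distance import cdist

      def schroedinger_eigenmaps(X, V, alpha=1.0, sigma=1.0, dim=2):
          # Eigenvectors of a graph Schroedinger operator E = L + alpha*V,
          # where L = D - W is the graph Laplacian of a Gaussian-kernel
          # graph and V is a nonnegative diagonal potential.
          W = np.exp(-cdist(X, X, 'sqeuclidean') / (2 * sigma**2))
          np.fill_diagonal(W, 0.0)
          L = np.diag(W.sum(axis=1)) - W
          E = L + alpha * np.diag(V)
          vals, vecs = eigh(E)
          return vecs[:, :dim]        # coordinates from the lowest modes

      # usage: 100 points; the potential penalizes all but 10 "known" points
      rng = np.random.default_rng(1)
      X = rng.normal(size=(100, 5))
      V = np.ones(100); V[:10] = 0.0
      Y = schroedinger_eigenmaps(X, V, alpha=10.0)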

  18. Degenerate RS perturbation theory. [Rayleigh-Schroedinger energies and wave functions

    NASA Technical Reports Server (NTRS)

    Hirschfelder, J. O.; Certain, P. R.

    1974-01-01

    A concise, systematic procedure is given for determining the Rayleigh-Schroedinger energies and wave functions of degenerate states to arbitrarily high orders even when the degeneracies of the various states are resolved in arbitrary orders. The procedure is expressed in terms of an iterative cycle in which the energy through the (2n + 1)-th order is expressed in terms of the partially determined wave function through the n-th order. Both a direct and an operator derivation are given. The two approaches are equivalent and can be transcribed into each other. The direct approach deals with the wave functions (without the use of formal operators) and has the advantage that it resembles the usual treatment of nondegenerate perturbations and maintains close contact with the basic physics. In the operator approach, the wave functions are expressed in terms of infinite-order operators which are determined by the successive resolution of the space of the zeroth-order functions.
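
    For the nondegenerate case, the iterative cycle described above reduces at low order to the familiar formulas (intermediate normalization assumed):

      \[
        E^{(1)}=\langle\psi^{(0)}|V|\psi^{(0)}\rangle,\qquad
        E^{(2)}=\langle\psi^{(0)}|V|\psi^{(1)}\rangle,\qquad
        E^{(3)}=\langle\psi^{(1)}|V-E^{(1)}|\psi^{(1)}\rangle,
      \]

    which illustrates the (2n+1) rule at n = 1: the third-order energy requires only the first-order wave function.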

  19. Experimental demonstration of a quantum annealing algorithm for the traveling salesman problem in a nuclear-magnetic-resonance quantum simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Hongwei; High Magnetic Field Laboratory, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031; Kong Xi

    The method of quantum annealing (QA) is a promising way of solving many optimization problems in both classical and quantum information theory. The main advantage of this approach, compared with the gate model, is the robustness of the operations against errors originating from both external controls and the environment. In this work, we succeed in demonstrating experimentally an application of the method of QA to a simplified version of the traveling salesman problem by simulating the corresponding Schroedinger evolution with an NMR quantum simulator. The experimental results unambiguously yielded the optimal traveling route, in good agreement with the theoretical prediction.

  20. Albert Einstein and the Quantum Riddle

    ERIC Educational Resources Information Center

    Lande, Alfred

    1974-01-01

    Derives a systematic structure contributing to the solution of the quantum riddle in Einstein's sense by deducing quantum mechanics from the postulates of symmetry, correspondence, and covariance. Indicates that the systematic presentation is in agreement with quantum mechanics established by Schroedinger, Born, and Heisenberg. (CC)

  1. Chemical application of diffusion quantum Monte Carlo

    NASA Technical Reports Server (NTRS)

    Reynolds, P. J.; Lester, W. A., Jr.

    1984-01-01

    The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. This approach is receiving increasing attention in chemical applications as a result of its high accuracy. However, reducing statistical uncertainty remains a priority because chemical effects are often obtained as small differences of large numbers. As an example, the singlet-triplet splitting of the energy of the methylene molecule CH₂ is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on the VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speeds of the codes relative to one another as a function of C, and relative to the VAX, are discussed. The computational time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and from traditional computer architectures.
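
    As a bare-bones illustration of the underlying algorithm (not the production code discussed above), the following sketch applies diffusion QMC without importance sampling to the 1D harmonic oscillator, using a fixed walker population with stochastic reconfiguration; all names and the feedback constant are illustrative. The exact ground-state energy is 0.5 in these units.

      import numpy as np

      def dmc_harmonic(nwalk=4000, nstep=4000, dt=0.01, seed=0):
          # Diffusion Monte Carlo for H = -1/2 d^2/dx^2 + x^2/2 (hbar = m = 1).
          rng = np.random.default_rng(seed)
          x = rng.normal(size=nwalk)
          e_ref, energies = 0.5, []
          for _ in range(nstep):
              x = x + np.sqrt(dt) * rng.standard_normal(nwalk)  # free diffusion
              w = np.exp(-(0.5 * x**2 - e_ref) * dt)            # branching weights
              x = rng.choice(x, size=nwalk, p=w / w.sum())      # resample, fixed N
              e_growth = e_ref - np.log(w.mean()) / dt          # growth estimator
              e_ref = 0.9 * e_ref + 0.1 * e_growth              # gentle feedback
              energies.append(e_growth)
          return float(np.mean(energies[nstep // 2:]))          # discard burn-in

      print(dmc_harmonic())   # ~0.5 up to statistical and time-step error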

  2. The backward phase flow and FBI-transform-based Eulerian Gaussian beams for the Schroedinger equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leung Shingyu, E-mail: masyleung@ust.h; Qian Jianliang, E-mail: qian@math.msu.ed

    2010-11-20

    We propose the backward phase flow method to implement the Fourier-Bros-Iagolnitzer (FBI)-transform-based Eulerian Gaussian beam method for solving the Schroedinger equation in the semi-classical regime, building on earlier work in which the idea of Eulerian Gaussian beams was first proposed. In this paper we aim at two crucial computational issues of the Eulerian Gaussian beam method: how to carry out long-time beam propagation and how to compute beam ingredients rapidly in phase space. By virtue of the FBI transform, we address the first issue by introducing the reinitialization strategy into the Eulerian Gaussian beam framework. Essentially we reinitialize beam propagation by applying the FBI transform to wavefields at intermediate time steps when the beams become too wide. To address the second issue, inspired by the original phase flow method, we propose the backward phase flow method, which allows us to compute beam ingredients rapidly. Numerical examples demonstrate the efficiency and accuracy of the proposed algorithms.

  3. Quantum Black Hole Model and HAWKING’S Radiation

    NASA Astrophysics Data System (ADS)

    Berezin, Victor

    The black hole model with a self-gravitating charged spherically symmetric dust thin shell as a source is considered. The Schroedinger-type equation for such a model is derived and turns out to be a finite-difference equation. A theory of such an equation is developed, and its general solution is found and investigated in detail. The discrete spectrum of bound-state energy levels is obtained, and all the eigenvalues turn out to be infinitely degenerate. The ground-state wave functions are evaluated explicitly. The quantum black hole states are selected and investigated. It is shown that the obtained black hole mass spectrum is compatible with the existence of Hawking's radiation in the limit of low temperatures, both for large and for nearly extreme Reissner-Nordstrom black holes. The above-mentioned infinite degeneracy of the mass (energy) eigenvalues may prove helpful in resolving the well-known information paradox in black hole physics.

  4. Mathematical Tools for Image Reconstruction

    DTIC Science & Technology

    1991-07-01

    1. Diffuse tomography. 2. Concentrating a signal in the physical and spectral domains. 3. New explicit solutions for the Kadomtsev-Petviashvili equation. 4. ... In the case of the Schroedinger equation it was possible to "beat Heisenberg" with piecewise linear potentials. Finally let me say that the paper Some ...

  5. Dark and grey compressional dispersive Alfven solitons in plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shukla, P. K.; Eliasson, B.; Stenflo, L.

    2011-06-15

    The amplitude modulation of compressional dispersive Alfven (CDA) waves in a low-β plasma is considered. It is shown that the dynamics of modulated CDA waves is governed by a cubic nonlinear Schroedinger equation, which depicts the formation of a dark/grey envelope CDA soliton.

  6. A numerical and experimental study on the nonlinear evolution of long-crested irregular waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goullet, Arnaud; Choi, Wooyoung; Division of Ocean Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 305-701

    2011-01-15

    The spatial evolution of nonlinear long-crested irregular waves characterized by the JONSWAP spectrum is studied numerically using a nonlinear wave model based on a pseudospectral (PS) method and the modified nonlinear Schroedinger (MNLS) equation. In addition, new laboratory experiments with two different spectral bandwidths are carried out and a number of wave probe measurements are made to validate these two wave models. Strongly nonlinear wave groups are observed experimentally and their propagation and interaction are studied in detail. For the comparison with experimental measurements, the two models need to be initialized with care, and the initialization procedures are described. The MNLS equation is found to approximate reasonably well the wave fields with a relatively small Benjamin-Feir index, but the phase error increases as the propagation distance increases. The PS model with different orders of nonlinear approximation is solved numerically, and it is shown that the fifth-order model agrees well with our measurements prior to wave breaking for both spectral bandwidths.

  7. A Pearson Effective Potential for Monte Carlo Simulation of Quantum Confinement Effects in nMOSFETs

    NASA Astrophysics Data System (ADS)

    Jaud, Marie-Anne; Barraud, Sylvain; Saint-Martin, Jérôme; Bournel, Arnaud; Dollfus, Philippe; Jaouen, Hervé

    2008-12-01

    A Pearson Effective Potential model for including quantization effects in the simulation of nanoscale nMOSFETs has been developed. This model, based on a realistic description of the function representing the nonzero size of the electron wave packet, has been used in a Monte Carlo simulator for bulk, single-gate SOI and double-gate SOI devices. In the case of SOI capacitors, the electron density has been computed for a large range of effective fields (between 0.1 MV/cm and 1 MV/cm) and for various silicon film thicknesses (between 5 nm and 20 nm). Good agreement with the Schroedinger-Poisson results is obtained both for the total inversion charge and for the electron density profiles. The ability of an Effective Potential approach to accurately reproduce electrostatic quantum confinement effects is clearly demonstrated.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lipkin, H.J.

    Overwhelming experimental evidence for quarks as real physical constituents of hadrons along with the QCD analogs of the Balmer Formula, Bohr Atom and Schroedinger Equation already existed in 1966 but was dismissed as heresy. ZGS experiments played an important role in the quark revolution. This role is briefly reviewed and subsequent progress in quark physics is described.

  9. Dark soliton solution of Sasa-Satsuma equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohta, Y.

    2010-03-08

    The Sasa-Satsuma equation is a higher order nonlinear Schroedinger type equation which admits bright soliton solutions with internal freedom. We present the dark soliton solutions for the equation by using Gram type determinant. The dark solitons have no internal freedom and exist for both defocusing and focusing equations.

  10. Hidden algebra method (quasi-exact-solvability in quantum mechanics)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turbiner, Alexander; Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, Apartado, Postal 70-543, 04510 Mexico, D. F.

    1996-02-20

    A general introduction to quasi-exactly-solvable problems of quantum mechanics is presented. Main attention is given to multidimensional quasi-exactly-solvable and exactly-solvable Schroedinger operators. Exact solvability of the Calogero and Sutherland N-body problems associated with the existence of the hidden algebra sl_N is discussed extensively.

  11. The "Hard Problem" and the Quantum Physicists. Part 1: The First Generation

    ERIC Educational Resources Information Center

    Smith, C. U. M.

    2006-01-01

    All four of the most important figures in the early twentieth-century development of quantum physics--Niels Bohr, Erwin Schroedinger, Werner Heisenberg and Wolfgang Pauli--had strong interests in the traditional mind--brain, or "hard," problem. This paper reviews their approach to this problem, showing the influence of Bohr's complementarity…

  12. How Accurately Does the Free Complement Wave Function of a Helium Atom Satisfy the Schroedinger Equation?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakashima, Hiroyuki; Nakatsuji, Hiroshi

    2008-12-12

    The local energy defined by Hψ/ψ must be equal to the exact energy E at any coordinate of an atom or molecule, as long as the ψ under consideration is exact. The discrepancy of this quantity from E is a stringent test of the accuracy of the calculated wave function. The H-square error for a normalized ψ, defined by σ² ≡ ⟨ψ|(H−E)²|ψ⟩, is also a severe test of the accuracy. Using these quantities, we have examined the accuracy of our wave function of a helium atom calculated using the free complement method that was developed to solve the Schroedinger equation. Together with the variational upper bound, the lower bound of the exact energy calculated using a modified Temple's formula ensured the definitely correct value of the helium fixed-nucleus ground-state energy to be -2.903 724 377 034 119 598 311 159 245 194 4 a.u., which is correct to 32 digits.

  13. Schroedinger's immortal cat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peres, A.

    1988-01-01

    The purpose of this paper is to review and clarify the quantum measurement problem. The latter originates in the ambivalent nature of the observer: although the observer is not described by the Schroedinger equation, it should nevertheless be possible to quantize him and include him in the wave function if quantum theory is universally valid. The problem is to prove that no contradiction may arise in these two conflicting descriptions. The proof invokes the notion of irreversibility. The validity of the latter is questionable, because the standard rationale for classical irreversibility, namely mixing and coarse graining, does not apply to quantum theory. There is no chaos in a closed, finite quantum system. However, when a system is large enough, it cannot be perfectly isolated from its environment, namely from external (or even internal) degrees of freedom which are not fully accounted for in the Hamiltonian of that system. As a consequence, the long-range evolution of such a quantum system is essentially unpredictable. It follows that the notion of irreversibility is a valid one in quantum theory, and the measurement problem can be brought to a satisfactory solution.

  14. Harmonic oscillator representation in the theory of scattering and nuclear reactions

    NASA Technical Reports Server (NTRS)

    Smirnov, Yuri F.; Shirokov, A. M.; Lurie, Yuri A.; Zaitsev, S. A.

    1995-01-01

    The following questions, concerning the application of the harmonic oscillator representation (HOR) in the theory of scattering and reactions, are discussed: the formulation of scattering theory in HOR; exact solutions of the free-motion Schroedinger equation in HOR; separable expansion of short-range potentials and the calculation of phase shifts; 'isolated states' as a generalization of the Wigner-von Neumann bound states embedded in the continuum; a nuclear coupled-channel problem in HOR; and the description of true three-body scattering in HOR. As an illustration, the soft dipole mode in the ¹¹Li nucleus is considered in the framework of the ⁹Li+n+n cluster model, taking into account three-body continuum effects.

  15. Computing singularities of perturbation series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kvaal, Simen; Jarlebring, Elias; Michiels, Wim

    2011-03-15

    Many properties of current ab initio approaches to the quantum many-body problem, both perturbational and otherwise, are related to the singularity structure of the Rayleigh-Schroedinger perturbation series. A numerical procedure is presented that in principle computes the complete set of singularities, including the dominant singularity which limits the radius of convergence. The method approximates the singularities as eigenvalues of a certain generalized eigenvalue equation which is solved using iterative techniques. It relies on computation of the action of the Hamiltonian matrix on a vector and does not rely on the terms in the perturbation series. The method can be useful for studying perturbation series of typical systems of moderate size, for fundamental development of resummation schemes, and for understanding the structure of singularities for typical systems. Some illustrative model problems are studied, including a helium-like model with δ-function interactions for which Moeller-Plesset perturbation theory is considered and the radius of convergence found.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanchez-Arriaga, G.; Hada, T.; Nariyuki, Y.

    The triple-degenerate derivative nonlinear Schroedinger (TDNLS) system, modified with resistive wave damping and growth, is truncated to study the coherent coupling of four waves, three Alfven and one acoustic, near resonance. In the conservative case, the truncation equations derive from a time-independent Hamiltonian function with two degrees of freedom. Using a Poincare map analysis, two parameter regimes are explored. In the first regime we check how the modulational instability of the TDNLS system affects the dynamics of the truncation model, while in the second the exact triply degenerate case is discussed. In the dissipative case, the truncation model gives rise to a six-dimensional flow with five free parameters. By computing bifurcation diagrams, the dependence on the sound-to-Alfven velocity ratio, as well as on the Alfven modes involved in the truncation, is analyzed. The system exhibits a wealth of dynamics including chaotic attractors, several kinds of bifurcations, and crises. The truncation model was compared to numerical integrations of the TDNLS system.

  17. Quantum description of the high-order harmonic generation in multiphoton and tunneling regimes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perez-Hernandez, J. A.; Plaja, L.

    2007-08-15

    We employ a recently developed S-matrix approach [L. Plaja and J. A. Perez-Hernandez, Opt. Express 15, 3629 (2007)] to investigate the process of harmonic generation in the tunnel and multiphoton ionization regimes. In contrast with most previous approaches, this model is developed without the stationary phase approximation and includes the relevant continuum-continuum transitions. It therefore provides a full quantum description of the harmonic generation process in these two ionization regimes, in good quantitative agreement with exact results of the time-dependent Schroedinger equation. We show how this model can be used to investigate the contribution of the electronic population ionized at different times, thus giving a time-resolved description that, up to now, was reserved to semiclassical models. In addition, we show some aspects of harmonic generation beyond the semiclassical predictions, for instance the emission of radiation while the electron is leaving the parent ion and the generation of harmonics in semiclassically forbidden situations.

  18. Disappearing Q operator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, H. F.; Rivers, R. J.

    In the Schroedinger formulation of non-Hermitian quantum theories a positive-definite metric operator η ≡ e^{-Q} must be introduced in order to ensure their probabilistic interpretation. This operator also gives an equivalent Hermitian theory, by means of a similarity transformation. If, however, quantum mechanics is formulated in terms of functional integrals, we show that the Q operator makes only a subliminal appearance and is not needed for the calculation of expectation values. Instead, the relation to the Hermitian theory is encoded via the external source j(t). These points are illustrated and amplified for two non-Hermitian quantum theories: the Swanson model, a non-Hermitian transform of the simple harmonic oscillator, and the wrong-sign quartic oscillator, which has been shown to be equivalent to a conventional asymmetric quartic oscillator.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flego, S.P.; Plastino, A.; Universitat de les Illes Balears and IFISC-CSIC, 07122 Palma de Mallorca

    We explore intriguing links connecting the Hellmann-Feynman theorem to a thermodynamic information-optimizing principle based on Fisher's information measure. Highlights: (i) we link a purely quantum mechanical result, the Hellmann-Feynman theorem, with Jaynes' information-theoretical reciprocity relations; (ii) these relations involve the coefficients of a series expansion of the potential function; (iii) we suggest the existence of a Legendre transform structure behind Schroedinger's equation, akin to the one characterizing thermodynamics.

  20. Effective equations for the quantum pendulum from momentous quantum mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez, Hector H.; Chacon-Acosta, Guillermo; Departamento de Matematicas Aplicadas y Sistemas, Universidad Autonoma Metropolitana-Cuajimalpa, Artificios 40, Mexico D. F. 01120

    In this work we study the quantum pendulum within the framework of momentous quantum mechanics. This description replaces the Schroedinger equation for the quantum evolution of the system with an infinite set of classical equations for expectation values of configuration variables and quantum dispersions. We numerically solve the effective equations truncated at second order and describe their evolution.
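
    A sketch of what such a truncation looks like for the pendulum potential V(q) = -cos q, keeping expectation values and second-order dispersions and discarding third and higher moments (a Gaussian-type closure; the variable names, closure, and initial dispersions are illustrative, not the paper's exact system):

      import numpy as np
      from scipy.integrate import solve_ivp

      def pendulum_moments(t, y, m=1.0):
          # Second-order effective equations for V(q) = -cos(q):
          # expectation values (q, p) coupled to dispersions (Gqq, Gqp, Gpp)
          # through derivatives of the potential evaluated at <q>.
          q, p, Gqq, Gqp, Gpp = y
          V1, V2, V3 = np.sin(q), np.cos(q), -np.sin(q)   # V', V'', V'''
          return [p / m,
                  -V1 - 0.5 * V3 * Gqq,   # quantum correction to <p>-dot
                  2 * Gqp / m,
                  Gpp / m - V2 * Gqq,
                  -2 * V2 * Gqp]

      y0 = [1.0, 0.0, 0.05, 0.0, 0.05]    # wave packet centered at q = 1
      sol = solve_ivp(pendulum_moments, (0.0, 20.0), y0, max_step=0.01)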

  1. Quantum leaps in philosophy of mind: Reply to Bourget's critique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stapp, Henry P.

    2004-07-26

    David Bourget has raised some conceptual and technical objections to my development of von Neumann's treatment of the Copenhagen idea that the purely physical process described by the Schroedinger equation must be supplemented by a psychophysical process called the choice of the experiment by Bohr and Process 1 by von Neumann. I answer here each of Bourget's objections.

  2. Hidden algebra method (quasi-exact-solvability in quantum mechanics)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turbiner, A.

    1996-02-01

    A general introduction to quasi-exactly-solvable problems of quantum mechanics is presented. Main attention is given to multidimensional quasi-exactly-solvable and exactly-solvable Schroedinger operators. The exact solvability of the Calogero and Sutherland N-body problems, associated with the existence of the hidden algebra $sl_N$, is discussed extensively. © 1996 American Institute of Physics.
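
    A standard one-dimensional illustration of quasi-exact solvability (a textbook example added here for orientation, not taken from the reference): with $\hbar = 2m = 1$, the sextic potential $V(x) = x^6 - 3x^2$ admits the exact zero-energy ground state

      $\psi_0(x) = e^{-x^4/4}, \qquad -\psi_0'' + (x^6 - 3x^2)\,\psi_0 = 0,$

    while the rest of the spectrum has no closed form; this coexistence of finitely many algebraic levels with an otherwise numerical spectrum is the defining feature of quasi-exactly-solvable operators.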

  3. Quantum Entanglement in Optical Lattice Systems

    DTIC Science & Technology

    2015-02-18

    Zitterbewegung oscillation was first predicted by Schroedinger in 1930 for relativistic Dirac electrons, where it arises from the interference... magnetic gradient. The gradient affected the Rabi cycling rate, leading to a phase winding along the long axis of the cigar-shaped BEC. While the single... approach is applicable to spherically symmetric, strictly two-dimensional, strictly one-dimensional, cigar-shaped, and pancake-shaped traps and has

  4. Uncertainty relations, zero point energy and the linear canonical group

    NASA Technical Reports Server (NTRS)

    Sudarshan, E. C. G.

    1993-01-01

    The close relationship between the zero point energy, the uncertainty relations, coherent states, squeezed states, and correlated states for one mode is investigated. This group-theoretic perspective enables the parametrization and identification of their multimode generalization. In particular the generalized Schroedinger-Robertson uncertainty relations are analyzed. An elementary method of determining the canonical structure of the generalized correlated states is presented.
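
    For reference, the single-mode Schroedinger-Robertson inequality underlying the discussion reads (standard form)

      $\sigma_A^2\,\sigma_B^2 \;\ge\; \Bigl|\tfrac{1}{2i}\langle[\hat A,\hat B]\rangle\Bigr|^2 + \Bigl|\tfrac{1}{2}\langle\{\hat A - \langle\hat A\rangle,\, \hat B - \langle\hat B\rangle\}\rangle\Bigr|^2,$

    where the anticommutator (covariance) term is what distinguishes correlated states from ordinary minimum-uncertainty states.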

  5. The quantum universe

    NASA Astrophysics Data System (ADS)

    Hey, Anthony J. G.; Walters, Patrick

    This book provides a descriptive, popular account of quantum physics. The basic topics addressed include: waves and particles, the Heisenberg uncertainty principle, the Schroedinger equation and matter waves, atoms and nuclei, quantum tunneling, the Pauli exclusion principle and the elements, quantum cooperation and superfluids, Feynman rules, weak photons, quarks, and gluons. The applications of quantum physics to astrophysics, nuclear technology, and modern electronics are addressed.

  6. Position dependent mass Schroedinger equation and isospectral potentials: Intertwining operator approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Midya, Bikashkali; Roy, B.; Roychoudhury, R.

    2010-02-15

    Here, we have studied first- and second-order intertwining approaches to generate isospectral partner potentials of the position-dependent (effective) mass Schroedinger equation. The second-order intertwiner is constructed directly by taking it as a second-order linear differential operator with position-dependent coefficients, and the system of equations arising from the intertwining relationship is solved for the coefficients by taking an ansatz. A complete scheme for obtaining the general solution is presented, valid for any arbitrary potential and mass function. The proposed technique allows us to generate isospectral potentials with the following spectral modifications: (i) to add new bound state(s), (ii) to remove bound state(s), and (iii) to leave the spectrum unaffected. To illustrate our findings, we have used point canonical transformation to obtain the general solution of the position-dependent mass Schroedinger equation corresponding to a potential and mass function. It is shown that our results are consistent with the formulation of type A N-fold supersymmetry [T. Tanaka, J. Phys. A 39, 219 (2006); A. Gonzalez-Lopez and T. Tanaka, J. Phys. A 39, 3715 (2006)] for the particular cases N=1 and N=2, respectively.
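
    For orientation, the constant-mass first-order case reduces to the familiar SUSY-QM construction (a standard sketch, which the paper generalizes to position-dependent mass): with the intertwiner $L = \frac{d}{dx} + W(x)$, the relation $L H_1 = H_2 L$ holds for

      $H_{1,2} = -\frac{d^2}{dx^2} + V_{1,2}(x), \qquad V_{1,2}(x) = W^2(x) \mp W'(x),$

    so $H_1$ and $H_2$ are isospectral up to possible zero modes of $L$ and its adjoint, which is precisely the mechanism that adds, removes, or preserves bound states.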

  7. Practical recursive solution of degenerate Rayleigh-Schroedinger perturbation theory and application to high-order calculations of the Zeeman effect in hydrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silverstone, H.J.; Moats, R.K.

    1981-04-01

    With the aim of high-order calculations, a new recursive solution for the degenerate Rayleigh-Schroedinger perturbation-theory wave function and energy has been derived. The final formulas, $\chi^{(N)}_{\sigma} = R^{(-\sigma)} \sum_{k=0}^{N-1} H^{(\sigma+1+k)}_{\sigma+1}\,\chi^{(N-1-k)}$ and $E^{(N+\sigma)} = \langle 0|H^{(N+\sigma)}_{\sigma+1}|0\rangle + \langle 0|\sum_{k=0}^{N-2} H^{(\sigma+1+k)}_{\sigma+1}|\chi^{(N-1-k)}\rangle$, which involve new Hamiltonian-related operators $H^{(\sigma+k)}_{\sigma}$, strongly resemble the standard nondegenerate recursive formulas. As an illustration, the perturbed energy coefficients for the $3s$-$3d_0$ states of hydrogen in the Zeeman effect have been calculated recursively through 87th order in the square of the magnetic field. Our treatment is compared with that of Hirschfelder and Certain [J. Chem. Phys. 60, 1118 (1974)], and some relative advantages of each are pointed out.

  8. Two atoms in an anisotropic harmonic trap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Idziaszek, Z.; Centrum Fizyki Teoretycznej, Polska Akademia Nauk, 02-668 Warsaw; Calarco, T.

    2005-05-15

    We consider the system of two interacting atoms confined in an axially symmetric harmonic trap. Within the pseudopotential approximation, we solve the Schroedinger equation exactly, discussing the limits of quasi-one- and quasi-two-dimensional geometries. Finally, we discuss the application of an energy-dependent pseudopotential, which allows us to extend the validity of our results to the case of tight traps and large scattering lengths.
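
    For the isotropic limit of this problem, the pseudopotential spectrum of the relative motion is fixed by a well-known transcendental relation (the Busch et al. result, quoted for context; $\ell$ is the oscillator length of the relative motion, whose normalization convention varies between references):

      $\sqrt{2}\,\frac{\Gamma(-\nu)}{\Gamma(-\nu - 1/2)} = \frac{\ell}{a_s}, \qquad E_{\mathrm{rel}} = \hbar\omega\,(2\nu + 3/2),$

    and the axially symmetric trap treated in the paper replaces this single relation with its anisotropic counterparts.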

  9. Introduction to the Contributions of A. Temkin and R. J. Drachman to Atomic Physics

    NASA Technical Reports Server (NTRS)

    Bhatia, A.K.

    2007-01-01

    Their work, as is the work of most atomic theorists, is concerned with solving the Schroedinger equation accurately for the wave function in cases where there is no exact analytical solution. In particular, Temkin is associated with electron scattering from atoms and ions. When he started, there were already a number of methods to study the scattering of electrons from atoms.

  10. Analysis of energy states in modulation doped multiquantum well heterostructures

    NASA Technical Reports Server (NTRS)

    Ji, G.; Henderson, T.; Peng, C. K.; Huang, D.; Morkoc, H.

    1990-01-01

    A precise and effective numerical procedure to model the band diagram of modulation doped multiquantum well heterostructures is presented. This method is based on a self-consistent iterative solution of the Schroedinger equation and the Poisson equation. It can be applied rather easily to any modulation-doped structure. In addition to the confined energy subbands, the unconfined states can be calculated as well. Examples of realistic device structures are given to demonstrate the capabilities of this procedure. The numerical results are in good agreement with experiments. With the aid of this method, the transitions involving both confined and unconfined conduction subbands in a modulation-doped AlGaAs/GaAs superlattice and in a strained-layer InGaAs/GaAs superlattice are identified. These results represent the first observation of unconfined transitions in modulation doped multiquantum well structures.
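
    A minimal sketch of such a self-consistent Schroedinger-Poisson loop (a one-dimensional toy in arbitrary units; solve_schroedinger, solve_poisson, and every constant below are illustrative placeholders, not the authors' implementation):

      import numpy as np

      HBAR2_2M = 0.5            # hbar^2/(2 m*) in toy units (assumption)
      EPS = 1.0                 # permittivity in toy units (assumption)
      N, L = 400, 40.0
      x = np.linspace(0.0, L, N)
      dx = x[1] - x[0]

      def solve_schroedinger(v, n_states=3):
          """Diagonalize the finite-difference Hamiltonian for potential v."""
          main = 2.0 * HBAR2_2M / dx**2 + v
          off = -HBAR2_2M / dx**2 * np.ones(N - 1)
          e, psi = np.linalg.eigh(np.diag(main) + np.diag(off, 1) + np.diag(off, -1))
          return e[:n_states], psi[:, :n_states] / np.sqrt(dx)

      def solve_poisson(rho):
          """Crude double integration of eps * phi'' = -rho with phi ~ 0 at the ends."""
          g = np.cumsum(np.cumsum(rho)) * dx**2 / EPS
          return -g + np.linspace(0.0, g[-1], N)

      v_conf = 0.05 * (x - L / 2)**2        # bare confining potential (toy)
      doping = np.full(N, 1e-3)             # ionized-donor density (toy)
      v = v_conf.copy()
      for _ in range(200):
          e, psi = solve_schroedinger(v)
          # zero-temperature toy occupation: all electrons in the lowest subband
          n_e = np.abs(psi[:, 0])**2 * np.sum(doping) * dx
          v_new = v_conf - solve_poisson(doping - n_e)   # electron potential energy
          if np.max(np.abs(v_new - v)) < 1e-9:
              break
          v = 0.5 * (v + v_new)             # damped update for stability
      print("converged subband energies:", e)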

  11. Accuracy of analytic energy level formulas applied to hadronic spectroscopy of heavy mesons

    NASA Technical Reports Server (NTRS)

    Badavi, Forooz F.; Norbury, John W.; Wilson, John W.; Townsend, Lawrence W.

    1988-01-01

    Linear and harmonic potential models are used in the nonrelativistic Schroedinger equation to obtain particle mass spectra for mesons as bound states of quarks. The main emphasis is on the linear potential, for which exact solutions of the S-state eigenvalues and eigenfunctions and the asymptotic solution for the higher-order partial waves are obtained. A study of the accuracy of two analytical energy level formulas as applied to heavy mesons is also included. Cornwall's formula is found to be particularly accurate and useful as a predictor of heavy quarkonium states. The exact solution for all partial waves of the eigenvalues and eigenfunctions for a harmonic potential is also obtained and compared with the calculated discrete spectra of the linear potential. Detailed derivations of the eigenvalues and eigenfunctions of the linear and harmonic potentials are presented in appendixes.

  12. Temporal analysis of nonresonant two-photon coherent control involving bound and dissociative molecular states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su Jing; Chen Shaohao; Jaron-Becker, Agnieszka

    We theoretically study the control of two-photon excitation to bound and dissociative states in a molecule induced by trains of laser pulses, which are equivalent to certain sets of spectral phase modulated pulses. To this end, we solve the time-dependent Schroedinger equation for the interaction of molecular model systems with an external intense laser field. Our numerical results for the temporal evolution of the population in the excited states show that, in the case of an excited dissociative state, control schemes, previously validated for the atomic case, fail due to the coupling of electronic and nuclear motion. In contrast, for excitation to bound states the two-photon excitation probability is controlled via the time delay and the carrier-envelope phase difference between two consecutive pulses in the train.

  13. A quantum mechanical model for the relationship between stock price and stock ownership

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cotfas, Liviu-Adrian

    2012-11-01

    The trade of a fixed stock can be regarded as the basic process that measures its momentary price. The stock price is exactly known only at the time of sale when the stock is between traders, that is, only in the case when the owner is unknown. We show that the stock price can be better described by a function indicating at any moment of time the probabilities for the possible values of price if a transaction takes place. This more general description contains partial information on the stock price, but it also contains partial information on the stock owner. By following the analogy with quantum mechanics, we assume that the time evolution of the function describing the stock price can be described by a Schroedinger type equation.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nekrasov, Nikita; ITEP, Moscow; Shatashvili, Samson

    Supersymmetric vacua of two dimensional N = 4 gauge theories with matter, softly broken by the twisted masses down to N = 2, are shown to be in one-to-one correspondence with the eigenstates of integrable spin chain Hamiltonians. Examples include: the Heisenberg SU(2)XXX spin chain which is mapped to the two dimensional U(N) theory with fundamental hypermultiplets, the XXZ spin chain which is mapped to the analogous three dimensional super-Yang-Mills theory compactified on a circle, the XYZ spin chain and eight-vertex model which are related to the four dimensional theory compactified on T{sup 2}. A consequence of our correspondence is the isomorphism of the quantum cohomology ring of various quiver varieties, such as cotangent bundles to (partial) flag varieties and the ring of quantum integrals of motion of various spin chains. The correspondence extends to any spin group, representations, boundary conditions, and inhomogeneity, it includes Sinh-Gordon and non-linear Schroedinger models as well as the dynamical spin chains like Hubbard model. Compactifications of four dimensional N = 2 theories on a two-sphere lead to the instanton-corrected Bethe equations.

  15. Theoretical and material studies on thin-film electroluminescent devices

    NASA Technical Reports Server (NTRS)

    Summers, C. J.; Brennan, K. F.

    1986-01-01

    A theoretical study of resonant tunneling in multilayered heterostructures is presented, based on an exact solution of the Schroedinger equation under the application of a constant electric field. By use of the transfer matrix approach, the transmissivity of the structure is determined as a function of the incident electron energy. The approach presented is easily extended to many-layer structures, where it is more accurate than other existing transfer matrix or WKB models. The transmission resonances are compared to the bound state energies calculated for a finite square well under bias, using either an asymmetric square well model or the exact solution of an infinite square well under the application of an electric field. The results show good agreement with other existing models as well as with the bound state energies. The calculations were then applied to a new superlattice structure, the variably spaced superlattice energy filter (VSSEP), which is designed such that under bias the spatial quantization levels fully align. Based on these calculations, a new class of resonant tunneling superlattice devices can be designed.
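
    A piecewise-constant transfer-matrix toy illustrating the bookkeeping (plane-wave layers in units $\hbar = m = 1$, rather than the exact Airy-function solutions under bias used in the paper; the structure and probe energies are hypothetical):

      import numpy as np

      def transmissivity(E, widths, layer_pots, v_left=0.0, v_right=0.0):
          """Transmission through piecewise-constant layers at energy E."""
          V = [v_left] + list(layer_pots) + [v_right]
          k = np.sqrt(2.0 * (E - np.array(V, dtype=complex)))
          M = np.eye(2, dtype=complex)
          for j, w in enumerate(widths, start=1):
              r = k[j - 1] / k[j]           # match psi, psi' at the interface
              D = 0.5 * np.array([[1 + r, 1 - r], [1 - r, 1 + r]])
              P = np.diag([np.exp(1j * k[j] * w), np.exp(-1j * k[j] * w)])
              M = P @ D @ M                 # then propagate across layer j
          r = k[-2] / k[-1]                 # final interface into the right lead
          M = 0.5 * np.array([[1 + r, 1 - r], [1 - r, 1 + r]]) @ M
          t = M[0, 0] - M[0, 1] * M[1, 0] / M[1, 1]
          return (k[-1].real / k[0].real) * abs(t)**2

      # double-barrier toy: barrier / well / barrier, expect a sharp resonance
      for E in (0.05, 0.16, 0.30):          # hypothetical probe energies
          print(E, transmissivity(E, [2.0, 4.0, 2.0], [1.0, 0.0, 1.0]))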

  16. Criticality of the electron-nucleus cusp condition to local effective potential-energy theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan Xiaoyin; Sahni, Viraht; Graduate School of the City University of New York, 360 Fifth Avenue, New York, New York 10016

    2003-01-01

    Local (multiplicative) effective potential-energy theories of electronic structure comprise the transformation of the Schroedinger equation for interacting Fermi systems to model noninteracting Fermi or Bose systems whereby the equivalent density and energy are obtained. By employing the integrated form of the Kato electron-nucleus cusp condition, we prove that the effective electron-interaction potential energy of these model fermions or bosons is finite at a nucleus. The proof is general and valid for an arbitrary system, whether it be atomic, molecular, or solid state, and for arbitrary state and symmetry. This then provides justification for all prior work in the literature based on the assumption of finiteness of this potential energy at a nucleus. We further demonstrate the criticality of the electron-nucleus cusp condition to such theories by the example of the hydrogen molecule. We show thereby that both model-system effective electron-interaction potential energies, as determined from densities derived from accurate wave functions, will be singular at the nucleus unless the wave function satisfies the electron-nucleus cusp condition.
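
    The constraint in question is, for a nucleus of charge Z at the origin (standard form, in atomic units):

      $\left.\frac{\partial\bar\rho(r)}{\partial r}\right|_{r=0} = -2Z\,\rho(0),$

    where $\bar\rho$ is the spherical average of the electron density about the nucleus; it is the integrated form of this cusp condition that the proof employs.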

  17. On the Development of a New Nonequilibrium Chemistry Model for Mars Entry

    NASA Technical Reports Server (NTRS)

    Jaffe, R. L.; Schwenke, D. W.; Chaban, G. M.; Prabhu, D. K.; Johnston, C. O.; Panesi, M.

    2017-01-01

    This paper represents a summary of results to date of an on-going effort at NASA Ames Research Center to develop a physics-based non-equilibrium model for hypersonic entry into the Martian atmosphere. Our approach is to first compute potential energy surfaces based on accurate solutions of the electronic Schroedinger equation and then use quasiclassical trajectory calculations to obtain reaction cross sections and rate coefficients based on these potentials. We have presented new rate coefficients for N2 dissociation and CO dissociation and exchange reactions. These results illustrate shortcomings of some of the rate coefficients in Park's original T-Tv model for Mars entries and of some of the 30- to 45-year-old shock tube data. We observe that the shock tube experiments on CO dissociation did not adequately account for the exchange reaction that leads to formation of C + O2. This exchange reaction is actually the primary channel for CO removal in the shock layer at temperatures below 10,000 K, because its reaction enthalpy is considerably lower than the comparable value for dissociation.

  18. Similarity solutions of some two-space-dimensional nonlinear wave evolution equations

    NASA Technical Reports Server (NTRS)

    Redekopp, L. G.

    1980-01-01

    Similarity reductions of the two-space-dimensional versions of the Korteweg-de Vries, modified Korteweg-de Vries, Benjamin-Davis-Ono, and nonlinear Schroedinger equations are presented, and some solutions of the reduced equations are discussed. Exact dispersive solutions of the two-dimensional Korteweg-de Vries equation are obtained, and the similarity solution of this equation is shown to be reducible to the second Painleve transcendent.

  19. Single-Particle Quantum Dynamics in a Magnetic Lattice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venturini, Marco

    2001-02-01

    We study the quantum dynamics of a spinless charged particle propagating through a magnetic lattice in a transport line or storage ring. Starting from the Klein-Gordon equation and applying the paraxial approximation, we derive a Schroedinger-like equation for the betatron motion. A suitable unitary transformation reduces the problem to that of a simple harmonic oscillator. As a result we are able to find an explicit expression for the particle wavefunction.
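
    Schematically (our notation, not necessarily the paper's), the paraxial reduction yields an equation of the form

      $i\varepsilon\,\frac{\partial\psi}{\partial z} = \left[-\frac{\varepsilon^2}{2}\frac{\partial^2}{\partial x^2} + \frac{K(z)}{2}\,x^2\right]\psi, \qquad \varepsilon \equiv \frac{\hbar}{p_0},$

    with the path length z playing the role of time, K(z) the lattice focusing function, and $\varepsilon$ acting as an effective Planck constant, which is why harmonic-oscillator techniques apply directly.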

  20. Self-modulational formation of pulsar microstructures

    NASA Technical Reports Server (NTRS)

    Kennel, C. F.; Chian, A. C.-L.

    1987-01-01

    A nonlinear plasma theory for the self-modulation of pulsar radio pulses is discussed. A nonlinear Schroedinger equation is derived for strong electromagnetic waves propagating in an electron-positron plasma. The nonlinearities arising from wave-intensity-induced particle mass variation may excite the modulational instability of circularly and linearly polarized pulsar radiation. The resulting wave envelopes can take the form of periodic wave trains or solitons. These nonlinear stationary waveforms may account for the formation of pulsar microstructures.
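
    The simplest such envelope, written for the canonical focusing equation $iu_t + u_{xx} + 2|u|^2u = 0$ (a textbook normalization; the coefficients in the pulsar problem differ), is the bright soliton

      $u(x,t) = \eta\,\operatorname{sech}\bigl[\eta\,(x - x_0)\bigr]\,e^{i\eta^2 t},$

    whose amplitude and inverse width are locked together; modulational instability of a uniform wave train naturally fragments the carrier into trains of such structures.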

  1. Gamma Oscillations and Visual Binding

    NASA Astrophysics Data System (ADS)

    Robinson, Peter A.; Kim, Jong Won

    2006-03-01

    At the root of visual perception is the mechanism the brain uses to analyze features in a scene and bind related ones together. Experiments show this process is linked to oscillations of brain activity in the 30-100 Hz gamma band. Oscillations at different sites have correlation functions (CFs) that often peak at zero lag, implying simultaneous firing, even when conduction delays are large. CFs are strongest between cells stimulated by related features. Gamma oscillations are studied here by modeling mm-scale patchy interconnections in the visual cortex. Resulting predictions for gamma responses to stimuli account for numerous experimental findings, including why oscillations and zero-lag synchrony are associated, observed connections with feature preferences, the shape of the zero-lag peak, and variations of CFs with attention. Gamma waves are found to obey the Schroedinger equation, opening the possibility of cortical analogs of quantum phenomena. Gamma instabilities are tied to observations of gamma activity linked to seizures and hallucinations.

  2. Accurate complex scaling of three dimensional numerical potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerioni, Alessandro; Genovese, Luigi; Duchemin, Ivan

    2013-05-28

    The complex scaling method, which consists in continuing spatial coordinates into the complex plane, is a well-established method that allows one to compute resonant eigenfunctions of the time-independent Schroedinger operator. Whenever it is desirable to apply complex scaling to investigate resonances in physical systems defined on numerical discrete grids, the most direct approach relies on the application of a similarity transformation to the original, unscaled Hamiltonian. We show that such an approach can be conveniently implemented in the Daubechies wavelet basis set, featuring a very promising level of generality, high accuracy, and no need for artificial convergence parameters. Complex scaling of three dimensional numerical potentials can be efficiently and accurately performed. By carrying out an illustrative resonant state computation in the case of a one-dimensional model potential, we then show that our wavelet-based approach may disclose new exciting opportunities in the field of computational non-Hermitian quantum mechanics.
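
    For orientation, the transformation itself is standard: under $r \to r\,e^{i\theta}$ the scaled Hamiltonian becomes

      $H_\theta = -\tfrac{1}{2}e^{-2i\theta}\,\nabla^2 + V(r\,e^{i\theta}),$

    whose bound-state eigenvalues coincide with those of H, whose continua are rotated by $-2\theta$ about their thresholds, and whose isolated complex eigenvalues $E = E_r - i\Gamma/2$ expose the resonances once the rotation angle is large enough.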

  3. Strong-field ionization of H{sub 2} from ultraviolet to near-infrared wavelengths: Photoelectron energy and angular identifications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilbois, Timo; Helm, Hanspeter

    2011-11-15

    Strong-field ionization of molecular hydrogen is studied at wavelengths ranging from 300 to 800 nm using pulses of 100-fs duration. We find that over this wide wavelength range, from nominally 4-photon to 11-photon ionization, resonance features dominate the ionization probability at intensities below 10{sup 14} W/cm{sup 2}. Photoelectron momentum maps recorded by an imaging spectrometer are analyzed to identify the wavelength-dependent ionization pathways in single ionization of molecular hydrogen. A number of models, some empirical, which are appropriate for a quantitative interpretation of the spectra and the ionization yield are introduced. A near-absolute comparison of measured ionization yields at 398 nm is made with the predictions based on a numerical solution [Y. V. Vanne and A. Saenz, Phys. Rev. A 79, 023421 (2009)] of the time-dependent Schroedinger equation for two correlated electrons.

  4. EPR: Some History and Clarification

    NASA Astrophysics Data System (ADS)

    Fine, Arthur

    2002-04-01

    Locality, separation and entanglement 1930s style. We’ll explore the background to the 1935 paper by Einstein, Podolsky and Rosen, how it was composed, the actual argument of the paper, the principles used, and how the paper was received by Schroedinger, and others. We’ll also look at Bohr’s response: the extent to which Bohr connects with what Einstein was after in EPR and the extent to which EPR marks a shift in Bohr’s thinking about the quantum theory.

  5. The Schwinger Variational Method

    NASA Technical Reports Server (NTRS)

    Huo, Winifred M.

    1995-01-01

    Variational methods have proven invaluable in theoretical physics and chemistry, both for bound state problems and for the study of collision phenomena. For collisional problems they can be grouped into two types: those based on the Schroedinger equation and those based on the Lippmann-Schwinger equation. The application of the Schwinger variational (SV) method to e-molecule collisions and photoionization has been reviewed previously. The present chapter discusses the implementation of the SV method as applied to e-molecule collisions.
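
    The bilinear functional at the heart of the method is (standard form)

      $[T_{fi}] = \langle\phi_f|V|\psi_i^{(+)}\rangle + \langle\psi_f^{(-)}|V|\phi_i\rangle - \langle\psi_f^{(-)}|V - V G_0^{(+)} V|\psi_i^{(+)}\rangle,$

    which is stationary about the exact scattering states and, notably, requires trial functions to be accurate only where the interaction V is nonvanishing, a key advantage for e-molecule problems.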

  6. Quantum mechanics from an equivalence principle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faraggi, A.E.; Matone, M.

    1997-05-15

    The authors show that requiring diffeomorphic equivalence for one-dimensional stationary states implies that the reduced action S{sub 0} satisfies the quantum Hamilton-Jacobi equation with the Planck constant playing the role of a covariantizing parameter. The construction shows the existence of a fundamental initial condition which is strictly related to the Moebius symmetry of the Legendre transform and to its involutive character. The universal nature of the initial condition implies the Schroedinger equation in any dimension.
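
    The equation in question takes the form (one-dimensional stationary case; $\{\,\cdot\,; q\}$ is the Schwarzian derivative)

      $\frac{1}{2m}\left(\frac{\partial S_0}{\partial q}\right)^2 + V(q) - E + \frac{\hbar^2}{4m}\,\{S_0; q\} = 0, \qquad \{f; q\} \equiv \frac{f'''}{f'} - \frac{3}{2}\left(\frac{f''}{f'}\right)^2,$

    with $\hbar^2/4m$ multiplying the covariantizing Schwarzian term, so the classical Hamilton-Jacobi equation is recovered as $\hbar \to 0$.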

  7. Review of the inverse scattering problem at fixed energy in quantum mechanics

    NASA Technical Reports Server (NTRS)

    Sabatier, P. C.

    1972-01-01

    Methods of solution of the inverse scattering problem at fixed energy in quantum mechanics are presented. Scattering experiments of a beam of particles at a nonrelativistic energy by a target made up of particles are analyzed. The Schroedinger equation is used to develop the quantum mechanical description of the system in terms of one of several functions depending on the relative distance of the particles. The inverse problem is the construction of the potentials from experimental measurements.

  8. Amplification of nonlinear surface waves by wind

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leblanc, Stephane

    2007-10-15

    A weakly nonlinear analysis is conducted to study the evolution of slowly varying wavepackets with small but finite amplitudes that evolve at the interface between air and water under the effect of wind. Under the inviscid assumption, wave envelopes are governed by cubic nonlinear Schroedinger or Davey-Stewartson equations, forced by a linear term corresponding to Miles' mechanism of wave generation. Under fair wind, it is shown that Stokes waves grow exponentially and that the Benjamin-Feir instability becomes explosive.

  9. Quantum Sets and Clifford Algebras

    NASA Astrophysics Data System (ADS)

    Finkelstein, David

    1982-06-01

    The mathematical language presently used for quantum physics is a high-level language. As a lowest-level or basic language I construct a quantum set theory in three stages: (1) Classical set theory, formulated as a Clifford algebra of “S numbers” generated by a single monadic operation, “bracing,” Br = {…}. (2) Indefinite set theory, a modification of set theory dealing with the modal logical concept of possibility. (3) Quantum set theory. The quantum set is constructed from the null set by the familiar quantum techniques of tensor product and antisymmetrization. There are both a Clifford and a Grassmann algebra with sets as basis elements. Rank and cardinality operators are analogous to Schroedinger coordinates of the theory, in that they are multiplication or “Q-type” operators. “P-type” operators analogous to Schroedinger momenta, in that they transform the Q-type quantities, are bracing (Br), Clifford multiplication by a set X, and the creator of X, represented by Grassmann multiplication c(X) by the set X. Br and its adjoint Br* form a Bose-Einstein canonical pair, and c(X) and its adjoint c(X)* form a Fermi-Dirac or anticanonical pair. Many coefficient number systems can be employed in this quantization. I use the integers for a discrete quantum theory, with the usual complex quantum theory as limit. Quantum set theory may be applied to a quantum time space and a quantum automaton.

  10. Internally electrodynamic particle model: Its experimental basis and its predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng-Johansson, J. X., E-mail: jxzj@iofpr.or

    2010-03-15

    The internally electrodynamic (IED) particle model was derived based on overall experimental observations, with the IED process itself being built directly on three experimental facts: (a) electric charges present with all material particles, (b) an accelerated charge generates electromagnetic waves according to Maxwell's equations and Planck energy equation, and (c) source motion produces Doppler effect. A set of well-known basic particle equations and properties become predictable based on first principles solutions for the IED process; several key solutions achieved are outlined, including the de Broglie phase wave, de Broglie relations, Schroedinger equation, mass, Einstein mass-energy relation, Newton's law of gravity,more » single particle self interference, and electromagnetic radiation and absorption; these equations and properties have long been broadly experimentally validated or demonstrated. A conditioned solution also predicts the Doebner-Goldin equation which emerges to represent a form of long-sought quantum wave equation including gravity. A critical review of the key experiments is given which suggests that the IED process underlies the basic particle equations and properties not just sufficiently but also necessarily.« less

  11. Experimental and Coupled-channels Investigation of the Radiative Properties of the N$_2$ $c_4'\,{}^1\Sigma_u^+$ - $X\,{}^1\Sigma_g^+$ Band System

    NASA Technical Reports Server (NTRS)

    Liu, Xianming; Shemansky, Donald E.; Malone, Charles P.; Johnson, Paul V.; Ajello, Joseph M.; Kanik, Isik; Heays, Alan N.; Lewis, Brenton R.; Gibson, Stephen T.; Stark, Glenn

    2008-01-01

    The emission properties of the N$_2$ $c_4'\,{}^1\Sigma_u^+$ - $X\,{}^1\Sigma_g^+$ band system have been investigated in a joint experimental and coupled-channels theoretical study. Relative intensities of the $c_4'\,{}^1\Sigma_u^+(0)$ - $X\,{}^1\Sigma_g^+(v_i)$ transitions, measured via electron-impact-induced emission spectroscopy, are combined with a coupled-channel Schroedinger equation (CSE) model of the N$_2$ molecule, enabling determination of the diabatic electronic transition moment for the $c_4'\,{}^1\Sigma_u^+$ - $X\,{}^1\Sigma_g^+$ system as a function of internuclear distance. The CSE probabilities are further verified by comparison with a high-resolution experimental spectrum. Spontaneous transition probabilities of the $c_4'\,{}^1\Sigma_u^+$ - $X\,{}^1\Sigma_g^+$ band system, needed for modeling atmospheric emission, can now be calculated reliably.

  12. Copenhagen's single system premise prevents a unified view of integer and fractional quantum hall effect

    NASA Astrophysics Data System (ADS)

    Post, Evert Jan

    1999-05-01

    This essay presents conclusive evidence of the impermissibility of Copenhagen's single-system interpretation of the Schroedinger process. The latter needs to be viewed as a tool exclusively describing phase- and orientation-randomized ensembles and is not to be used for isolated single systems. Asymptotic closeness of single-system and ensemble behavior and the rare nature of true single-system manifestations have prevented a definitive identification of this Copenhagen deficiency over the past three-quarters of a century. Quantum uncertainty thus becomes a basic trademark of phase- and orientation-disordered ensembles. The ensuing void of usable single-system tools opens a new inquiry for tools without statistical connotations. Three, in part already known, period integrals, here identified as flux, charge and action counters, emerge as diffeo-4 invariant tools fully compatible with the demands of the general theory of relativity. The discovery of the quantum Hall effect has been instrumental in forcing a distinction between ensemble disorder, as in the normal Hall effect, versus ensemble order in the plateau states. Since the order of the latter permits a view of the plateau states as a macro- or meso-scopic single system, the period integral description applies, yielding a straightforward unified description of the integer and fractional quantum Hall effects.

  13. Extremely Fast Numerical Integration of Ocean Surface Wave Dynamics: Building Blocks for a Higher Order Method

    DTIC Science & Technology

    2006-09-30

    equation known as the Kadomtsev-Petviashvili (KP) equation (4): $(\eta_t + c_o\eta_x + \alpha\eta\eta_x + \beta\eta_{xxx})_x + \gamma\eta_{yy} = 0$, where $\gamma = c_o/2$. The KdV equation... using the spectral formulation of the Kadomtsev-Petviashvili equation, a standard equation for nonlinear, shallow water wave dynamics that is a... Petviashvili and nonlinear Schroedinger equations and higher order corrections have been developed as prerequisites to coding the Boussinesq and Euler

  14. Extremely Fast Numerical Integration of Ocean Surface Wave Dynamics

    DTIC Science & Technology

    2007-09-30

    sub-processor must be added as shown in the blue box of Fig. 1. We first consider the Kadomtsev-Petviashvili (KP) equation $(\eta_t + c_o\eta_x + \alpha\eta\eta_x + \beta\eta_{xxx})_x + \gamma\eta_{yy} = 0$... analytic integration of the so-called "soliton equations," I have discovered how the GFT can be used to solve higher order equations for which study... analytical study and extremely fast numerical integration of the extended nonlinear Schroedinger equation for fully three dimensional wave motion

  15. S-matrix method for the numerical determination of bound states.

    NASA Technical Reports Server (NTRS)

    Bhatia, A. K.; Madan, R. N.

    1973-01-01

    A rapid numerical technique for the determination of bound states of a partial-wave-projected Schroedinger equation is presented. First, one needs to integrate the equation only outwards, as in the scattering case, and second, the number of trials necessary to determine the eigenenergy and the corresponding eigenfunction is considerably less than in the usual method. As a nontrivial example of the technique, bound states are calculated in the exchange approximation for the e-/He+ system and the $l = 1$ partial wave.
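
    A generic outward-shooting sketch in the same spirit (Numerov integration plus bisection, in units $\hbar = 2m = 1$; this is a plain illustration of outward-only integration, not the S-matrix criterion of the paper):

      import numpy as np

      def integrate_out(E, V, r):
          """Numerov integration of u'' = (V(r) - E) u outward from the origin."""
          h2 = (r[1] - r[0])**2
          f = V(r) - E
          u = np.zeros_like(r)
          u[0], u[1] = 0.0, 1e-6            # regular boundary condition at small r
          for i in range(1, len(r) - 1):
              u[i + 1] = (2 * u[i] * (1 + 5 * h2 * f[i] / 12)
                          - u[i - 1] * (1 - h2 * f[i - 1] / 12)) / (1 - h2 * f[i + 1] / 12)
          return u

      def bound_state(V, E_lo, E_hi, r, tol=1e-10):
          """Bisect on the sign of u at the outer boundary to pin down an eigenvalue."""
          while E_hi - E_lo > tol:
              E_mid = 0.5 * (E_lo + E_hi)
              if integrate_out(E_lo, V, r)[-1] * integrate_out(E_mid, V, r)[-1] < 0:
                  E_hi = E_mid
              else:
                  E_lo = E_mid
          return 0.5 * (E_lo + E_hi)

      r = np.linspace(1e-6, 40.0, 4000)
      coulomb = lambda rr: -2.0 / rr        # hydrogen, l = 0, Rydberg-style units
      print(bound_state(coulomb, -1.5, -0.5, r))   # expect ~ -1.0 (the 1s level)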

  16. Vortex Nucleation in a Dissipative Variant of the Nonlinear Schroedinger Equation Under Rotation

    DTIC Science & Technology

    2014-12-01

    dark solitons) and vortices. Early soliton experiments of about 15 years ago observed the motion of a dark soliton towards the edge of the trap [27... of dark soliton oscillations in a unitary Fermi gas [30]. A number of theoretical studies have provided relevant explanation for this phenomenology... in atomic BECs [31, 32, 33, 34, 35, 36, 37, 38, 39]. In particular, it has been identified in these works that the dark soliton follows an anti

  17. Quantum Algorithms for Computational Physics: Volume 3 of Lattice Gas Dynamics

    DTIC Science & Technology

    2007-01-03

    time-dependent state |q(t)〉 of a two-energy-level quantum mechanical system, which is a fermionic qubit and is governed by the Schroedinger wave... on-site ket of size $2^B$; |Ψ〉 total system ket of size $2^Q$. 2.2 The quantum state in the number representation: From the previous section, a time-dependent... duration depend on the particular experimental realization, so that the natural coupling along with the program of externally applied pulses together

  18. Multicharmed Baryon Production in High Energy Nuclear Collisions

    NASA Astrophysics Data System (ADS)

    Zhao, Jiaxing; Zhuang, Pengfei

    2017-03-01

    We study the nuclear medium effect on multicharmed baryon production in relativistic heavy ion collisions. By solving the three-quark Schroedinger equation at finite temperature, we calculate the wave functions and Wigner functions for the doubly and triply charmed baryons Ξ_{cc} and Ω_{ccc}. Their production in nuclear collisions is largely enhanced by the combination of uncorrelated charm quarks in the quark-gluon plasma. These new particles are most likely to be discovered in heavy ion collisions at RHIC and LHC energies.

  19. Numerical calculation of nonlinear ultrashort laser pulse propagation in transparent Kerr media

    NASA Astrophysics Data System (ADS)

    Arnold, Cord L.; Heisterkamp, Alexander; Ertmer, Wolfgang; Lubatschowski, Holger

    2005-03-01

    In the focal region of tightly focused ultrashort laser pulses, intensities sufficiently high to initiate nonlinear ionization processes are easily achieved. Due to these nonlinear ionization processes, mainly multiphoton ionization and cascade ionization, free electrons are generated in the focus, resulting in optical breakdown. A model including both nonlinear pulse propagation and plasma generation is used to calculate numerically the interaction of ultrashort pulses with their self-induced plasma in the vicinity of the focus. The model is based on a (3+1)-dimensional nonlinear Schroedinger equation describing the pulse propagation, coupled to a system of rate equations covering the generation of free electrons. It is applicable to any transparent Kerr medium whose linear and nonlinear optical parameters are known. Numerical calculations based on this model are used to understand nonlinear side effects, such as streak formation, occurring in addition to optical breakdown during short-pulse refractive eye surgeries like fs-LASIK. Since the optical parameters of water are a good first-order approximation to those of corneal tissue, water is used as the model substance. The free-electron density distribution induced by focused ultrashort pulses, as well as the pulses' spatio-temporal behavior, are studied in the low-power regime around the critical power for self-focusing.

  20. A new fourth-order Fourier-Bessel split-step method for the extended nonlinear Schroedinger equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nash, Patrick L.

    2008-01-10

    Fourier split-step techniques are often used to compute soliton-like numerical solutions of the nonlinear Schroedinger equation. Here, a new fourth-order implementation of the Fourier split-step algorithm is described for problems possessing azimuthal symmetry in 3+1 dimensions. This implementation is based, in part, on a finite-difference approximation $\delta_\perp^{\mathrm{FDA}}$ of $\frac{1}{r}\frac{\partial}{\partial r}\,r\,\frac{\partial}{\partial r}$ that possesses an associated exact unitary representation of $e^{\frac{i}{2}\lambda\,\delta_\perp^{\mathrm{FDA}}}$. The matrix elements of this unitary matrix are given by special functions known as the associated Bessel functions; hence the attribute Fourier-Bessel for the method. The Fourier-Bessel algorithm is shown to be unitary and unconditionally stable. It is employed to simulate the propagation of a periodic series of short laser pulses through a nonlinear medium. This numerical simulation calculates waveform intensity profiles in a sequence of planes that are transverse to the general propagation direction and labeled by the cylindrical coordinate z. These profiles exhibit a series of isolated pulses that are offset from the time origin by characteristic times, and provide evidence for a physical effect that may be loosely termed normal mode condensation. Normal mode condensation is consistent with experimentally observed pulse filamentation into a packet of short bursts, which may occur as a result of short, intense irradiation of a medium.
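
    For contrast with the fourth-order scheme above, a generic second-order (Strang) split-step Fourier sketch for the one-dimensional cubic equation $iu_t + \tfrac{1}{2}u_{xx} + |u|^2u = 0$ (illustrative parameters throughout):

      import numpy as np

      N, L, dt, steps = 1024, 40.0, 1e-3, 5000
      x = np.linspace(-L / 2, L / 2, N, endpoint=False)
      k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

      u = 1.0 / np.cosh(x)                        # exact soliton initial datum
      half_kin = np.exp(-0.25j * k**2 * dt)       # half-step of the linear factor
      for _ in range(steps):
          u = np.fft.ifft(half_kin * np.fft.fft(u))   # half linear step
          u = u * np.exp(1j * np.abs(u)**2 * dt)      # full nonlinear step
          u = np.fft.ifft(half_kin * np.fft.fft(u))   # half linear step

      # the soliton should hold its shape: peak amplitude stays near 1
      print(np.max(np.abs(u)))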

  1. Semiclassical approximations in the coherent-state representation

    NASA Technical Reports Server (NTRS)

    Kurchan, J.; Leboeuf, P.; Saraceno, M.

    1989-01-01

    The semiclassical limit of the stationary Schroedinger equation in the coherent-state representation is analyzed simultaneously for the groups W1, SU(2), and SU(1,1). A simple expression for the first two orders for the wave function and the associated semiclassical quantization rule is obtained if a definite choice for the classical Hamiltonian and expansion parameter is made. The behavior of the modulus of the wave function, which is a distribution function in a curved phase space, is studied for the three groups. The results are applied to the quantum triaxial rotor.

  2. Einstein and the Quantum: The Secret Life of EPR

    NASA Astrophysics Data System (ADS)

    Fine, Arthur

    2006-05-01

    Locality, separation and entanglement -- 1930s style. Starting with Solvay 1927, we'll explore the background to the 1935 paper by Einstein, Podolsky and Rosen: how it was composed, the actual argument and principles used, and how the paper was received by Schroedinger, and others. We'll also look at Bohr's response: the extent to which Bohr connects with what Einstein was after in EPR and the extent to which EPR marks a shift in Bohr's thinking about the quantum theory. Time permitting, we will contrast EPR with Bell's theorem.

  3. Relativistic extension of the Kay-Moses method for constructing transparent potentials in quantum mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toyama, F.M.; Nogami, Y.; Zhao, Z.

    1993-02-01

    For the Dirac equation in one space dimension with a potential of the Lorentz scalar type, we present a complete solution for the problem of constructing a transparent potential. This is a relativistic extension of the Kay-Moses method which was developed for the nonrelativistic Schroedinger equation. There is an infinite family of transparent potentials. The potentials are all related to solutions of a class of coupled, nonlinear Dirac equations. In addition, it is argued that an admixture of a Lorentz vector component in the potential impairs perfect transparency.
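
    The nonrelativistic prototype being extended is the reflectionless Poeschl-Teller well (standard example, in units $\hbar = 2m = 1$):

      $V(x) = -2\,\operatorname{sech}^2 x,$

    which transmits every positive-energy wave with unit probability; the Kay-Moses construction generates the full multiparameter family of such transparent potentials, and the paper builds their Lorentz-scalar Dirac analogues.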

  4. Uncertainty relation for non-Hamiltonian quantum systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarasov, Vasily E.

    2013-01-15

    General forms of uncertainty relations for quantum observables of non-Hamiltonian quantum systems are considered. Special cases of uncertainty relations are discussed. The uncertainty relations for non-Hamiltonian quantum systems are considered in the Schroedinger-Robertson form, since it allows us to take into account the Lie-Jordan algebra of quantum observables. The time dependence of the quantum observables entering the uncertainty relations and the properties of this dependence are discussed. We take into account that the time evolution of observables of a non-Hamiltonian quantum system is not an endomorphism with respect to the Lie, Jordan, and associative multiplications.

  5. The divine clockwork: Bohr's correspondence principle and Nelson's stochastic mechanics for the atomic elliptic state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durran, Richard; Neate, Andrew; Truman, Aubrey

    2008-03-15

    We consider the Bohr correspondence limit of the Schroedinger wave function for an atomic elliptic state. We analyze this limit in the context of Nelson's stochastic mechanics, exposing an underlying deterministic dynamical system in which trajectories converge to Keplerian motion on an ellipse. This solves the long-standing problem of obtaining Kepler's laws of planetary motion in a quantum mechanical setting. In this quantum mechanical setting, local mild instabilities, which do not occur classically, arise in the Keplerian orbit for eccentricities greater than $1/\sqrt{2}$.

  6. Ramsey's method of separated oscillating fields and its application to gravitationally induced quantum phase shifts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abele, H.; Jenke, T.; Leeb, H.

    2010-03-15

    We propose to apply Ramsey's method of separated oscillating fields to the spectroscopy of the quantum states in the gravity potential above a horizontal mirror. This method allows a precise measurement of quantum mechanical phase shifts of a Schroedinger wave packet bouncing off a hard surface in the gravitational field of the Earth. Measurements with ultracold neutrons will offer a sensitivity to Newton's law or hypothetical short-ranged interactions that is about 21 orders of magnitude below the energy scale of electromagnetism.
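
    The spectrum being probed is that of the quantum bouncer (standard result): with $a_n$ the zeros of the Airy function Ai,

      $E_n = \left(\frac{\hbar^2 m g^2}{2}\right)^{1/3} |a_n|, \qquad a_1 \approx -2.338,$

    which for ultracold neutrons gives $E_1 \approx 1.4$ peV, setting the tiny energy scale that the Ramsey phase measurement must resolve.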

  7. Quantitative modeling of multiscale neural activity

    NASA Astrophysics Data System (ADS)

    Robinson, Peter A.; Rennie, Christopher J.

    2007-01-01

    The electrical activity of the brain has been observed for over a century and is widely used to probe brain function and disorders, chiefly through the electroencephalogram (EEG) recorded by electrodes on the scalp. However, the connections between physiology and EEGs have been chiefly qualitative until recently, and most uses of the EEG have been based on phenomenological correlations. A quantitative mean-field model of brain electrical activity is described that spans the range of physiological and anatomical scales from microscopic synapses to the whole brain. Its parameters measure quantities such as synaptic strengths, signal delays, cellular time constants, and neural ranges, and are all constrained by independent physiological measurements. Application of standard techniques from wave physics allows successful predictions to be made of a wide range of EEG phenomena, including time series and spectra, evoked responses to stimuli, dependence on arousal state, seizure dynamics, and relationships to functional magnetic resonance imaging (fMRI). Fitting to experimental data also enables physiological parameters to be inferred, giving a new noninvasive window into brain function, especially when referenced to a standardized database of subjects. Modifications of the core model to treat mm-scale patchy interconnections in the visual cortex are also described, and it is shown that the resulting waves obey the Schroedinger equation. This opens the possibility of classical cortical analogs of quantum phenomena.

  8. Neutron-antineutron oscillations in nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dover, C.B.; Gal, A.; Richard, J.M.

    1983-03-01

    We present calculations of the neutron-antineutron (n-$\bar n$) annihilation lifetime T in deuterium, $^{16}$O, and $^{56}$Fe in terms of the free-space oscillation time $\tau_{n\bar n}$. The coupled Schroedinger equations for the n and $\bar n$ wave functions in a nucleus are solved numerically, using a realistic shell-model potential which fits the empirical binding energies of the neutron orbits, and a complex $\bar n$-nucleus optical potential obtained from fits to $\bar p$-atom level shifts. Most previous estimates of T in nuclei, which exhibit large variations, are found to be quite inaccurate. When the nuclear-physics aspects of the problem are handled properly (in particular, the finite neutron binding, the nuclear radius, and the surface diffuseness), the results are found to be rather stable with respect to allowable changes in the parameters of the nuclear model. We conclude that experimental limits on T in nuclei can be used to give reasonably precise constraints on $\tau_{n\bar n}$: $T > 10^{30}$ or $10^{31}$ yr leads to $\tau_{n\bar n} > (1.5$-$2)\times 10^{7}$ or $(5$-$6)\times 10^{7}$ s, respectively.

  9. Thomas-Fermi approximation for a condensate with higher-order interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thoegersen, M.; Jensen, A. S.; Zinner, N. T.

    We consider the ground state of a harmonically trapped Bose-Einstein condensate within the Gross-Pitaevskii theory, including the effective-range corrections for a two-body zero-range potential. The resulting nonlinear Schroedinger equation is solved analytically in the Thomas-Fermi approximation, neglecting the kinetic-energy term. We present results for the chemical potential and the condensate profiles, discuss boundary conditions, and compare to the usual Thomas-Fermi approach. We discuss several ways to increase the influence of effective-range corrections in experiments with magnetically tunable interactions. The level of tuning required could be within experimental reach in the near future.
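
    In the uncorrected case the approximation amounts to dropping the kinetic term of the Gross-Pitaevskii equation, leaving an algebraic relation (standard result; the paper adds effective-range terms on top of this):

      $\mu = V(\mathbf r) + g\,n(\mathbf r) \;\Rightarrow\; n(\mathbf r) = \max\!\left\{\frac{\mu - V(\mathbf r)}{g},\, 0\right\}, \qquad g = \frac{4\pi\hbar^2 a}{m},$

    so a harmonic trap yields the familiar inverted-parabola condensate profile, with the chemical potential fixed by normalizing n to the atom number.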

  10. Theoretical study of dissociative recombination of Cl{sub 2}{sup +}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Mingwu; Graduate School of Chinese Academy of Sciences, Beijing 100039; Department of Physics, Stockholm University, S-106 91 Stockholm

    Theoretical studies of low-energy electron collisions with Cl{sub 2}{sup +} leading to direct dissociative recombination are presented. The relevant potential energy curves and autoionization widths are calculated by combining electron scattering calculations, using the complex Kohn variational method, with multireference configuration interaction structure calculations. The dynamics on the four lowest resonant states of all symmetries is studied by solution of a driven Schroedinger equation. The thermal rate coefficient for dissociative recombination of Cl{sub 2}{sup +} is calculated, and the influence on the thermal rate coefficient from vibrationally excited target ions is investigated.

  11. Dark soliton interaction of spinor Bose-Einstein condensates in an optical lattice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Zaidong; Li Qiuyan

    2007-08-15

    We study the magnetic soliton dynamics of spinor Bose-Einstein condensates in an optical lattice, which results in an effective Hamiltonian of an anisotropic pseudospin chain. An equation of nonlinear Schroedinger type is derived, and exact magnetic soliton solutions are obtained analytically by means of the Hirota method. Our results show that a critical external field is needed for creating the magnetic soliton in spinor Bose-Einstein condensates. The soliton size, velocity, and shape frequency can be controlled in practical experiments by adjusting the magnetic field. Moreover, the elastic collision of two solitons is investigated in detail.

  12. The soliton transform and a possible application to nonlinear Alfven waves in space

    NASA Technical Reports Server (NTRS)

    Hada, T.; Hamilton, R. L.; Kennel, C. F.

    1993-01-01

    The inverse scattering transform (IST) based on the derivative nonlinear Schroedinger (DNLS) equation is applied to a complex time series of nonlinear Alfven wave data generated by numerical simulation. The IST describes the long-time evolution of quasi-parallel Alfven waves more efficiently than the Fourier transform, which is adapted to linear rather than nonlinear problems. When dissipation is added, so the conditions for the validity of the DNLS are not strictly satisfied, the IST continues to provide a compact description of the wavefield in terms of a small number of decaying envelope solitons.

  13. Integrability and structural stability of solutions to the Ginzburg-Landau equation

    NASA Technical Reports Server (NTRS)

    Keefe, Laurence R.

    1986-01-01

    The integrability of the Ginzburg-Landau equation is studied to investigate whether the existence of chaotic solutions found numerically could have been predicted a priori. The equation is shown not to possess the Painlevé property, except for a special case of the coefficients that corresponds to the integrable nonlinear Schroedinger (NLS) equation. Regarding the Ginzburg-Landau equation as a dissipative perturbation of the NLS, numerical experiments show all but one of a family of two-tori solutions, possessed by the NLS under particular conditions, to disappear under real perturbations to the NLS coefficients of $O(10^{-6})$.

  14. Transport methods and interactions for space radiations

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Townsend, Lawrence W.; Schimmerling, Walter S.; Khandelwal, Govind S.; Khan, Ferdous S.; Nealy, John E.; Cucinotta, Francis A.; Simonsen, Lisa C.; Shinn, Judy L.; Norbury, John W.

    1991-01-01

    A review of the program in space radiation protection at the Langley Research Center is given. The relevant Boltzmann equations are given with a discussion of approximation procedures for space applications. The interaction coefficients are related to solution of the many-body Schroedinger equation with nuclear and electromagnetic forces. Various solution techniques are discussed to obtain relevant interaction cross sections with extensive comparison with experiments. Solution techniques for the Boltzmann equations are discussed in detail. Transport computer code validation is discussed through analytical benchmarking, comparison with other codes, comparison with laboratory experiments and measurements in space. Applications to lunar and Mars missions are discussed.

  15. Manifold alignment with Schroedinger eigenmaps

    NASA Astrophysics Data System (ADS)

    Johnson, Juan E.; Bachmann, Charles M.; Cahill, Nathan D.

    2016-05-01

    The sun-target-sensor angle can change during aerial remote sensing. In an attempt to compensate for BRDF effects in multi-angular hyperspectral images, the Semi-Supervised Manifold Alignment (SSMA) algorithm pulls data from similar classes together and pushes data from different classes apart. SSMA uses Laplacian Eigenmaps (LE) to preserve the original geometric structure of each local data set independently. In this paper, we replace LE with Spatial-Spectral Schroedinger Eigenmaps (SSSE), which was designed as a semisupervised enhancement to LE, to extend the SSMA methodology and improve classification of multi-angular hyperspectral images captured over Hog Island in the Virginia Coast Reserve.

  16. Improved phase shift approach to the energy correction of the infinite order sudden approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, B.; Eno, L.; Rabitz, H.

    1980-07-15

    A new method is presented for obtaining energy corrections to the infinite order sudden (IOS) approximation by incorporating the effect of the internal molecular Hamiltonian into the IOS wave function. This is done by utilizing the JWKB approximation to transform the Schroedinger equation into a differential equation for the phase. It is found that the internal Hamiltonian generates an effective potential from which a new, improved phase shift is obtained. This phase shift is then used in place of the IOS phase shift to generate new transition probabilities. As an illustration, the resulting improved phase shift (IPS) method is applied to the Secrest-Johnson model for the collinear collision of an atom and a diatom. In the vicinity of the sudden limit, the IPS method gives results for transition probabilities, $P_{n\to n+\Delta n}$, in significantly better agreement with the 'exact' close-coupling calculations than the IOS method, particularly for large $\Delta n$. However, when the IOS results are not even qualitatively correct, the IPS method is unable to provide satisfactory improvements.

  17. Efficient variable time-stepping scheme for intense field-atom interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerjan, C.; Kosloff, R.

    1993-03-01

    The recently developed Residuum method [Tal-Ezer, Kosloff, and Cerjan, J. Comput. Phys. 100, 179 (1992)], a Krylov subspace technique with variable time-step integration for the solution of the time-dependent Schroedinger equation, is applied to the frequently used soft Coulomb potential in an intense laser field. This one-dimensional potential has asymptotic Coulomb dependence with a 'softened' singularity at the origin; thus it models more realistic phenomena. Two of the more important quantities usually calculated in this idealized system are the photoelectron and harmonic photon generation spectra. These quantities are shown to be sensitive to the choice of numerical integration scheme: some spectral features are incorrectly calculated or missing altogether. Furthermore, the Residuum method allows much larger grid spacings for equivalent or higher accuracy, in addition to the advantages of variable time stepping. Finally, it is demonstrated that enhanced high-order harmonic generation accompanies intense-field stabilization and that preparation of the atom in an intermediate Rydberg state leads to stabilization at much lower laser intensity.
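
    As orientation, a fixed-order short-iterative-Lanczos step, the simplest relative of the variable-step Krylov scheme discussed (a generic sketch with a toy grid Hamiltonian; this is not the Residuum algorithm itself):

      import numpy as np

      def lanczos_step(H, psi, dt, m=12):
          """One step of psi -> exp(-i H dt) psi in an m-dimensional Krylov space."""
          nrm = np.linalg.norm(psi)
          V = np.zeros((m, psi.size), dtype=complex)
          alpha, beta = np.zeros(m), np.zeros(m - 1)
          V[0] = psi / nrm
          w = H @ V[0]
          alpha[0] = np.vdot(V[0], w).real
          w = w - alpha[0] * V[0]
          for j in range(1, m):
              beta[j - 1] = np.linalg.norm(w)
              V[j] = w / beta[j - 1]
              w = H @ V[j] - beta[j - 1] * V[j - 1]
              alpha[j] = np.vdot(V[j], w).real
              w = w - alpha[j] * V[j]
          T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
          evals, S = np.linalg.eigh(T)
          coeff = S @ (np.exp(-1j * evals * dt) * S[0].conj())   # exp(-iT dt) e_1
          return nrm * (V.T @ coeff)

      # toy: harmonic oscillator on a grid (hbar = m = omega = 1)
      N, L = 256, 20.0
      x = np.linspace(-L / 2, L / 2, N)
      dx = x[1] - x[0]
      lap = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
             + np.diag(np.ones(N - 1), -1)) / dx**2
      H = -0.5 * lap + np.diag(0.5 * x**2)
      psi = np.exp(-0.5 * (x - 2.0)**2).astype(complex)
      psi = psi / np.linalg.norm(psi)
      for _ in range(100):
          psi = lanczos_step(H, psi, dt=0.05)
      print(np.linalg.norm(psi))   # stays ~ 1: the Krylov step is unitary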

  18. Axion Induced Oscillating Electric Dipole Moment of the Electron

    DOE PAGES

    Hill, Christopher T.

    2016-01-12

    A cosmic axion, via the electromagnetic anomaly, induces an oscillating electric dipole for the electron of frequency $m_a$ and strength $\sim$(few) $\times 10^{-32}$ e-cm, two orders of magnitude above the nucleon, and within a few orders of magnitude of the present standard model constant limit. We give a detailed study of this phenomenon via the interaction of the cosmic axion, through the electromagnetic anomaly, with particular emphasis on the decoupling limit of the axion, $\partial_t a(t) \propto m_a \to 0$. The analysis is subtle, and we find the general form of the action involves a local contact interaction and a nonlocal contribution, analogous to the “transverse current” in QED, that enforces the decoupling limit. We carefully derive the effective action in the Pauli-Schroedinger non-relativistic formalism, and in Georgi’s heavy quark formalism adapted to the “heavy electron” ($m_e \gg m_a$). We compute the electric dipole radiation emitted by free electrons, magnets, and currents immersed in the cosmic axion field, and discuss experimental configurations that may yield a detectable signal.

  20. Prolongation structures of nonlinear evolution equations. II

    NASA Technical Reports Server (NTRS)

    Estabrook, F. B.; Wahlquist, H. D.

    1976-01-01

    The prolongation structure of a closed ideal of exterior differential forms is further discussed, and its use illustrated by application to an ideal (in six dimensions) representing the cubically nonlinear Schroedinger equation. The prolongation structure in this case is explicitly given, and recurrence relations derived which support the conjecture that the structure is open - i.e., does not terminate as a set of structure relations of a finite-dimensional Lie group. We introduce the use of multiple pseudopotentials to generate multiple Baecklund transformations, and derive the double Baecklund transformation. This symmetric transformation concisely expresses the (usually conjectured) theorem of permutability, which must consequently apply to all solutions irrespective of asymptotic constraints.

  1. Comment on 'Nonlinear band structure in Bose-Einstein condensates: Nonlinear Schroedinger equation with a Kronig-Penney potential'

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Danshita, Ippei; Tsuchiya, Shunji

    2007-07-15

    In their recent paper [Phys. Rev. A 71, 033622 (2005)], Seaman et al. studied Bloch states of the condensate wave function in a Kronig-Penney potential and calculated the band structure. They argued that the effective mass is always positive when a swallowtail energy loop is present in the band structure. In this Comment, we reexamine their argument by actually calculating the effective mass. It is found that there exists a region where the effective mass is negative even when a swallowtail is present. Based on this fact, we discuss the interpretation of swallowtails in terms of superfluidity.

  2. Quantum dynamics by the constrained adiabatic trajectory method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leclerc, A.; Jolicard, G.; Guerin, S.

    2011-03-15

    We develop the constrained adiabatic trajectory method (CATM), which allows one to solve the time-dependent Schroedinger equation constraining the dynamics to a single Floquet eigenstate, as if it were adiabatic. This constrained Floquet state (CFS) is determined from the Hamiltonian modified by an artificial time-dependent absorbing potential whose form is derived according to the initial conditions. The main advantage of this technique for practical implementation is that the CFS is easy to determine even for large systems, since its corresponding eigenvalue is well isolated from the others through its imaginary part. The properties and limitations of the CATM are explored through simple examples.

  3. Reduction of the equation for lower hybrid waves in a plasma to a nonlinear Schroedinger equation

    NASA Technical Reports Server (NTRS)

    Karney, C. F. F.

    1977-01-01

    Equations describing the nonlinear propagation of waves in an anisotropic plasma are rarely exactly soluble. However, it is often possible to make approximations that reduce the exact equations to a simpler equation. We demonstrate the use of MACSYMA to make such approximations and thereby reduce the equation describing lower hybrid waves to the nonlinear Schroedinger equation, which is soluble by the inverse scattering method. MACSYMA is used at several stages in the calculation only because there is a natural division between calculations that are easiest done by hand and those that are easiest done by machine.

  4. Phase space quantum mechanics - Direct

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nasiri, S.; Sobouti, Y.; Taati, F.

    2006-09-15

    The conventional approach to quantum mechanics in phase space (q,p) is to take the operator-based quantum mechanics of Schroedinger, or an equivalent, and assign to it a c-number function in phase space. We propose to begin with a higher level of abstraction, in which the independence and the symmetric roles of q and p are maintained throughout, and to arrive at once at phase-space state functions. Upon reduction to the q- or p-space the proposed formalism gives the conventional quantum mechanics, however with a definite rule for ordering factors of noncommuting observables. Further conceptual and practical merits of the formalism are demonstrated throughout the text.

  5. The FEM-R-Matrix Approach: Use of Mixed Finite Element and Gaussian Basis Sets for Electron Molecule Collisions

    NASA Technical Reports Server (NTRS)

    Thuemmel, Helmar T.; Huo, Winifred M.; Langhoff, Stephen R. (Technical Monitor)

    1995-01-01

    For the calculation of electron-molecule collision cross sections, R-matrix methods automatically take advantage of the division of configuration space into two regions: an inner region (I), bounded by radius tau_b, where the scattered electron is within the molecular charge cloud and the system is described by a correlated configuration interaction (CI) treatment in close analogy to bound-state calculations; and an outer region (II), where the scattered electron moves in the long-range multipole potential of the target and efficient analytic methods can be used for solving the asymptotic Schroedinger equation with its boundary conditions.

  6. High-order harmonic generation by atoms in a few-cycle laser pulse: Carrier-envelope phase and many-electron effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frolov, M. V.; Manakov, N. L.; Silaev, A. A.

    2011-02-15

    Analytic formulas describing high-order harmonic generation (HHG) by atoms in a short laser pulse are obtained quantum mechanically in the tunneling limit. These results provide analytic expressions of the three-step HHG scenario, as well as of the returning electron wave packet, in a few-cycle pulse. Our results agree well with those of numerical solutions of the time-dependent Schroedinger equation for the H atom, while for Xe they predict many-electron atomic dynamics features in few-cycle HHG spectra and significant dependence of these features on the carrier-envelope phase of a laser pulse.

  7. Application of wave mechanics theory to fluid dynamics problems: Fundamentals

    NASA Technical Reports Server (NTRS)

    Krzywoblocki, M. Z. V.

    1974-01-01

    The application of the basic formalistic elements of wave mechanics theory is discussed. The theory is used to describe physical phenomena on the microscopic level, the fluid dynamics of gases and liquids, and the analysis of physical phenomena on the macroscopic (visually observable) level. The approach rests on the practical advantages of relating the two fields of wave mechanics and fluid mechanics through the use of the Schroedinger equation. Some of the subjects include: (1) fundamental aspects of wave mechanics theory, (2) laminarity of flow, (3) velocity potential, (4) disturbances in fluids, (5) introductory elements of the bifurcation theory, and (6) physiological aspects in fluid dynamics.

  8. 2D Quantum Transport Modeling in Nanoscale MOSFETs

    NASA Technical Reports Server (NTRS)

    Svizhenko, Alexei; Anantram, M. P.; Govindan, T. R.; Biegel, Bryan

    2001-01-01

    With the onset of quantum confinement in the inversion layer in nanoscale MOSFETs, behavior of the resonant level inevitably determines all device characteristics. While most classical device simulators take quantization into account in some simplified manner, the important details of electrostatics are missing. Our work addresses this shortcoming and provides: (a) a framework to quantitatively explore device physics issues such as the source-drain and gate leakage currents, DIBL, and threshold voltage shift due to quantization, and (b) a means of benchmarking quantum corrections to semiclassical models (such as density-gradient and quantum-corrected MEDICI). We have developed physical approximations and computer code capable of realistically simulating 2-D nanoscale transistors, using the non-equilibrium Green's function (NEGF) method. This is the most accurate full quantum model yet applied to 2-D device simulation. Open boundary conditions, oxide tunneling, and phase-breaking scattering are treated on an equal footing. Electrons in the ellipsoids of the conduction band are treated within the anisotropic effective mass approximation. Quantum simulations are focused on MIT 25, 50 and 90 nm "well-tempered" MOSFETs and compared to classical and quantum-corrected models. The important feature of the quantum model is the smaller slope of the Id-Vg curve and consequently a higher threshold voltage. These results are quantitatively consistent with 1D Schroedinger-Poisson calculations. The effect of gate length on gate-oxide leakage and sub-threshold current has been studied. The shorter gate length device has an order of magnitude smaller current at zero gate bias than the longer gate length device, without a significant trade-off in on-current. This should be a device design consideration.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanna, T.; Vijayajayanthi, M.; Lakshmanan, M.

    The bright soliton solutions of the mixed coupled nonlinear Schroedinger equations with two components (2-CNLS) with linear self- and cross-coupling terms have been obtained by identifying a transformation that maps the corresponding equations to the integrable mixed 2-CNLS equations. The study of the collision dynamics of bright solitons shows that there exists periodic energy switching due to the coupling terms. This periodic energy switching can be controlled by the new type of shape-changing collisions of bright solitons arising in the mixed 2-CNLS system, characterized by intensity redistribution, amplitude-dependent phase shifts, and relative separation distances. We also point out that this system exhibits large periodic intensity switching even for very small linear self-coupling strengths.

  10. A method of solving simple harmonic oscillator Schroedinger equation

    NASA Technical Reports Server (NTRS)

    Maury, Juan Carlos F.

    1995-01-01

    A usual step in completely solving the Schroedinger equation is to try first the case in which the dimensionless position independent variable w is large. In this case the harmonic oscillator equation takes the form (d^2/dw^2 - w^2)F = 0, and following the W.K.B. method it gives the intermediate corresponding solution F = exp(-w^2/2), which actually satisfies exactly another equation, (d^2/dw^2 + 1 - w^2)F = 0. We apply a different method, useful in anharmonic oscillator equations, similar to that of Rampal and Datta; although it is slightly more complicated, it is also more general and systematic.
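
    The two equations quoted in the abstract are easy to check symbolically. A small sympy sketch (assuming only the equations as written above) confirms that F = exp(-w^2/2) satisfies the second equation exactly while leaving a residual in the large-w equation:

```python
import sympy as sp

w = sp.symbols('w')
F = sp.exp(-w**2 / 2)

# F satisfies (d^2/dw^2 + 1 - w^2) F = 0 exactly:
print(sp.simplify(sp.diff(F, w, 2) + (1 - w**2) * F))   # -> 0

# ...but only approximately the large-w equation (d^2/dw^2 - w^2) F = 0,
# leaving a residual equal to -F:
print(sp.simplify(sp.diff(F, w, 2) - w**2 * F))          # -> -exp(-w**2/2)
```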

  11. Two- and three-photon ionization in the noble gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGuire, E.J.

    1981-08-01

    By using a characteristic Green's function for an exactly solvable Schroedinger equation with an approximation to the central potential of Herman and Skillman, the cross sections for nonresonant two- and three-photon ionization of Ne, Ar, Kr, and Xe were calculated in jl coupling. Expressions for cross sections in jl coupling are given. Comparison with the Ar two-photon cross section of Pindzola and Kelly, calculated using many-body theory, the dipole-length approximation, and LS coupling, shows a disagreement of as much as a factor of 2. The disagreement appears to arise from distortion introduced by shifting the Green's-function resonances to experimental values.

  12. Exact differential equation for the density and ionization energy of a many-particle system

    NASA Technical Reports Server (NTRS)

    Levy, M.; Perdew, J. P.; Sahni, V.

    1984-01-01

    The present investigation is concerned with relations studied by Hohenberg and Kohn (1964) and Kohn and Sham (1965). The properties of a ground-state many-electron system are determined by the electron density. The correct differential equation for the density, as dictated by density-functional theory, is presented. It is found that the ground-state density n of a many-electron system obeys a Schroedinger-like differential equation which may be solved by standard Kohn-Sham programs. Results are connected to the traditional exact Kohn-Sham theory. It is pointed out that the results of the current investigations are readily extended to spin-density functional theory.

  13. Nonphasematched broadband THz amplification and reshaping in a dispersive chi(3) medium.

    PubMed

    Koys, Martin; Noskovicova, Eva; Velic, Dusan; Lorenc, Dusan

    2017-06-12

    We theoretically investigate non-phasematched broadband THz amplification in dispersive chi(3) media. A short 100 fs pump pulse interacts with a temporally matched second-harmonic pulse and a weak THz signal through the four-wave mixing process, and significant broadband THz amplification and reshaping is observed. The pulse evolution dynamics is explored by numerically solving a set of generalized nonlinear Schroedinger equations. The influence of incident pulse chirp and pulse duration, and the role of wavelength, THz seed frequency, and losses are evaluated separately. It is found that a careful choice of incident parameters can provide a broadband THz output and/or a significant increase of THz peak power.
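
    A common way to integrate equations of this type is the split-step Fourier method. The sketch below propagates a single generalized NLSE in normalized units with placeholder dispersion, nonlinearity, and loss coefficients; it stands in for, rather than reproduces, the coupled pump/second-harmonic/THz system solved in the paper.

```python
import numpy as np

# Minimal split-step Fourier sketch for one generalized NLSE,
#   dA/dz = -i (beta2/2) d^2A/dT^2 + i gamma |A|^2 A - (alpha/2) A,
# in normalized units. All coefficients are illustrative, not CS2 values.

N = 2048
T = np.linspace(-20.0, 20.0, N, endpoint=False)          # normalized time
w = 2 * np.pi * np.fft.fftfreq(N, d=T[1] - T[0])         # angular frequency

beta2, gamma, alpha = 1.0, 1.0, 0.0                      # placeholder values
dz, steps = 1e-3, 2000

A = 1.0 / np.cosh(T)                                     # sech input pulse
lin = np.exp((1j * beta2 / 2 * w**2 - alpha / 2) * dz)   # linear step, Fourier space

for _ in range(steps):
    A = np.fft.ifft(lin * np.fft.fft(A))                 # dispersion + loss
    A *= np.exp(1j * gamma * np.abs(A)**2 * dz)          # Kerr nonlinearity
```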

  14. Distribution-valued initial data for the complex Ginzburg-Landau equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levermore, C.D.; Oliver, M.

    1997-11-01

    The generalized complex Ginzburg-Landau (CGL) equation with a nonlinearity of order 2σ + 1 in d spatial dimensions has a unique local classical solution for distributional initial data in the Sobolev space H^q provided that q > d/2 - 1/σ. This result directly corresponds to a theorem for the nonlinear Schroedinger (NLS) equation which was proved by Cazenave and Weissler in 1990. While the proof in the NLS case relies on Besov space techniques, it is shown here that for the CGL equation the smoothing properties of the linear semigroup can be exploited to obtain an almost optimal result by elementary means.

  15. Subsonic and Supersonic Effects in Bose-Einstein Condensate

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2003-01-01

    A paper presents a theoretical investigation of subsonic and supersonic effects in a Bose-Einstein condensate (BEC). The BEC is represented by a time-dependent, nonlinear Schroedinger equation that includes terms for an external confining potential and a weak interatomic repulsive potential proportional to the number density of atoms. From this model are derived Madelung equations, which relate the quantum phase to the number density and which are used to represent excitations propagating through the BEC. These equations are shown to be analogous to the classical equations of flow of an inviscid, compressible fluid characterized by a speed of sound (g/P0)^(1/2), where g is the coefficient of the repulsive potential and P0 is the unperturbed mass density of the BEC. The equations are used to study the effects of a region of perturbation moving through the BEC. The excitations created by a perturbation moving at subsonic speed are found to be described by a Laplace equation and to propagate at infinite speed. For a supersonically moving perturbation, the excitations are found to be described by a wave equation and to propagate at finite speed inside a Mach cone.

  16. Multigrid techniques for nonlinear eigenvalue problems: Solutions of a nonlinear Schroedinger eigenvalue problem in 2D and 3D

    NASA Technical Reports Server (NTRS)

    Costiner, Sorin; Taasan, Shlomo

    1994-01-01

    This paper presents multigrid (MG) techniques for nonlinear eigenvalue problems (EP) and emphasizes an MG algorithm for a nonlinear Schroedinger EP. The algorithm overcomes the difficulties typical of such problems by combining the following techniques: an MG projection coupled with backrotations, for separation of solutions and treatment of difficulties related to clusters of close and equal eigenvalues; MG subspace continuation techniques, for treatment of the nonlinearity; and an MG simultaneous treatment of the eigenvectors together with the nonlinearity and the global constraints. The simultaneous MG techniques reduce the large number of self-consistent iterations to only a few (or one) MG simultaneous iterations and keep the solutions in the right neighborhood, where the algorithm converges fast.

  17. Steering, Entanglement, Nonlocality, and the EPR Paradox

    NASA Astrophysics Data System (ADS)

    Wiseman, Howard; Jones, Steve; Doherty, Andrew

    2007-06-01

    The concept of steering was introduced by Schroedinger in 1935 as a generalization of the EPR paradox for arbitrary pure bipartite entangled states and arbitrary measurements by one party. Until now, it has never been rigorously defined, so it has not been known (for example) what mixed states are steerable (that is, can be used to exhibit steering). We provide an operational definition, from which we prove (by considering Werner states and Isotropic states) that steerable states are a strict subset of the entangled states, and a strict superset of the states that can exhibit Bell-nonlocality. For arbitrary bipartite Gaussian states we derive a linear matrix inequality that decides the question of steerability via Gaussian measurements, and we relate this to the original EPR paradox.

  18. Scattering and bound states of spinless particles in a mixed vector-scalar smooth step potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, M.G.; Castro, A.S. de

    2009-11-15

    Scattering and bound states for a spinless particle in the background of a kink-like smooth step potential, supplemented by a uniform scalar background, are considered with a general mixing of vector and scalar Lorentz structures. The problem is mapped into a Schroedinger-like equation with an effective Rosen-Morse potential. It is shown that the uniform scalar background produces subtle effects on the scattering states and plays a decisive role in the formation of bound states. In that process, the problem of solving a differential equation for the eigenenergies is transmuted into the simpler and more efficient problem of solving an irrational algebraic equation.

  19. Comparison of the Chebyshev Method and the Generalized Crank-Nicholson Method for time Propagation in Quantum Mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Formanek, Martin; Vana, Martin; Houfek, Karel

    2010-09-30

    We compare the efficiency of two methods for the numerical solution of the time-dependent Schroedinger equation, namely the Chebyshev method and the recently introduced generalized Crank-Nicholson method. As a testing system, the free propagation of a particle in one dimension is used. The space discretization is based on high-order finite differences to approximate accurately the kinetic energy operator in the Hamiltonian. We show that the choice of the more effective method depends on how many wave functions must be calculated during the given time interval to obtain relevant and reasonably accurate information about the system, i.e., on the choice of the time step.
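
    For reference, the Chebyshev method expands the propagator exp(-iHt) in Chebyshev polynomials of the spectrally normalized Hamiltonian with Bessel-function coefficients. A minimal dense-matrix sketch follows (hbar = 1; the grid, spectral bounds, and expansion order K are illustrative choices, not the paper's setup):

```python
import numpy as np
from scipy.special import jv

def chebyshev_propagate(H, psi, t, emin, emax, K):
    """Chebyshev expansion of exp(-i H t) psi:
    exp(-iHt) = exp(-ibt) * sum_k (2 - delta_k0) (-i)^k J_k(a t) T_k(Hn),
    with Hn = (H - b)/a, a = (emax - emin)/2, b = (emax + emin)/2."""
    a = (emax - emin) / 2.0
    b = (emax + emin) / 2.0
    Hn = (H - b * np.eye(H.shape[0])) / a       # spectrum mapped into [-1, 1]
    phi_prev = psi.astype(complex)              # T_0(Hn) psi
    phi = Hn @ phi_prev                         # T_1(Hn) psi
    acc = jv(0, a * t) * phi_prev + 2.0 * (-1j) * jv(1, a * t) * phi
    for kk in range(2, K):
        phi_prev, phi = phi, 2.0 * (Hn @ phi) - phi_prev   # Chebyshev recurrence
        acc = acc + 2.0 * (-1j) ** kk * jv(kk, a * t) * phi
    return np.exp(-1j * b * t) * acc

# Example: free particle, second-order finite differences (illustrative)
N, dx = 256, 0.1
Hmat = (np.diag(np.full(N, 1.0 / dx**2))
        + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
        + np.diag(np.full(N - 1, -0.5 / dx**2), -1))
xg = dx * (np.arange(N) - N / 2)
psi0 = np.exp(-xg**2 + 1j * xg)
psi0 /= np.linalg.norm(psi0)
psi_t = chebyshev_propagate(Hmat, psi0, t=0.5, emin=0.0, emax=2.0 / dx**2, K=80)
```

    The number of terms K needs to exceed a*t for the Bessel coefficients to decay; that rapid decay is what makes the expansion effectively exact for long time steps.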

  20. Calculation of the Full Scattering Amplitude without Partial Wave Decomposition. 2; Inclusion of Exchange

    NASA Technical Reports Server (NTRS)

    Shertzer, Janine; Temkin, Aaron

    2004-01-01

    The development of a practical method of accurately calculating the full scattering amplitude, without making a partial wave decomposition, is continued. The method is developed in the context of electron-hydrogen scattering, and here exchange is dealt with by considering e-H scattering in the static exchange approximation. The Schroedinger equation in this approximation can be simplified to a set of coupled integro-differential equations. The equations are solved numerically for the full scattering wave function. The scattering amplitude can most accurately be calculated from an integral expression for the amplitude; that integral can be formally simplified, and then evaluated using the numerically determined wave function. The results are essentially identical to converged partial wave results.

  1. Attosecond pulse carrier-envelope phase effects on ionized electron momentum and energy distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, L.-Y.; Starace, Anthony F.

    2007-10-15

    We analyze carrier-envelope phase (CEP) effects on electron wave-packet momentum and energy spectra produced by one or two few-cycle attosecond xuv pulses. The few-cycle attosecond pulses are assumed to have arbitrary phases. We predict CEP effects on ionized electron wave-packet momentum distributions produced by attosecond pulses having durations comparable to those obtained by Sansone et al. [Science 314, 443 (2006)]. The onset of significant CEP effects is predicted to occur for attosecond pulse field strengths close to those possible with current experimental capabilities. Our results are based on single-active-electron solutions of the three-dimensional, time-dependent Schroedinger equation including atomic potentials appropriate for the H and He atoms.

  2. 2D Quantum Mechanical Study of Nanoscale MOSFETs

    NASA Technical Reports Server (NTRS)

    Svizhenko, Alexei; Anantram, M. P.; Govindan, T. R.; Biegel, B.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    With the onset of quantum confinement in the inversion layer in nanoscale MOSFETs, behavior of the resonant level inevitably determines all device characteristics. While most classical device simulators take quantization into account in some simplified manner, the important details of electrostatics are missing. Our work addresses this shortcoming and provides: (a) a framework to quantitatively explore device physics issues such as the source-drain and gate leakage currents, DIBL, and threshold voltage shift due to quantization, and (b) a means of benchmarking quantum corrections to semiclassical models (such as density-gradient and quantum-corrected MEDICI). We have developed physical approximations and computer code capable of realistically simulating 2-D nanoscale transistors, using the non-equilibrium Green's function (NEGF) method. This is the most accurate full quantum model yet applied to 2-D device simulation. Open boundary conditions and oxide tunneling are treated on an equal footing. Electrons in the ellipsoids of the conduction band are treated within the anisotropic effective mass approximation. We present the results of our simulations of MIT 25, 50 and 90 nm "well-tempered" MOSFETs and compare them to those of classical and quantum-corrected models. The important feature of the quantum model is the smaller slope of the Id-Vg curve and consequently a higher threshold voltage. Surprisingly, the self-consistent potential profile shows a lower injection barrier in the channel in the quantum case. These results are qualitatively consistent with 1D Schroedinger-Poisson calculations. The effect of gate length on gate-oxide leakage and subthreshold current has been studied. The shorter gate length device has an order of magnitude smaller current at zero gate bias than the longer gate length device, without a significant trade-off in on-current. This should be a device design consideration.

  3. Levy-Student distributions for halos in accelerator beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cufaro Petroni, Nicola; De Martino, Salvatore; De Siena, Silvio

    2005-12-15

    We describe the transverse beam distribution in particle accelerators within the controlled, stochastic dynamical scheme of stochastic mechanics (SM), which produces time-reversal-invariant diffusion processes. This leads to a linearized theory summarized in a Schroedinger-like (S-L) equation. The space-charge effects have been introduced in recent papers by coupling this S-L equation with the Maxwell equations. We analyze the space-charge effects to understand how the dynamics produces the actual beam distributions, and in particular we show how the stationary, self-consistent solutions are related to the (external and space-charge) potentials, both when we suppose that the external field is harmonic (constant focusing) and when we a priori prescribe the shape of the stationary solution. We then proceed to discuss a few other ideas by introducing generalized Student distributions, namely, non-Gaussian, Levy infinitely divisible (but not stable) distributions. We discuss this idea from two different standpoints: (a) first by supposing that the stationary distribution of our (Wiener-powered) SM model is a Student distribution; (b) by supposing that our model is based on a (non-Gaussian) Levy process whose increments are Student distributed. We show that in case (a) the longer tails of the power decay of the Student laws, and in case (b) the discontinuities of the Levy-Student process, can well account for the rare escape of particles from the beam core, and hence for the formation of a halo in intense beams.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazzarella, G.; Toigo, F.; Salasnich, L.

    We consider a bosonic Josephson junction made of N ultracold and dilute atoms confined by a quasi-one-dimensional double-well potential within the two-site Bose-Hubbard model framework. The behavior of the system is investigated at zero temperature by varying the interatomic interaction from the strongly attractive regime to the repulsive one. We show that the ground state exhibits a crossover from a macroscopic Schroedinger-cat state to a separable Fock state through an atomic coherent regime. By diagonalizing the Bose-Hubbard Hamiltonian we characterize the emergence of the macroscopic cat states by calculating the Fisher information F, the coherence by means of the visibility α of the interference fringes in the momentum distribution, and the quantum correlations by using the entanglement entropy S. Both Fisher information and visibility are shown to be related to the ground-state energy by employing the Hellmann-Feynman theorem. This result, together with a perturbative calculation of the ground-state energy, allows simple analytical formulas for F and α to be obtained over a range of interactions, in excellent agreement with the exact diagonalization of the Bose-Hubbard Hamiltonian. In the attractive regime the entanglement entropy attains values very close to its upper limit for a specific interaction strength lying in the region where coherence is lost and self-trapping sets in.

  5. Semiclassical theory of the self-consistent vibration-rotation fields and its application to the bending-rotation interaction in the H2O molecule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skalozub, A.S.; Tsaune, A.Ya.

    1994-12-01

    A new approach for analyzing the highly excited vibration-rotation (VR) states of nonrigid molecules is suggested. It is based on the separation of the vibrational and rotational terms in the molecular VR Hamiltonian by introducing periodic auxiliary fields. These fields transfer different interactions within a molecule and are treated in terms of the mean-field approximation. As a result, the solution of the stationary Schroedinger equation with the VR Hamiltonian amounts to a quantization of the Berry phase in a problem of the molecular angular-momentum motion in a certain periodic VR field (rotational problem). The quantization procedure takes into account the motion of the collective vibrational variables in the appropriate VR potentials (vibrational problem). The quantization rules, the mean-field configurations of auxiliary interactions, and the solutions to the Schroedinger equations for the vibrational and rotational problems are self-consistently connected with one another. The potentialities of the theory are demonstrated for the bending-rotation interaction modeled by the Bunker-Landsberg potential function in the H2O molecule. The calculations are compared with both the results of exact computations and those of other approximate methods.

  6. Applicability of modified effective-range theory to positron-atom and positron-molecule scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Idziaszek, Zbigniew; Karwasz, Grzegorz

    2006-06-15

    We analyze low-energy scattering of positrons on Ar atoms and N2 molecules using the modified effective-range theory (MERT) developed by O'Malley et al. [J. Math. Phys. 2, 491 (1961)]. We use the formulation of MERT based on exact solutions of the Schroedinger equation with the polarization potential, rather than low-energy expansions of phase shifts into momentum series. We show that MERT describes the experimental data well, provided that the effective-range expansion is performed both for s- and p-wave scattering, which dominate in the considered regime of positron energies (0.4-2 eV). We estimate the values of the s-wave scattering length and the effective range for e⁺-Ar and e⁺-N2 collisions.

  7. Analytical solutions for the dynamics of two trapped interacting ultracold atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Idziaszek, Zbigniew; Calarco, Tommaso

    2006-08-15

    We discuss exact solutions of the Schroedinger equation for the system of two ultracold atoms confined in an axially symmetric harmonic potential. We investigate different geometries of the trapping potential; in particular we study the properties of eigenenergies and eigenfunctions for quasi-one-dimensional and quasi-two-dimensional traps. We show that the quasi-one-dimensional and quasi-two-dimensional regimes for two atoms can already be realized in traps with moderately large (or small) ratios of the trapping frequencies in the axial and transverse directions. Finally, we apply our theory to Feshbach resonances for trapped atoms. Introducing into our description an energy-dependent scattering length, we calculate analytically the eigenenergies for two trapped atoms in the presence of a Feshbach resonance.

  8. Modulational-instability-induced supercontinuum generation with saturable nonlinear response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raja, R. Vasantha Jayakantha; Porsezian, K.; Nithyanandan, K.

    2010-07-15

    We theoretically investigate supercontinuum generation (SCG) on the basis of modulational instability (MI) in liquid-core photonic crystal fibers (LCPCF) with a CS2-filled central core. The effect of the saturable nonlinearity of the LCPCF on SCG in the femtosecond regime is studied using an appropriately modified nonlinear Schroedinger equation. We also compare the MI-induced spectral broadening with SCG obtained by soliton fission. To analyze the quality of the pulse broadening, we study the coherence of the SC pulse numerically. It is evident from the numerical simulation that the response of the saturable nonlinearity suppresses the broadening of the pulse. We also observe that the MI-induced SCG in the presence of saturable nonlinearity degrades the coherence of the SCG pulse when compared to an unsaturated medium.

  9. High-performance dynamic quantum clustering on graphics processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wittek, Peter

    2013-01-15

    Clustering methods in machine learning may benefit from borrowing metaphors from physics. Dynamic quantum clustering associates a Gaussian wave packet with the multidimensional data points and regards them as eigenfunctions of the Schroedinger equation. The clustering structure emerges by letting the system evolve, and the visual nature of the algorithm has been shown to be useful in a range of applications. Furthermore, the method only uses matrix operations, which readily lend themselves to parallelization. In this paper, we develop an implementation on graphics hardware and investigate how this approach can accelerate the computations. We achieve a speedup of up to two orders of magnitude over a multicore CPU implementation, which demonstrates that quantum-like methods and acceleration by graphics processing units have great relevance to machine learning.
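
    The quantum-clustering potential that dynamic quantum clustering builds on is itself only a few matrix operations. The sketch below (plain numpy on the CPU, illustrative sigma; not the paper's GPU pipeline) evaluates the Horn-Gottlieb potential V = E + (sigma^2/2) lap(psi)/psi at the data points, whose minima mark cluster centers:

```python
import numpy as np

def quantum_potential(X, sigma=0.5):
    """Evaluate the quantum-clustering potential at the data points X (n, d).

    psi(x) = sum_i exp(-|x - x_i|^2 / (2 sigma^2))   (Parzen wavefunction)
    V(x)   = E + (sigma^2 / 2) * laplacian(psi) / psi
    The shift E is chosen so the deepest minimum sits at 0."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    G = np.exp(-d2 / (2.0 * sigma**2))                   # Gaussian kernel matrix
    psi = G.sum(axis=1)
    dim = X.shape[1]
    # (sigma^2/2) lap(psi) = sum_i (d2_i / (2 sigma^2) - dim/2) * exp(...)
    lap_term = ((d2 / (2.0 * sigma**2) - dim / 2.0) * G).sum(axis=1)
    V = lap_term / psi
    return V - V.min()

# Example: two well-separated blobs in 2D
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
V = quantum_potential(X, sigma=0.6)
print(X[V.argmin()])     # a point near one of the cluster centers
```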

  10. QCD and Light-Front Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brodsky, Stanley J.; de Teramond, Guy F.

    2011-01-10

    AdS/QCD, the correspondence between theories in a dilaton-modified five-dimensional anti-de Sitter space and confining field theories in physical space-time, provides a remarkable semiclassical model for hadron physics. Light-front holography allows hadronic amplitudes in the AdS fifth dimension to be mapped to frame-independent light-front wavefunctions of hadrons in physical space-time. The result is a single-variable light-front Schroedinger equation which determines the eigenspectrum and the light-front wavefunctions of hadrons for general spin and orbital angular momentum. The coordinate z in AdS space is uniquely identified with a Lorentz-invariant coordinate ζ which measures the separation of the constituents within a hadron at equal light-front time and determines the off-shell dynamics of the bound-state wavefunctions as a function of the invariant mass of the constituents. The hadron eigenstates generally have components with different orbital angular momentum; e.g., the proton eigenstate in AdS/QCD with massless quarks has L = 0 and L = 1 light-front Fock components with equal probability. Higher Fock states with extra quark-antiquark pairs also arise. The soft-wall model also predicts the form of the nonperturbative effective coupling and its β-function. The AdS/QCD model can be systematically improved by using its complete orthonormal solutions to diagonalize the full QCD light-front Hamiltonian or by applying the Lippmann-Schwinger method to systematically include QCD interaction terms. Some novel features of QCD are discussed, including the consequences of confinement for quark and gluon condensates. A method for computing the hadronization of quark and gluon jets at the amplitude level is outlined.

  11. Nonlinear Schroedinger Approximations for Partial Differential Equations with Quadratic and Quasilinear Terms

    NASA Astrophysics Data System (ADS)

    Cummings, Patrick

    We consider the approximation of solutions of two complicated, physical systems via the nonlinear Schroedinger equation (NLS). In particular, we discuss the evolution of wave packets and long waves in two physical models. Due to the complicated nature of the equations governing many physical systems and the in-depth knowledge we have for solutions of the nonlinear Schroedinger equation, it is advantageous to use approximation results of this kind to model these physical systems. The approximations are simple enough that we can use them to understand the qualitative and quantitative behavior of the solutions, and by justifying them we can show that the behavior of the approximation captures the behavior of solutions to the original equation, at least for long, but finite time. We first consider a model of the water wave equations which can be approximated by wave packets using the NLS equation. We discuss a new proof that both simplifies and strengthens previous justification results of Schneider and Wayne. Rather than using analytic norms, as was done by Schneider and Wayne, we construct a modified energy functional so that the approximation holds for the full interval of existence of the approximate NLS solution as opposed to a subinterval (as is seen in the analytic case). Furthermore, the proof avoids problems associated with inverting the normal form transform by working with a modified energy functional motivated by Craig and Hunter et al. We then consider the Klein-Gordon-Zakharov system and prove a long wave approximation result. In this case there is a non-trivial resonance that cannot be eliminated via a normal form transform. By combining the normal form transform for small Fourier modes and using analytic norms elsewhere, we can get a justification result on the O(1/ε²) time scale.

  12. The Aharonov-Bohm effect and Tonomura et al. experiments: Rigorous results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballesteros, Miguel; Weder, Ricardo

    The Aharonov-Bohm effect is a fundamental issue in physics. It describes the physically important electromagnetic quantities in quantum mechanics. Its experimental verification constitutes a test of the theory of quantum mechanics itself. The remarkable experiments of Tonomura et al. ['Observation of Aharonov-Bohm effect by electron holography', Phys. Rev. Lett. 48, 1443 (1982) and 'Evidence for Aharonov-Bohm effect with magnetic field completely shielded from electron wave', Phys. Rev. Lett. 56, 792 (1986)] are widely considered as the only experimental evidence of the physical existence of the Aharonov-Bohm effect. Here we give the first rigorous proof that the classical ansatz of Aharonov and Bohm of 1959 ['Significance of electromagnetic potentials in the quantum theory', Phys. Rev. 115, 485 (1959)], that was tested by Tonomura et al., is a good approximation to the exact solution to the Schroedinger equation. This also proves that the electron, that is, represented by the exact solution, is not accelerated, in agreement with the recent experiment of Caprez et al. in 2007 ['Macroscopic test of the Aharonov-Bohm effect', Phys. Rev. Lett. 99, 210401 (2007)], which shows that the results of the Tonomura et al. experiments cannot be explained by the action of a force. Under the assumption that the incoming free electron is a Gaussian wave packet, we estimate the exact solution to the Schroedinger equation for all times. We provide a rigorous, quantitative error bound for the difference in norm between the exact solution and the Aharonov-Bohm ansatz. Our bound is uniform in time. We also prove that on the Gaussian asymptotic state the scattering operator is given by a constant phase shift, up to a quantitative error bound that we provide. Our results show that for intermediate size electron wave packets, smaller than the ones used in the Tonomura et al. experiments, quantum mechanics predicts the results observed by Tonomura et al. with an error bound smaller than 10^{-99}. It would be quite interesting to perform experiments with electron wave packets of intermediate size. Furthermore, we provide a physical interpretation of our error bound.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, S.; Lin, C.C.

    The absorption coefficients for free-free transitions in collisions between slow electrons and neutral oxygen atoms have been calculated for wavelengths in the range 1 to 30 μm and temperatures between 5000 and 50 000 K. The wave functions of the unbound electron are solutions of a one-electron Schroedinger-like continuum equation that includes the Coulomb, exchange, and polarization interactions with the oxygen atom. The polarization potential is determined by a first-principles calculation based on the method of polarized orbitals. Our absorption coefficients are in good agreement with those of John and Williams [J. Quant. Spectrosc. Radiat. Transfer 17, 169 (1977)], but are much smaller than the experimental data of Taylor and Caledonia [J. Quant. Spectrosc. Radiat. Transfer 9, 681 (1969)] and of Kung and Chang [J. Quant. Spectrosc. Radiat. Transfer 16, 579 (1976)].

  14. Ultrafast propagation of Schroedinger waves in absorbing media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delgado, F.; Muga, J.G.; Ruschhaupt, A.

    2004-02-01

    We show that the temporal peak of a quantum wave may arrive at different locations simultaneously in an absorbing medium. The arrival occurs at the lifetime of the particle in the medium from the instant when a point source with a sharp onset is turned on. We also identify other characteristic times. In particular, the 'traversal' or 'Buettiker-Landauer' time (which grows linearly with the distance to the source) for the Hermitian, non-absorbing case is substituted by several characteristic quantities in the absorbing case. The simultaneous arrival due to absorption, unlike the Hartman effect, occurs for carrier frequencies under or above the cutoff, and for arbitrarily large distances. It holds also in a relativistic generalization, but limited by causality. A possible physical realization is proposed by illuminating a two-level atom with a detuned laser.

  15. Question 1: Origin of Life and the Living State

    NASA Astrophysics Data System (ADS)

    Kauffman, Stuart

    2007-10-01

    The aim of this article is to discuss four topics. First, the origin of molecular reproduction. Second, the origin of agency, the capacity of a system to act on its own behalf; agency is a stunning feature of human life and of some wider range of life. Third, a still poorly articulated feature of life noticed by the philosopher Immanuel Kant over 200 years ago: a self-propagating organization of process. We have no theory for this aspect of life, yet it is central to life. Fourth, I will discuss constraints, as in Schroedinger’s aperiodic crystal (Schroedinger E, What is life? The physical aspect of the living cell, 1944), as information, part of the total non-equilibrium union of matter, energy, work, work cycles, constraints, and information that appears to comprise the living state.

  16. Free iterative-complement-interaction calculations of the hydrogen molecule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurokawa, Yusaku; Nakashima, Hiroyuki; Nakatsuji, Hiroshi

    2005-12-15

    The free iterative-complement-interaction (ICI) method based on the scaled Schroedinger equation, proposed previously, has been applied to the calculation of very accurate wave functions of the hydrogen molecule in an analytical expansion form. All the variables were determined with the variational principle by calculating the necessary integrals analytically. The initial wave function and the scaling function were changed to examine the effects on the convergence speed of the ICI calculations. The free ICI wave functions that were generated automatically differ from existing wave functions, and this difference is shown to be physically important. The best wave function reported in this paper appears to be the most accurate in the literature from the variational point of view. The quality of the wave function was examined by calculating the nuclear and electron cusps.

  17. Quantum mechanical streamlines. I - Square potential barrier

    NASA Technical Reports Server (NTRS)

    Hirschfelder, J. O.; Christoph, A. C.; Palke, W. E.

    1974-01-01

    Exact numerical calculations are made for scattering of quantum mechanical particles hitting a square two-dimensional potential barrier (an exact analog of the Goos-Haenchen optical experiments). Quantum mechanical streamlines are plotted and found to be smooth and continuous, to have continuous first derivatives even through the classically forbidden region, and to form quantized vortices around each of the nodal points. A comparison is made between the present numerical calculations and the stationary wave approximation, and good agreement is found for both the Goos-Haenchen shifts and the reflection coefficients. The time-independent Schroedinger equation for real wavefunctions is reduced to solving a nonlinear first-order partial differential equation, leading to a generalization of the Prager-Hirschfelder perturbation scheme. Implications of the hydrodynamical formulation of quantum mechanics are discussed, and cases are cited where quantum and classical mechanical motions are identical.
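
    Streamlines of this kind can be traced from any sampled wavefunction by following the hydrodynamic velocity field v = Im(grad(psi)/psi) (units with hbar = m = 1). A minimal Euler-integration sketch follows; the grid and step sizes are placeholder assumptions, and the paper's square-barrier wavefunction itself is not reproduced here.

```python
import numpy as np

def streamline(psi, x0, y0, dx, steps=500, h=0.05):
    """Trace one quantum streamline through a sampled 2D wavefunction
    psi[i, j] on a uniform grid (spacing dx, x along axis 0), following
    v = Im(grad(psi) / psi)."""
    gx, gy = np.gradient(psi, dx)
    path = [(x0, y0)]
    x, y = x0, y0
    for _ in range(steps):
        i, j = int(round(x / dx)), int(round(y / dx))
        if not (0 <= i < psi.shape[0] and 0 <= j < psi.shape[1]):
            break                       # left the grid
        p = psi[i, j]
        if abs(p) < 1e-12:
            break                       # nodal point: velocity undefined
        x += h * (gx[i, j] / p).imag    # v_x = Im(d_x psi / psi)
        y += h * (gy[i, j] / p).imag
        path.append((x, y))
    return np.array(path)

# Example: a plane wave exp(1.5i x) gives straight streamlines along x
N, dx = 200, 0.1
X, Y = np.meshgrid(np.arange(N) * dx, np.arange(N) * dx, indexing='ij')
path = streamline(np.exp(1.5j * X), x0=5.0, y0=10.0, dx=dx)
```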

  18. Localized basis functions and other computational improvements in variational nonorthogonal basis function methods for quantum mechanical scattering problems involving chemical reactions

    NASA Technical Reports Server (NTRS)

    Schwenke, David W.; Truhlar, Donald G.

    1990-01-01

    The Generalized Newton Variational Principle for 3D quantum mechanical reactive scattering is briefly reviewed. Then three techniques are described which improve the efficiency of the computations. First, the fact that the Hamiltonian is Hermitian is used to reduce the number of integrals computed, and then the properties of localized basis functions are exploited in order to eliminate redundant work in the integral evaluation. A new type of localized basis function with desirable properties is suggested. It is shown how partitioned matrices can be used with localized basis functions to reduce the amount of work required to handle the complex boundary conditions. The new techniques do not introduce any approximations into the calculations, so they may be used to obtain converged solutions of the Schroedinger equation.

  19. Polymer quantization of the Einstein-Rosen wormhole throat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kunstatter, Gabor; Peltola, Ari; Louko, Jorma

    2010-01-15

    We present a polymer quantization of spherically symmetric Einstein gravity in which the polymerized variable is the area of the Einstein-Rosen wormhole throat. In the classical polymer theory, the singularity is replaced by a bounce at a radius that depends on the polymerization scale. In the polymer quantum theory, we show numerically that the area spectrum is evenly spaced and in agreement with a Bohr-Sommerfeld semiclassical estimate, and this spectrum is not qualitatively sensitive to issues of factor ordering or boundary conditions, except in the lowest few eigenvalues. In the limit of small polymerization scale we recover, within the numerical accuracy, the area spectrum obtained from a Schroedinger quantization of the wormhole throat dynamics. The prospects of recovering from the polymer throat theory a full quantum-corrected spacetime are discussed.

  20. Students' Emergent Articulations of Statistical Models and Modeling in Making Informal Statistical Inferences

    ERIC Educational Resources Information Center

    Braham, Hana Manor; Ben-Zvi, Dani

    2017-01-01

    A fundamental aspect of statistical inference is representation of real-world data using statistical models. This article analyzes students' articulations of statistical models and modeling during their first steps in making informal statistical inferences. An integrated modeling approach (IMA) was designed and implemented to help students…

  1. Online Statistical Modeling (Regression Analysis) for Independent Responses

    NASA Astrophysics Data System (ADS)

    Made Tirta, I.; Anggraeni, Dian; Pandutama, Martinus

    2017-06-01

    Regression analysis (statistical modelling) is among the statistical methods most frequently needed in analyzing quantitative data, especially to model the relationship between response and explanatory variables. Nowadays, statistical models have been developed in various directions to model various types of complex relationships in data. Rich varieties of advanced and recent statistical models are mostly available in open source software (one of them is R). However, these advanced statistical models are not very friendly to novice R users, since they are based on programming scripts or a command line interface. Our research aims to develop a web interface (based on R and shiny), so that the most recent and advanced statistical models are readily available, accessible, and applicable on the web. We have previously made interfaces in the form of e-tutorials for several modern and advanced statistical models in R, especially for independent responses (including linear models/LM, generalized linear models/GLM, generalized additive models/GAM, and generalized additive models for location, scale and shape/GAMLSS). In this research we unified them in the form of data analysis, including models using computer-intensive statistics (bootstrap and Markov chain Monte Carlo/MCMC). All are readily accessible in our online Virtual Statistics Laboratory. The web interface makes statistical modeling easier to apply and easier to compare in order to find the most appropriate model for the data.
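
    As a rough illustration of the LM-versus-GLM comparison such an interface automates, here is an analogous sketch in Python with statsmodels (rather than the paper's R/shiny stack), on simulated count data:

```python
import numpy as np
import statsmodels.api as sm

# Simulated count data with a log-linear mean
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0, 200)
y = rng.poisson(np.exp(0.5 + 1.2 * x))

X = sm.add_constant(x)
lm = sm.OLS(y, X).fit()                                   # ordinary linear model
glm = sm.GLM(y, X, family=sm.families.Poisson()).fit()    # Poisson GLM

print(glm.params)         # close to (0.5, 1.2) on the log scale
print(lm.aic, glm.aic)    # information criteria for comparing the fits
```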

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caruso, M.; Fanchiotti, H.; Garcia Canal, C.A.

    An equivalence between the Schroedinger dynamics of a quantum system with a finite number of basis states and a classical dynamics is presented. The equivalence is an isomorphism that connects in a univocal way both dynamical systems. We treat the particular case of neutral kaons and find a class of electric networks uniquely related to the kaon system, finding the complete map between the matrix elements of the effective Hamiltonian of the kaons and those elements of the classical dynamics of the networks. As a consequence, the relevant ε parameter that measures CP violation in the kaon system is completely determined in terms of network parameters. - Highlights: > We provide a formal equivalence between classical and quantum dynamics. > We make use of the decomplexification concept. > Neutral kaon systems can be represented by electric circuits. > CP symmetry violation can be taken into account by non-reciprocity. > Non-reciprocity is represented by gyrators.

  3. Computational chemistry research

    NASA Technical Reports Server (NTRS)

    Levin, Eugene

    1987-01-01

    Task 41 is composed of two parts: (1) analysis and design studies related to the Numerical Aerodynamic Simulation (NAS) Extended Operating Configuration (EOC) and (2) computational chemistry. During the first half of 1987, Dr. Levin served as a member of an advanced system planning team to establish the requirements, goals, and principal technical characteristics of the NAS EOC. A paper entitled 'Scaling of Data Communications for an Advanced Supercomputer Network' is included. The high temperature transport properties (such as viscosity, thermal conductivity, etc.) of the major constituents of air (oxygen and nitrogen) were correctly determined. The results of prior ab initio computer solutions of the Schroedinger equation were combined with the best available experimental data to obtain complete interaction potentials for both neutral and ion-atom collision partners. These potentials were then used in a computer program to evaluate the collision cross-sections from which the transport properties could be determined. A paper entitled 'High Temperature Transport Properties of Air' is included.

  4. Phase-Covariant Cloning and EPR Correlations in Entangled Macroscopic Quantum Systems

    NASA Astrophysics Data System (ADS)

    de Martini, Francesco; Sciarrino, Fabio

    2007-03-01

    Theoretical and experimental results on the Quantum Injected Optical Parametric Amplification (QI-OPA) of optical qubits in the high-gain regime are reported. The large size of the gain parameter in the collinear configuration, g = 4.5, allows the generation of EPR nonlocally correlated bunches containing about 4000 photons. The entanglement of the related Schroedinger cat state (SCS) is demonstrated, as well as the establishment of phase-covariant quantum cloning. The cloning "fidelity" has been found to match the theoretical results. According to the original 1935 definition of the SCS, the overall apparatus establishes for the first time nonlocal correlations between a microscopic spin (qubit) and a high-J angular momentum, i.e., a mesoscopic multiparticle system close to the classical limit. The results of the first experimental realization of the Herbert proposal for superluminal communication via nonlocality will be presented.

  5. Few-cycle attosecond pulse chirp effects on asymmetries in ionized electron momentum distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng Liangyou; Tan Fang; Gong Qihuang

    2009-07-15

    The momentum distributions of electrons ionized from H atoms by chirped few-cycle attosecond pulses are investigated by numerically solving the time-dependent Schroedinger equation. The central carrier frequency of the pulse is chosen to be 25 eV, which is well above the ionization threshold. The asymmetry (or difference) in the yield of electrons ionized along and opposite to the direction of linear laser polarization is found to be very sensitive to the pulse chirp (for pulses with fixed carrier-envelope phase), both for a fixed electron energy and for the energy-integrated yield. In particular, the larger the pulse chirp, the larger the number of times the asymmetry changes sign as a function of ionized electron energy. For a fixed chirp, the ionized electron asymmetry is found to be sensitive also to the carrier-envelope phase of the few-cycle pulse.
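
    Chirped pulses of this kind are typically parametrized as a Gaussian envelope with a linear chirp added to the carrier phase. A hedged sketch in atomic units; the carrier omega0 = 0.92 a.u. corresponds to the 25 eV quoted in the abstract, while E0, tau, chirp, and cep are placeholder values:

```python
import numpy as np

def chirped_pulse(t, E0=0.05, omega0=0.92, tau=3.0, chirp=0.05, cep=0.0):
    """E(t) = E0 exp(-t^2 / (2 tau^2)) cos(omega0 t + chirp t^2 + cep),
    a linearly chirped Gaussian pulse in atomic units."""
    return E0 * np.exp(-t**2 / (2.0 * tau**2)) * np.cos(
        omega0 * t + chirp * t**2 + cep)

t = np.linspace(-15.0, 15.0, 2001)   # roughly +/- 360 attoseconds in a.u.
E = chirped_pulse(t, chirp=0.05)
```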

  6. Identifying decohering paths in closed quantum systems

    NASA Technical Reports Server (NTRS)

    Albrecht, Andreas

    1990-01-01

    A specific proposal is discussed for how to identify decohering paths in a wavefunction of the universe. The emphasis is on determining the correlations among subsystems and then considering how these correlations evolve. The proposal is similar to earlier ideas of Schroedinger and of Zeh, but in other ways it is closer to the decoherence functional of Griffiths, Omnes, and Gell-Mann and Hartle. There are interesting differences with each of these which are discussed. Once a given coarse-graining is chosen, the candidate paths are fixed in this scheme, and a single well defined number measures the degree of decoherence for each path. The normal probability sum rules are exactly obeyed (instantaneously) by these paths regardless of the level of decoherence. Also briefly discussed is how one might quantify some other aspects of classicality. The important role that concrete calculations play in testing this and other proposals is stressed.

  7. Wavepacket propagation using time-sliced semiclassical initial value methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wallace, Brett B.; Reimers, Jeffrey R.

    2004-12-22

    A new semiclassical initial value representation (SC-IVR) propagator, and a SC-IVR propagator originally introduced by Kay [J. Chem. Phys. 100, 4432 (1994)], are investigated for use in the split-operator method for solving the time-dependent Schroedinger equation. It is shown that the SC-IVR propagators can be derived from a procedure involving modified Filinov filtering of the Van Vleck expression for the semiclassical propagator. The two SC-IVR propagators have been selected for investigation because they avoid the need to perform a coherent-state basis-set expansion that is necessary in other time-slicing propagation schemes. An efficient scheme for solving the propagators is introduced and can be considered a semiclassical form of the effective propagators of Makri [Chem. Phys. Lett. 159, 489 (1989)]. Results from applications to one-dimensional, two-dimensional, and three-dimensional Hamiltonians for a double-well potential are presented.

  8. Vibrational Frequencies and Spectroscopic Constants for 1(sup 3)A' HNC and 1(sup 3)A' HOC+ from High-Accuracy Quartic Force Fields

    NASA Technical Reports Server (NTRS)

    Fortenberry, Ryan C.; Crawford, T. Daniel; Lee, Timothy J.

    2014-01-01

    The spectroscopic constants and vibrational frequencies for the 1(sup 3)A' states of HNC, DNC, HOC+, and DOC+ are computed and discussed in this work. The reliable CcCR quartic force field based on high-level coupled cluster ab initio quantum chemical computations is exclusively utilized to provide the anharmonic potential. Then, second order vibrational perturbation theory and vibrational configuration interaction methods are employed to treat the nuclear Schroedinger equation. Second-order perturbation theory is also employed to provide spectroscopic data for all molecules examined. The relationship between these molecules and the corresponding 1(sup 3)A' HCN and HCO+ isomers is further developed here. These data are applicable to laboratory studies involving formation of HNC and HOC+ as well as astronomical observations of chemically active astrophysical environments.

  9. Bright and dark solitons in the normal dispersion regime of inhomogeneous optical fibers: Soliton interaction and soliton control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu Wenjun; Tian Bo, E-mail: tian.bupt@yahoo.com.c; State Key Laboratory of Software Development Environment, Beijing University of Aeronautics and Astronautics, Beijing 100191

    2010-08-15

    Symbolically investigated in this paper is a nonlinear Schroedinger equation with varying dispersion and nonlinearity for the propagation of optical pulses in the normal dispersion regime of inhomogeneous optical fibers. With the aid of the Hirota method, analytic one- and two-soliton solutions are obtained. Relevant properties of physical and optical interest are illustrated. Different from previous results, both bright and dark solitons are hereby derived in the normal dispersion regime of inhomogeneous optical fibers. Moreover, different dispersion profiles of dispersion-decreasing fibers can be used to realize soliton control. Finally, soliton interaction is discussed, with soliton control confirmed to have no influence on the interaction. The results might be of value for the study of signal generators and soliton control.

  10. Kinetic treatment of nonlinear magnetized plasma motions - General geometry and parallel waves

    NASA Technical Reports Server (NTRS)

    Khabibrakhmanov, I. KH.; Galinskii, V. L.; Verheest, F.

    1992-01-01

    The expansion of kinetic equations in the limit of a strong magnetic field is presented. This gives a natural description of the motions of magnetized plasmas, which are slow compared to the particle gyroperiods and gyroradii. Although the approach is 3D, this very general result is used only to focus on the parallel propagation of nonlinear Alfven waves. A derivative nonlinear Schroedinger-like equation is obtained. Two new terms occur compared to earlier treatments: a nonlinear term proportional to the heat flux along the magnetic field line and a higher-order dispersive term. It is shown that the kinetic description avoids the singularities occurring in magnetohydrodynamic or multifluid approaches, which correspond to the degenerate case of sound speeds equal to the Alfven speed, and that parallel heat fluxes cannot be neglected, not even in the case of low parallel plasma beta. A truly stationary soliton solution is derived.

  11. Adiabatic Berry phase in an atom-molecule conversion system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu Libin; Center for Applied Physics and Technology, Peking University, Beijing 100084; Liu Jie, E-mail: liu_jie@iapcm.ac.c

    2010-11-15

    We investigate the Berry phase of adiabatic quantum evolution in an atom-molecule conversion system that is governed by a nonlinear Schroedinger equation. We find that the Berry phase consists of two parts: the usual Berry connection term and a novel term arising from the nonlinearity brought forth by the atom-molecule coupling. The total geometric phase can still be viewed as the flux of the magnetic field of a monopole through the surface enclosed by a closed path in parameter space. The charge of the monopole, however, is found to be one third of the elementary charge of the usual quantized monopole. We also derive the classical Hannay angle of a geometric nature associated with the adiabatic evolution. It exactly equals the negative of the Berry phase, indicating a novel connection between the Berry phase and the Hannay angle, in contrast to the usual derivative form.

  12. Carrier-envelope phase-dependent field-free molecular orientation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shu Chuancun; Yuan Kaijun; Hu Wenhui

    2009-07-15

    We present a strategy to achieve carrier-envelope phase-dependent field-free molecular orientation with the use of carrier-envelope phase (CEP) stabilization and asymmetric few-cycle terahertz (THz) laser pulses. The calculations are performed on the LiH molecule by an exact solution of the full time-dependent Schroedinger equation including both the vibrational and the rotational degrees of freedom. Our calculations show that an efficient field-free molecular orientation can be obtained even at considerable temperatures. Moreover, we find a simple dependence of the field-free orientation on the CEP, which implies that the CEP becomes an important parameter for the control of molecular orientation. More importantly, the realization of this scenario is appealing because intense THz pulses with durations as short as a few optical cycles are available as a research tool.

  13. Life is physics and chemistry and communication.

    PubMed

    Witzany, Guenther

    2015-04-01

    Manfred Eigen extended Erwin Schroedinger's concept of "life is physics and chemistry" through the introduction of information theory and cybernetic systems theory into "life is physics and chemistry and information." Based on this assumption, Eigen developed the concepts of quasispecies and hypercycles, which have been dominant in molecular biology and virology ever since. He insisted that the genetic code is not just used metaphorically: it represents a real natural language. However, the basics of scientific knowledge changed dramatically within the second half of the 20th century. Unfortunately, Eigen ignored the results of the philosophy of science discourse on essential features of natural languages and codes: a natural language or code emerges from populations of living agents that communicate. This contribution will look at some of the highlights of this historical development and the results relevant for biological theories about life. © 2014 New York Academy of Sciences.

  14. Number-phase minimum-uncertainty state with reduced number uncertainty in a Kerr nonlinear interferometer

    NASA Astrophysics Data System (ADS)

    Kitagawa, M.; Yamamoto, Y.

    1987-11-01

    An alternative scheme for generating amplitude-squeezed states of photons based on unitary evolution which can properly be described by quantum mechanics is presented. This scheme is a nonlinear Mach-Zehnder interferometer containing an optical Kerr medium. The quasi-probability density (QPD) and photon-number distribution of the output field are calculated, and it is demonstrated that the reduced photon-number uncertainty and enhanced phase uncertainty maintain the minimum-uncertainty product. A self-phase-modulation of the single-mode quantized field in the Kerr medium is described based on localized operators. The spatial evolution of the state is demonstrated by QPD in the Schroedinger picture. It is shown that photon-number variance can be reduced to a level far below the limit for an ordinary squeezed state, and that the state prepared using this scheme remains a number-phase minimum-uncertainty state until the maximum reduction of number fluctuations is surpassed.

  15. An R2 statistic for fixed effects in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver

    2008-12-20

    Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R(2) statistic in the linear univariate model naturally create great interest in extending it to the linear mixed model. We define and describe how to compute a model R(2) statistic for the linear mixed model by using only a single model. The proposed R(2) statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R(2) statistic arises as a 1-1 function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R(2) statistic leads immediately to a natural definition of a partial R(2) statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R(2), a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
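
    Since the statistic is a 1-1 function of the F statistic for the all-fixed-effects test, it can be sketched in a few lines. The sketch below assumes the familiar relation between R(2) and F with q numerator and nu denominator degrees of freedom; the paper's estimator is in this spirit, though its exact definition may differ:

```python
def r2_from_f(f_stat: float, q: int, nu: float) -> float:
    """Map an F statistic with q numerator and nu denominator degrees of
    freedom to an R^2-type measure via the classical 1-1 relation
    R^2 = qF / (qF + nu). Assumed form, not the paper's exact estimator."""
    qf = q * f_stat
    return qf / (qf + nu)

# Example: F = 5.2 testing q = 3 fixed effects with nu = 120 denominator df
print(r2_from_f(5.2, 3, 120))  # ~0.115
```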

  16. Comparing and combining process-based crop models and statistical models with some implications for climate change

    NASA Astrophysics Data System (ADS)

    Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram

    2017-09-01

    We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.

  17. Development of a statistical model for cervical cancer cell death with irreversible electroporation in vitro.

    PubMed

    Yang, Yongji; Moser, Michael A J; Zhang, Edwin; Zhang, Wenjun; Zhang, Bing

    2018-01-01

    The aim of this study was to develop a statistical model for cell death by irreversible electroporation (IRE) and to show that the statistical model is more accurate than the electric field threshold model in the literature, using cervical cancer cells in vitro. The HeLa cell line was cultured and treated with different IRE protocols in order to obtain data for modeling the statistical relationship between cell death and pulse-setting parameters. In total, 340 in vitro experiments were performed with a commercial IRE pulse system, including a pulse generator and an electric cuvette. The trypan blue staining technique was used to evaluate cell death after 4 hours of incubation following IRE treatment. The Peleg-Fermi model was used in the study to build the statistical relationship using the cell viability data obtained from the in vitro experiments. A finite element model of IRE for the electric field distribution was also built. Comparison of ablation zones between the statistical model and the electric threshold model (drawn from the finite element model) was used to show the accuracy of the proposed statistical model in the description of the ablation zone and its applicability for different pulse-setting parameters. The statistical models describing the relationships between HeLa cell death and pulse length and the number of pulses, respectively, were built. The values of the curve-fitting parameters were obtained using the Peleg-Fermi model for the treatment of cervical cancer with IRE. The difference in the ablation zone between the statistical model and the electric threshold model was also illustrated to show the accuracy of the proposed statistical model in the representation of the ablation zone in IRE. This study concluded that: (1) the proposed statistical model accurately described the ablation zone of IRE with cervical cancer cells and was more accurate than the electric field model; (2) the proposed statistical model was able to estimate the value of the electric field threshold for the computer simulation of IRE in the treatment of cervical cancer; and (3) the proposed statistical model was able to express the change in ablation zone with the change in pulse-setting parameters.

  18. Stan: Statistical inference

    NASA Astrophysics Data System (ADS)

    Stan Development Team

    2018-01-01

    Stan facilitates statistical inference at the frontiers of applied statistics and provides both a modeling language for specifying complex statistical models and a library of statistical algorithms for computing inferences with those models. These components are exposed through interfaces in environments such as R, Python, and the command line.

  19. A two-component rain model for the prediction of attenuation statistics

    NASA Technical Reports Server (NTRS)

    Crane, R. K.

    1982-01-01

    A two-component rain model has been developed for calculating attenuation statistics. In contrast to most other attenuation prediction models, the two-component model calculates the occurrence probability for volume cells or debris attenuation events. The model performed significantly better than the International Radio Consultative Committee model when used for predictions on earth-satellite paths. It is expected that the model will have applications in modeling the joint statistics required for space diversity system design, the statistics of interference due to rain scatter at attenuating frequencies, and the duration statistics for attenuation events.
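
    As a schematic of the two-component idea (a "volume cell" term plus a "debris" term, each with its own occurrence probability), an exceedance curve can be written as a two-term mixture. The exponential and lognormal forms and all parameter values below are illustrative assumptions, not Crane's fitted coefficients:

```python
import numpy as np
from scipy.stats import norm

def exceedance(a, p_cell, a_cell, p_debris, a_debris, sigma_debris):
    """P(attenuation > a dB) as cell + debris components (schematic).
    Cell: exponential tail; debris: lognormal tail (assumed forms)."""
    cell = p_cell * np.exp(-a / a_cell)
    debris = p_debris * norm.sf((np.log(a) - np.log(a_debris)) / sigma_debris)
    return cell + debris

a = np.array([1.0, 3.0, 10.0, 30.0])            # attenuation thresholds, dB
print(exceedance(a, 0.02, 2.0, 0.1, 1.0, 1.2))  # illustrative parameters only
```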

  20. 12 CFR Appendix A to Subpart A of... - Appendix A to Subpart A of Part 327

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... pricing multipliers are derived from: • A model (the Statistical Model) that estimates the probability..., which is four basis points higher than the minimum rate. II. The Statistical Model The Statistical Model... to 1997. As a result, and as described in Table A.1, the Statistical Model is estimated using a...

  1. A global goodness-of-fit statistic for Cox regression models.

    PubMed

    Parzen, M; Lipsitz, S R

    1999-06-01

    In this paper, a global goodness-of-fit test statistic for a Cox regression model, which has an approximate chi-squared distribution when the model has been correctly specified, is proposed. Our goodness-of-fit statistic is global and has power to detect whether interactions or higher-order powers of covariates in the model are needed. The proposed statistic is similar to the Hosmer and Lemeshow (1980, Communications in Statistics A10, 1043-1069) goodness-of-fit statistic for binary data as well as Schoenfeld's (1980, Biometrika 67, 145-153) statistic for the Cox model. The methods are illustrated using data from a Mayo Clinic trial in primary biliary cirrhosis of the liver (Fleming and Harrington, 1991, Counting Processes and Survival Analysis), in which the outcome is the time until liver transplantation or death. There are 17 possible covariates. Two Cox proportional hazards models are fit to the data, and the proposed goodness-of-fit statistic is applied to the fitted models.

  2. Central Limit Theorem for Exponentially Quasi-local Statistics of Spin Models on Cayley Graphs

    NASA Astrophysics Data System (ADS)

    Reddy, Tulasi Ram; Vadlamani, Sreekar; Yogeshwaran, D.

    2018-04-01

    Central limit theorems for linear statistics of lattice random fields (including spin models) are usually proven under suitable mixing conditions or quasi-associativity. Many interesting examples of spin models do not satisfy mixing conditions, and on the other hand, it does not seem easy to show central limit theorems for local statistics via quasi-associativity. In this work, we prove general central limit theorems for local statistics and exponentially quasi-local statistics of spin models on discrete Cayley graphs with polynomial growth. Further, we supplement these results by proving similar central limit theorems for random fields on discrete Cayley graphs taking values in a countable space, but under the stronger assumptions of α-mixing (for local statistics) and exponential α-mixing (for exponentially quasi-local statistics). All our central limit theorems assume a suitable variance lower bound, like many others in the literature. We illustrate our general central limit theorem with specific examples of lattice spin models and statistics arising in computational topology, statistical physics and random networks. Examples of clustering spin models include quasi-associated spin models with fast decaying covariances like the off-critical Ising model, level sets of Gaussian random fields with fast decaying covariances like the massive Gaussian free field, and determinantal point processes with fast decaying kernels. Examples of local statistics include intrinsic volumes, face counts, and component counts of random cubical complexes, while exponentially quasi-local statistics include nearest neighbour distances in spin models and Betti numbers of sub-critical random cubical complexes.

  3. Visualization of the variability of 3D statistical shape models by animation.

    PubMed

    Lamecker, Hans; Seebass, Martin; Lange, Thomas; Hege, Hans-Christian; Deuflhard, Peter

    2004-01-01

    Models of the 3D shape of anatomical objects and the knowledge about their statistical variability are of great benefit in many computer-assisted medical applications like image analysis, therapy, or surgery planning. Statistical models of shape have successfully been applied to automate the task of image segmentation. The generation of 3D statistical shape models requires the identification of corresponding points on two shapes. This remains a difficult problem, especially for shapes of complicated topology. In order to interpret and validate variations encoded in a statistical shape model, visual inspection is of great importance. This work describes the generation and interpretation of statistical shape models of the liver and the pelvic bone.

  4. Hyperparameterization of soil moisture statistical models for North America with Ensemble Learning Models (Elm)

    NASA Astrophysics Data System (ADS)

    Steinberg, P. D.; Brener, G.; Duffy, D.; Nearing, G. S.; Pelissier, C.

    2017-12-01

    Hyperparameterization of statistical models, i.e., automated model scoring and selection via evolutionary algorithms, grid searches, and randomized searches, can improve forecast model skill by reducing errors associated with model parameterization, model structure, and statistical properties of training data. Ensemble Learning Models (Elm), and the related Earthio package, provide a flexible interface for automating the selection of parameters and model structure for machine learning models common in climate science and land cover classification, offering convenient tools for loading NetCDF, HDF, Grib, or GeoTiff files, decomposition methods like PCA and manifold learning, and parallel training and prediction with unsupervised and supervised classification, clustering, and regression estimators. Continuum Analytics is using Elm to experiment with statistical soil moisture forecasting based on meteorological forcing data from NASA's North American Land Data Assimilation System (NLDAS). There, Elm uses the NSGA-2 multiobjective optimization algorithm to optimize statistical preprocessing of forcing data and improve goodness-of-fit for statistical models (i.e., feature engineering). This presentation will discuss Elm and its components, including dask (distributed task scheduling), xarray (data structures for n-dimensional arrays), and scikit-learn (statistical preprocessing, clustering, classification, regression), and it will show how NSGA-2 is being used to automate selection of soil moisture forecast statistical models for North America.

  5. Predicting Statistical Response and Extreme Events in Uncertainty Quantification through Reduced-Order Models

    NASA Astrophysics Data System (ADS)

    Qi, D.; Majda, A.

    2017-12-01

    A low-dimensional reduced-order statistical closure model is developed for quantifying the uncertainty in statistical sensitivity and intermittency in principal model directions with largest variability in high-dimensional turbulent systems and turbulent transport models. Imperfect model sensitivity is improved through a recent mathematical strategy for calibrating model errors in a training phase, where information theory and linear statistical response theory are combined in a systematic fashion to achieve optimal model performance. The idea in the reduced-order method comes from a self-consistent mathematical framework for general systems with quadratic nonlinearity, where crucial high-order statistics are approximated by a systematic model calibration procedure. Model efficiency is improved through additional damping and noise corrections to replace the expensive energy-conserving nonlinear interactions. Model errors due to the imperfect nonlinear approximation are corrected by tuning the model parameters using linear response theory with an information metric in a training phase before prediction. A statistical energy principle is adopted to introduce a global scaling factor in characterizing the higher-order moments in a consistent way to improve model sensitivity. Stringent models of barotropic and baroclinic turbulence are used to demonstrate the feasibility of the reduced-order methods. Principal statistical responses in mean and variance can be captured by the reduced-order models with accuracy and efficiency. In addition, the reduced-order models are used to capture a crucial passive tracer field that is advected by the baroclinic turbulent flow. It is demonstrated that crucial statistical quantities, such as the tracer spectrum and the fat tails in the tracer probability density functions at the most important large scales, can be captured efficiently and accurately using the reduced-order tracer model in various dynamical regimes of the flow field with distinct statistical structures.

  6. Neural Systems with Numerically Matched Input-Output Statistic: Isotonic Bivariate Statistical Modeling

    PubMed Central

    Fiori, Simone

    2007-01-01

    Bivariate statistical modeling from incomplete data is a useful statistical tool that allows one to discover the model underlying two data sets when the data in the two sets correspond neither in size nor in ordering. Such a situation may occur when the sizes of the two data sets do not match (i.e., there are “holes” in the data) or when the data sets have been acquired independently. Also, statistical modeling is useful when the amount of available data is enough to show relevant statistical features of the phenomenon underlying the data. We propose to tackle the problem of statistical modeling via a neural (nonlinear) system that is able to match its input-output statistic to the statistic of the available data sets. A key point of the new implementation proposed here is that it is based on look-up-table (LUT) neural systems, which guarantee a computationally advantageous way of implementing neural systems. A number of numerical experiments, performed on both synthetic and real-world data sets, illustrate the features of the proposed modeling procedure. PMID:18566641
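
    The core LUT idea can be sketched with empirical quantiles: a monotone (isotonic) table implementing y = F_Y^{-1}(F_X(x)) makes the transformed input statistic match the target statistic even when the two samples differ in size and ordering. This is a minimal sketch of the statistic-matching idea under those assumptions, not the paper's learning rule:

```python
import numpy as np

# Match an input statistic to a target statistic with a monotone look-up
# table (LUT) built from empirical quantiles. Synthetic data; sizes and
# orderings of the two samples deliberately do not correspond.

rng = np.random.default_rng(0)
x_data = rng.normal(0.0, 1.0, 5000)        # input sample
y_data = rng.exponential(2.0, 3000)        # target sample (different size)

levels = np.linspace(0.0, 1.0, 257)        # LUT nodes on the probability axis
x_nodes = np.quantile(x_data, levels)      # F_X^{-1} at the nodes
y_nodes = np.quantile(y_data, levels)      # F_Y^{-1} at the same nodes

def lut(x):
    """Isotonic LUT transform: interpolate x -> y along matched quantiles."""
    return np.interp(x, x_nodes, y_nodes)

z = lut(x_data)
print(np.mean(z), np.mean(y_data))         # transformed stats track the target
```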

  7. Physics-based statistical model and simulation method of RF propagation in urban environments

    DOEpatents

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

    A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.

  8. A comparison of large-scale climate signals and the North American Multi-Model Ensemble (NMME) for drought prediction in China

    NASA Astrophysics Data System (ADS)

    Xu, Lei; Chen, Nengcheng; Zhang, Xiang

    2018-02-01

    Drought is an extreme natural disaster that can lead to huge socioeconomic losses. Drought prediction months ahead is helpful for early drought warning and preparations. In this study, we developed a statistical model, two weighted dynamic models and a statistical-dynamic (hybrid) model for 1-6 month lead drought prediction in China. Specifically, the statistical component weights climate signals by support vector regression (SVR); the dynamic components consist of the ensemble mean (EM) and Bayesian model averaging (BMA) of the North American Multi-Model Ensemble (NMME) climate models; and the hybrid part combines the statistical and dynamic components by assigning weights based on their historical performances. The results indicate that the statistical and hybrid models show better rainfall predictions than the NMME-EM and NMME-BMA models, which have good predictability only in southern China. In the 2011 China winter-spring drought event, the statistical model predicted the spatial extent and severity of drought nationwide well, although the severity was underestimated in the mid-lower reaches of the Yangtze River (MLRYR) region. The NMME-EM and NMME-BMA models largely overestimated rainfall in northern and western China in the 2011 drought. In the 2013 China summer drought, the NMME-EM model forecasted the drought extent and severity in eastern China well, while the statistical and hybrid models falsely detected a negative precipitation anomaly (NPA) in some areas. Model ensembles such as multiple statistical approaches, multiple dynamic models or multiple hybrid models for drought prediction are highlighted. These conclusions may be helpful for drought prediction and early drought warning in China.
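
    The hybrid weighting step can be sketched generically. The paper says only that weights are based on historical performance; inverse RMSE is one common concrete choice and is an assumption here, as are all arrays below (placeholders, not NMME or SVR output):

```python
import numpy as np

# Combine member forecasts with weights proportional to inverse hindcast
# RMSE. Schematic of the "weights from historical performance" idea only.

def inv_rmse_weights(forecasts, observed):
    """forecasts: (n_models, n_times) hindcasts; observed: (n_times,)."""
    rmse = np.sqrt(np.mean((forecasts - observed) ** 2, axis=1))
    w = 1.0 / rmse
    return w / w.sum()

hindcasts = np.array([[1.0, 2.0, 0.5],       # e.g., statistical component
                      [1.2, 1.8, 0.7]])      # e.g., dynamic component
obs = np.array([1.1, 1.9, 0.6])
w = inv_rmse_weights(hindcasts, obs)

new_forecasts = np.array([0.9, 1.0])         # each member's new forecast
hybrid = np.dot(w, new_forecasts)            # skill-weighted combination
print(w, hybrid)
```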

  9. Some steps toward a central theory of ecosystem dynamics.

    PubMed

    Ulanowicz, Robert E

    2003-12-01

    Ecology is said by many to suffer for want of a central theory, such as Newton's laws of motion provide for classical mechanics or Schroedinger's wave equation provides for quantum physics. From among a plurality of contending laws to govern ecosystem behavior, the principle of increasing ascendency shows some early promise of being able to address the major questions asked of a theory of ecosystems, including "How do organisms come to be distributed in time and space?", "What accounts for the log-normal distribution of species numbers?", and "How is the diversity of ecosystems related to their stability, resilience, and persistence?" While some progress has been made in applying the concept of ascendency to the first issue, more work is needed to articulate exactly how it relates to the latter two. Accordingly, seven theoretical tasks are suggested that could help to establish these connections and to promote further consideration of the ascendency principle as the kernel of a theory of ecosystems.

  10. Calculation and manipulation of the chirp rates of high-order harmonics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murakami, M.; Mauritsson, J.; Schafer, K.J.

    2005-01-01

    We calculate the linear chirp rates of high-order harmonics in argon, generated by intense, 810 nm laser pulses, and explore the dependence of the chirp rate on harmonic order, driving laser intensity, and pulse duration. By using a time-frequency representation of the harmonic fields we can identify several different linear chirp contributions to the plateau harmonics. Our results, which are based on numerical integration of the time-dependent Schroedinger equation, are in good agreement with the adiabatic predictions of the strong field approximation for the chirp rates. Extending the theoretical analysis in the recent paper by Mauritsson et al. [Phys. Rev. A 70, 021801(R) (2004)], we also manipulate the chirp rates of the harmonics by adding a chirp to the driving pulse. We show that the chirp rate for harmonic q is given by the sum of the intrinsic chirp rate, which is determined by the new duration and peak intensity of the chirped driving pulse, and q times the external chirp rate.

  11. Tunneling dynamics in relativistic and nonrelativistic wave equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delgado, F.; Muga, J. G.; Ruschhaupt, A.

    2003-09-01

    We obtain the solution of a relativistic wave equation and compare it with the solution of the Schroedinger equation for a source with a sharp onset and excitation frequencies below cutoff. A scaling of position and time reduces all the (below cutoff) nonrelativistic solutions to a single case, but no such simplification holds for the relativistic equation, so that qualitatively different "shallow" and "deep" tunneling regimes may be identified relativistically. The nonrelativistic forerunner at a position beyond the penetration length of the asymptotic stationary wave does not tunnel; nevertheless, it arrives at the traversal (semiclassical or Buettiker-Landauer) time τ. The corresponding relativistic forerunner is more complex: it oscillates due to the interference between two saddle-point contributions and may be characterized by two times for the arrival of the maxima of the lower and upper envelopes. There is in addition an earlier relativistic forerunner, right after the causal front, which does tunnel. Within the penetration length, tunneling is more robust for the precursors of the relativistic equation.

  12. Alternative descriptions of wave and particle aspects of the harmonic oscillator

    NASA Technical Reports Server (NTRS)

    Schuch, Dieter

    1993-01-01

    The dynamical properties of the wave and particle aspects of the harmonic oscillator can be studied with the help of the time-dependent Schroedinger equation (SE). Especially the time-dependence of maximum and width of Gaussian wave packet solutions allow to show the evolution and connections of those two complementary aspects. The investigation of the relations between the equations describing wave and particle aspects leads to an alternative description of the considered systems. This can be achieved by means of a Newtonian equation for a complex variable in connection with a conservation law for a nonclassical angular momentum-type quantity. With the help of this complex variable, it is also possible to develop a Hamiltonian formalism for the wave aspect contained in the SE, which allows to describe the dynamics of the position and momentum uncertainties. In this case the Hamiltonian function is equivalent to the difference between the mean value of the Hamiltonian operator and the classical Hamiltonian function.

  13. Entanglement in Quantum-Classical Hybrid

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2011-01-01

    It is noted that the phenomenon of entanglement is not a prerogative of quantum systems, but also occurs in other, non-classical systems such as quantum-classical hybrids, and covers the concept of entanglement as a special type of global constraint imposed upon a broad class of dynamical systems. Application of hybrid systems for physics of life, as well as for quantum-inspired computing, has been outlined. In representing the Schroedinger equation in the Madelung form, there is feedback from the Liouville equation to the Hamilton-Jacobi equation in the form of the quantum potential. Preserving the same topology, the innovators replaced the quantum potential with other types of feedback, and investigated the property of these hybrid systems. A function of probability density has been introduced. Non-locality associated with a global geometrical constraint that leads to an entanglement effect was demonstrated. Despite such a quantum like characteristic, the hybrid can be of classical scale and all the measurements can be performed classically. This new emergence of entanglement sheds light on the concept of non-locality in physics.
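
    For reference, the Madelung form mentioned above follows from substituting ψ = √ρ e^{iS/ħ} into the Schroedinger equation, which splits it into a continuity (Liouville-type) equation and a Hamilton-Jacobi equation coupled through the quantum potential. This is the standard textbook decomposition that the hybrid construction starts from, quoted here for context:

```latex
% Madelung decomposition of the Schroedinger equation (standard result).
% The quantum potential Q is the feedback term that the hybrid systems
% described above replace with other functionals of the density rho.
\[
\frac{\partial \rho}{\partial t}
  + \nabla \cdot \!\left( \rho\, \frac{\nabla S}{m} \right) = 0,
\qquad
\frac{\partial S}{\partial t}
  + \frac{|\nabla S|^{2}}{2m} + V + Q = 0,
\qquad
Q = -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2}\sqrt{\rho}}{\sqrt{\rho}} .
\]
```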

  14. Tritium β decay in chiral effective field theory

    DOE PAGES

    Baroni, A.; Girlanda, L.; Kievsky, A.; ...

    2016-08-18

    We evaluate the Fermi and Gamow-Teller (GT) matrix elements in tritium β-decay by including in the charge-changing weak current the corrections up to one loop recently derived in nuclear chiral effective field theory (χEFT). The trinucleon wave functions are obtained from hyperspherical-harmonics solutions of the Schroedinger equation with two- and three-nucleon potentials corresponding to either χEFT (the N3LO/N2LO combination) or meson-exchange phenomenology (the AV18/UIX combination). We find that contributions due to loop corrections in the axial current are, in relative terms, as large as (and in some cases dominate) those from one-pion exchange, which nominally occur at lower order in the power counting. We also provide values for the low-energy constants multiplying the contact axial current and three-nucleon potential, required to reproduce the experimental GT matrix element and trinucleon binding energies in the N3LO/N2LO and AV18/UIX calculations.

  15. Photoassociation dynamics driven by a modulated two-color laser field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Wei; Zhao Zeyu; Xie Ting

    2011-11-15

    Photoassociation (PA) dynamics of ultracold cesium atoms steered by a modulated two-color laser field E(t) = E_0 f(t) cos((2π/T_p)t - φ) cos(ω_L t) is investigated theoretically by numerically solving the time-dependent Schroedinger equation. The PA dynamics is sensitive to the phase of the envelope (POE) φ and the period of the envelope T_p, which indicates that it can be controlled by varying φ and T_p. Moreover, we introduce the time- and frequency-resolved spectrum to illustrate how φ and T_p influence the intensity distribution of the modulated laser pulse and hence change the time-dependent population distribution of photoassociated molecules. When the Gaussian envelope contains a few oscillations, the PA efficiency is also dependent on the POE φ. The modulated two-color laser field is available in current experiments based on laser mode-locking technology.
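
    The quoted field is straightforward to evaluate numerically. The sketch below assumes a Gaussian envelope f(t), and the factor of t inside the envelope cosine is inferred from the stated envelope period T_p (the source snippet garbled that argument); amplitudes, widths, and frequencies are illustrative, not the paper's values:

```python
import numpy as np

# Evaluate the modulated two-color field quoted above,
# E(t) = E0 f(t) cos(2*pi*t/Tp - phi) cos(wL*t), with a Gaussian envelope.
# All numbers are illustrative; units are arbitrary.

E0, Tp, phi, wL = 1.0, 10.0, 0.5 * np.pi, 5.0
tau = 4.0                                    # envelope width (assumed Gaussian)

t = np.linspace(-15.0, 15.0, 4001)
f = np.exp(-(t / tau) ** 2)                  # pulse envelope f(t)
E = E0 * f * np.cos(2 * np.pi * t / Tp - phi) * np.cos(wL * t)

print(E.max())                               # peak field on this grid
```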

  16. Solution of two-body relativistic bound state equations with confining plus Coulomb interactions

    NASA Technical Reports Server (NTRS)

    Maung, Khin Maung; Kahana, David E.; Norbury, John W.

    1992-01-01

    Studies of meson spectroscopy have often employed a nonrelativistic Coulomb plus Linear Confining potential in position space. However, because the quarks in mesons move at an appreciable fraction of the speed of light, it is necessary to use a relativistic treatment of the bound state problem. Such a treatment is most easily carried out in momentum space. However, the position space Linear and Coulomb potentials lead to singular kernels in momentum space. Using a subtraction procedure we show how to remove these singularities exactly and thereby solve the Schroedinger equation in momentum space for all partial waves. Furthermore, we generalize the Linear and Coulomb potentials to relativistic kernels in four dimensional momentum space. Again we use a subtraction procedure to remove the relativistic singularities exactly for all partial waves. This enables us to solve three dimensional reductions of the Bethe-Salpeter equation. We solve six such equations for Coulomb plus Confining interactions for all partial waves.

  17. Dominant partition method. [based on a wave function formalism

    NASA Technical Reports Server (NTRS)

    Dixon, R. M.; Redish, E. F.

    1979-01-01

    By use of the L'Huillier, Redish, and Tandy (LRT) wave function formalism, a partially connected method, the dominant partition method (DPM), is developed for obtaining few-body reductions of the many-body problem in the LRT and Bencze, Redish, and Sloan (BRS) formalisms. The DPM maps the many-body problem to a fewer-body one by using the criterion that the truncated formalism must be such that consistency with the full Schroedinger equation is preserved. The DPM is based on a class of new forms for the irreducible cluster potential, which is introduced in the LRT formalism. Connectivity is maintained with respect to all partitions containing a given partition, which is referred to as the dominant partition. Degrees of freedom corresponding to the breakup of one or more of the clusters of the dominant partition are treated in a disconnected manner. This approach for simplifying the complicated BRS equations is appropriate for physical problems where a few-body reaction mechanism prevails.

  18. Delay time in a single barrier for a movable quantum shutter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez, Alberto

    2010-05-15

    The transient solution and delay time for a δ-potential scatterer with a movable quantum shutter are calculated by solving analytically the time-dependent Schroedinger equation. The delay time is analyzed as a function of the distance between the shutter and the potential barrier and also as a function of the distance between the potential barrier and the detector. In both cases, it is found that the delay time exhibits a dynamical behavior and that it tends to a saturation value Δt_sat in the limit of very short distances, which represents the maximum delay produced by the potential barrier near the interaction region. The phase time τ_θ, on the other hand, is not an appropriate time scale for measuring the time delay near the interaction region, except if the shutter is moved far away from the potential. The role played by the antibound state of the system in the behavior of the delay time is also discussed.

  19. Strategies for Reduced-Order Models in Uncertainty Quantification of Complex Turbulent Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Qi, Di

    Turbulent dynamical systems are ubiquitous in science and engineering. Uncertainty quantification (UQ) in turbulent dynamical systems is a grand challenge where the goal is to obtain statistical estimates for key physical quantities. In the development of a proper UQ scheme for systems characterized by both a high-dimensional phase space and a large number of instabilities, significant model errors compared with the true natural signal are always unavoidable due to both the imperfect understanding of the underlying physical processes and the limited computational resources available. One central issue in contemporary research is the development of a systematic methodology for reduced order models that can recover the crucial features both with model fidelity in statistical equilibrium and with model sensitivity in response to perturbations. In the first part, we discuss a general mathematical framework to construct statistically accurate reduced-order models that have skill in capturing the statistical variability in the principal directions of a general class of complex systems with quadratic nonlinearity. A systematic hierarchy of simple statistical closure schemes, which are built through new global statistical energy conservation principles combined with statistical equilibrium fidelity, are designed and tested for UQ of these problems. Second, the capacity of imperfect low-order stochastic approximations to model extreme events in a passive scalar field advected by turbulent flows is investigated. The effects in complicated flow systems are considered including strong nonlinear and non-Gaussian interactions, and much simpler and cheaper imperfect models with model error are constructed to capture the crucial statistical features in the stationary tracer field. Several mathematical ideas are introduced to improve the prediction skill of the imperfect reduced-order models. Most importantly, empirical information theory and statistical linear response theory are applied in the training phase for calibrating model errors to achieve optimal imperfect model parameters; and total statistical energy dynamics are introduced to improve the model sensitivity in the prediction phase especially when strong external perturbations are exerted. The validity of reduced-order models for predicting statistical responses and intermittency is demonstrated on a series of instructive models with increasing complexity, including the stochastic triad model, the Lorenz '96 model, and models for barotropic and baroclinic turbulence. The skillful low-order modeling methods developed here should also be useful for other applications such as efficient algorithms for data assimilation.

  20. Helping Students Develop Statistical Reasoning: Implementing a Statistical Reasoning Learning Environment

    ERIC Educational Resources Information Center

    Garfield, Joan; Ben-Zvi, Dani

    2009-01-01

    This article describes a model for an interactive, introductory secondary- or tertiary-level statistics course that is designed to develop students' statistical reasoning. This model is called a "Statistical Reasoning Learning Environment" and is built on the constructivist theory of learning.

  1. Statistical field theory of futures commodity prices

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Yu, Miao

    2018-02-01

    The statistical theory of commodity prices was formulated by Baaquie (2013). Further empirical studies of single (Baaquie et al., 2015) and multiple commodity prices (Baaquie et al., 2016) have provided strong evidence in support of the primary assumptions of the statistical formulation. In this paper, the model for spot prices (Baaquie, 2013) is extended to model futures commodity prices using a statistical field theory of futures commodity prices. The futures prices are modeled as a two-dimensional statistical field and a nonlinear Lagrangian is postulated. Empirical studies provide clear evidence in support of the model, with many nontrivial features of the model finding unexpected support from market data.

  2. Population activity statistics dissect subthreshold and spiking variability in V1.

    PubMed

    Bányai, Mihály; Koman, Zsombor; Orbán, Gergő

    2017-07-01

    Response variability, as measured by fluctuating responses upon repeated performance of trials, is a major component of neural responses, and its characterization is key to interpret high dimensional population recordings. Response variability and covariability display predictable changes upon changes in stimulus and cognitive or behavioral state, providing an opportunity to test the predictive power of models of neural variability. Still, there is little agreement on which model to use as a building block for population-level analyses, and models of variability are often treated as a subject of choice. We investigate two competing models, the doubly stochastic Poisson (DSP) model assuming stochasticity at spike generation, and the rectified Gaussian (RG) model tracing variability back to membrane potential variance, to analyze stimulus-dependent modulation of both single-neuron and pairwise response statistics. Using a pair of model neurons, we demonstrate that the two models predict similar single-cell statistics. However, DSP and RG models have contradicting predictions on the joint statistics of spiking responses. To test the models against data, we build a population model to simulate stimulus change-related modulations in pairwise response statistics. We use single-unit data from the primary visual cortex (V1) of monkeys to show that while model predictions for variance are qualitatively similar to experimental data, only the RG model's predictions are compatible with joint statistics. These results suggest that models using Poisson-like variability might fail to capture important properties of response statistics. We argue that membrane potential-level modeling of stochasticity provides an efficient strategy to model correlations. NEW & NOTEWORTHY Neural variability and covariability are puzzling aspects of cortical computations. For efficient decoding and prediction, models of information encoding in neural populations hinge on an appropriate model of variability. Our work shows that stimulus-dependent changes in pairwise but not in single-cell statistics can differentiate between two widely used models of neuronal variability. Contrasting model predictions with neuronal data provides hints on the noise sources in spiking and provides constraints on statistical models of population activity. Copyright © 2017 the American Physiological Society.
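
    A toy simulation makes the contrast concrete: with a shared slow fluctuation, spike-generation noise (DSP) and membrane-level noise passed through a rectifier (RG) can produce similar single-cell variance yet different pairwise statistics. Everything below, including the parameter values and the log-link rate, is an illustrative sketch, not the paper's fitted models:

```python
import numpy as np

# Contrast two variability models for a pair of neurons sharing input:
# DSP = doubly stochastic Poisson (noise at spike generation),
# RG  = rectified Gaussian (noise at the membrane-potential level).

rng = np.random.default_rng(1)
n_trials = 20000
shared = rng.normal(0.0, 1.0, n_trials)        # shared "membrane" fluctuation

# RG: rectify correlated Gaussian membrane potentials into responses
u1 = 2.0 + shared + rng.normal(0.0, 0.5, n_trials)
u2 = 2.0 + shared + rng.normal(0.0, 0.5, n_trials)
rg1, rg2 = np.maximum(u1, 0.0), np.maximum(u2, 0.0)

# DSP: Poisson spike counts with a stochastic (shared) rate
rate = np.exp(0.5 + 0.3 * shared)
dsp1 = rng.poisson(rate)
dsp2 = rng.poisson(rate)

print("RG  corr:", np.corrcoef(rg1, rg2)[0, 1])   # membrane-level noise model
print("DSP corr:", np.corrcoef(dsp1, dsp2)[0, 1]) # spiking-noise model
```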

  3. Selecting Summary Statistics in Approximate Bayesian Computation for Calibrating Stochastic Models

    PubMed Central

    Burr, Tom

    2013-01-01

    Approximate Bayesian computation (ABC) is an approach for using measurement data to calibrate stochastic computer models, which are common in biology applications. ABC is becoming the “go-to” option when the data and/or parameter dimension is large because it relies on user-chosen summary statistics rather than the full data and is therefore computationally feasible. One technical challenge with ABC is that the quality of the approximation to the posterior distribution of model parameters depends on the user-chosen summary statistics. In this paper, the user requirement to choose effective summary statistics in order to accurately estimate the posterior distribution of model parameters is investigated and illustrated by example, using a model and corresponding real data of mitochondrial DNA population dynamics. We show that for some choices of summary statistics, the posterior distribution of model parameters is closely approximated and for other choices of summary statistics, the posterior distribution is not closely approximated. A strategy to choose effective summary statistics is suggested in cases where the stochastic computer model can be run at many trial parameter settings, as in the example. PMID:24288668
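
    A minimal ABC rejection loop shows where the user-chosen summary statistics enter; the toy Gaussian simulator, uniform prior, summaries, and tolerance below are all stand-ins for the paper's mitochondrial DNA model:

```python
import numpy as np

# ABC rejection sketch: accept parameter draws whose simulated summary
# statistics fall within a tolerance of the observed summaries.

rng = np.random.default_rng(42)

def simulator(theta, n=200):
    return rng.normal(theta, 1.0, n)           # toy stochastic model

def summaries(x):
    return np.array([np.mean(x), np.std(x)])   # user-chosen summary statistics

observed = summaries(simulator(3.0))           # "observed" data summaries

accepted = []
for _ in range(20000):
    theta = rng.uniform(0.0, 10.0)             # draw from the prior
    dist = np.linalg.norm(summaries(simulator(theta)) - observed)
    if dist < 0.2:                             # tolerance epsilon
        accepted.append(theta)

print(len(accepted), np.mean(accepted))        # approximate posterior sample
```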

  5. A nonparametric spatial scan statistic for continuous data.

    PubMed

    Jung, Inkyung; Cho, Ho Jin

    2015-10-20

    Spatial scan statistics are widely used for spatial cluster detection, and several parametric models exist. For continuous data, a normal-based scan statistic can be used. However, the performance of the model has not been fully evaluated for non-normal data. We propose a nonparametric spatial scan statistic based on the Wilcoxon rank-sum test statistic and compare the performance of the method with parametric models via a simulation study under various scenarios. The nonparametric method outperforms the normal-based scan statistic in terms of power and accuracy in almost all cases under consideration in the simulation study. The proposed nonparametric spatial scan statistic is therefore an excellent alternative to the normal model for continuous data and is especially useful for data following skewed or heavy-tailed distributions.
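
    In the spirit of the method described above, a rank-based scan can be sketched by sliding circular windows over the study region and scoring each window with the Wilcoxon rank-sum statistic for values inside versus outside; the most extreme window is the candidate cluster. The synthetic data, the window radii, and the omission of the Monte Carlo significance step are simplifying assumptions:

```python
import numpy as np
from scipy.stats import ranksums

# Rank-based spatial scan sketch: score circular windows by the Wilcoxon
# rank-sum statistic (inside vs outside) and keep the most extreme window.
# Significance would normally come from permutation testing, omitted here.

rng = np.random.default_rng(7)
xy = rng.uniform(0.0, 1.0, (300, 2))             # site locations
vals = rng.lognormal(0.0, 1.0, 300)              # skewed continuous outcomes
vals[np.linalg.norm(xy - 0.3, axis=1) < 0.15] += 2.0   # planted cluster

best = (0.0, None)
for i in range(len(xy)):                         # candidate window centres
    d = np.linalg.norm(xy - xy[i], axis=1)
    for r in (0.05, 0.1, 0.15, 0.2):             # candidate radii
        inside = d < r
        if 2 <= inside.sum() <= len(xy) // 2:
            stat, _ = ranksums(vals[inside], vals[~inside])
            if abs(stat) > best[0]:
                best = (abs(stat), (i, r))

print("most likely cluster:", best)
```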

  6. An Analysis of the Navy’s Voluntary Education Program

    DTIC Science & Technology

    2007-03-01

    [Only table-of-contents fragments survive in this record: Naval Analysis VOLED Study; Data; Statistical Models; Employer Financed General Training; Findings.]

  7. Variability-aware compact modeling and statistical circuit validation on SRAM test array

    NASA Astrophysics Data System (ADS)

    Qiao, Ying; Spanos, Costas J.

    2016-03-01

    Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose a variability-aware compact model characterization methodology based on stepwise parameter selection. Transistor I-V measurements are obtained from a bit-transistor-accessible SRAM test array fabricated using a collaborating foundry's 28nm FDSOI technology. Our in-house customized Monte Carlo simulation bench can incorporate these statistical compact models, and the simulated distributions of SRAM writability performance closely match measurements. Our proposed statistical compact model parameter extraction methodology also has the potential of predicting non-Gaussian behavior in statistical circuit performances through mixtures of Gaussian distributions.

  8. Towards Accurate Modelling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron K.; Scoccimarro, Roman; Piscionere, Jennifer A.; Wibking, Benjamin D.

    2018-04-01

    Interpreting the small-scale clustering of galaxies with halo models can elucidate the connection between galaxies and dark matter halos. Unfortunately, the modelling is typically not sufficiently accurate for ruling out models statistically. It is thus difficult to use the information encoded in small scales to test cosmological models or probe subtle features of the galaxy-halo connection. In this paper, we attempt to push halo modelling into the "accurate" regime with a fully numerical mock-based methodology and careful treatment of statistical and systematic errors. With our forward-modelling approach, we can incorporate clustering statistics beyond the traditional two-point statistics. We use this modelling methodology to test the standard ΛCDM + halo model against the clustering of SDSS DR7 galaxies. Specifically, we use the projected correlation function, group multiplicity function and galaxy number density as constraints. We find that while the model fits each statistic separately, it struggles to fit them simultaneously. Adding group statistics leads to a more stringent test of the model and significantly tighter constraints on model parameters. We explore the impact of varying the adopted halo definition and cosmological model and find that changing the cosmology makes a significant difference. The most successful model we tried (Planck cosmology with Mvir halos) matches the clustering of low luminosity galaxies, but exhibits a 2.3σ tension with the clustering of luminous galaxies, thus providing evidence that the "standard" halo model needs to be extended. This work opens the door to adding interesting freedom to the halo model and including additional clustering statistics as constraints.

  9. Statistically Modeling Individual Students' Learning over Successive Collaborative Practice Opportunities

    ERIC Educational Resources Information Center

    Olsen, Jennifer; Aleven, Vincent; Rummel, Nikol

    2017-01-01

    Within educational data mining, many statistical models capture the learning of students working individually. However, not much work has been done to extend these statistical models of individual learning to a collaborative setting, despite the effectiveness of collaborative learning activities. We extend a widely used model (the additive factors…

  10. Differences in Performance Among Test Statistics for Assessing Phylogenomic Model Adequacy.

    PubMed

    Duchêne, David A; Duchêne, Sebastian; Ho, Simon Y W

    2018-05-18

    Statistical phylogenetic analyses of genomic data depend on models of nucleotide or amino acid substitution. The adequacy of these substitution models can be assessed using a number of test statistics, allowing the model to be rejected when it is found to provide a poor description of the evolutionary process. A potentially valuable use of model-adequacy test statistics is to identify when data sets are likely to produce unreliable phylogenetic estimates, but their differences in performance are rarely explored. We performed a comprehensive simulation study to identify test statistics that are sensitive to some of the most commonly cited sources of phylogenetic estimation error. Our results show that, for many test statistics, traditional thresholds for assessing model adequacy can fail to reject the model when the phylogenetic inferences are inaccurate and imprecise. This is particularly problematic when analysing loci that have few variable informative sites. We propose new thresholds for assessing substitution model adequacy and demonstrate their effectiveness in analyses of three phylogenomic data sets. These thresholds lead to frequent rejection of the model for loci that yield topological inferences that are imprecise and are likely to be inaccurate. We also propose the use of a summary statistic that provides a practical assessment of overall model adequacy. Our approach offers a promising means of enhancing model choice in genome-scale data sets, potentially leading to improvements in the reliability of phylogenomic inference.

  11. Statistical Models of At-Grade Intersection Accidents. Addendum.

    DOT National Transportation Integrated Search

    2000-03-01

    This report is an addendum to the work published in FHWA-RD-96-125 titled Statistical Models of At-Grade Intersection Accidents. The objective of both research studies was to develop statistical models of the relationship between traffic accide...

  12. A Unified Statistical Rain-Attenuation Model for Communication Link Fade Predictions and Optimal Stochastic Fade Control Design Using a Location-Dependent Rain-Statistic Database

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    1990-01-01

    A static and dynamic rain-attenuation model is presented which describes the statistics of attenuation on an arbitrarily specified satellite link for any location for which there are long-term rainfall statistics. The model may be used in the design of optimal stochastic control algorithms to mitigate the effects of attenuation and maintain link reliability. A rain-statistics database is compiled, which makes it possible to apply the model to any location in the continental U.S. with a resolution of 0.5 degrees in latitude and longitude. The model predictions are compared with experimental observations, showing good agreement.

  13. Exponential order statistic models of software reliability growth

    NASA Technical Reports Server (NTRS)

    Miller, D. R.

    1985-01-01

    Failure times of a software reliability growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinski-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of Exponential Order Statistic Models, but there are many additional examples as well. Various characterizations, properties, and examples of this class of models are developed and presented.
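
    The order-statistic view makes simulating such a process a two-liner: draw one exponential detection time per latent fault and sort them. With identically distributed rates this reproduces the Jelinski-Moranda special case (nonidentical rates give the other members of the class); the parameter values below are illustrative:

```python
import numpy as np

# Simulate a software reliability growth process as order statistics of
# i.i.d. exponential fault-detection times (Jelinski-Moranda special case).

rng = np.random.default_rng(3)
n_faults, phi = 50, 0.1                   # latent faults, per-fault hazard

t = rng.exponential(1.0 / phi, n_faults)  # each fault's detection time
failure_times = np.sort(t)                # observed failure process

gaps = np.diff(np.concatenate(([0.0], failure_times)))
print(gaps[:5])   # early inter-failure gaps are short; they lengthen as
                  # fewer undetected faults remain (reliability growth)
```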

  14. Evidence for a Global Sampling Process in Extraction of Summary Statistics of Item Sizes in a Set.

    PubMed

    Tokita, Midori; Ueda, Sachiyo; Ishiguchi, Akira

    2016-01-01

    Several studies have shown that our visual system may construct a "summary statistical representation" over groups of visual objects. Although there is a general understanding that human observers can accurately represent sets of a variety of features, many questions on how summary statistics, such as an average, are computed remain unanswered. This study investigated the sampling properties of visual information used by human observers to extract two types of summary statistics of item sets: average and variance. We present three ideal-observer models for extracting the summary statistics: a global sampling model without sampling noise, a global sampling model with sampling noise, and a limited sampling model. We compared the performance of an ideal observer of each model with that of human observers using statistical efficiency analysis. Results suggest that summary statistics of items in a set may be computed without representing individual items, which makes it possible to discard the limited sampling account. Moreover, the extraction of summary statistics may not necessarily require the representation of individual objects with focused attention when item sets contain more than four items.
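
    A toy sketch of the three-way comparison (set sizes and noise levels are assumed numbers, not the study's stimuli): estimate a set's mean item size from all items, from all items with per-item representation noise, or from a small subsample.

        import numpy as np

        rng = np.random.default_rng(10)

        # 10,000 simulated sets of 8 item sizes each (assumed units).
        sets = rng.normal(1.0, 0.2, (10_000, 8))
        true_means = sets.mean(axis=1)

        # Global sampling, no noise: the exact mean of all items.
        global_clean = true_means

        # Global sampling with per-item representation noise.
        global_noisy = (sets + rng.normal(0.0, 0.1, sets.shape)).mean(axis=1)

        # Limited sampling: average of a random subset of 2 items.
        idx = rng.integers(0, 8, (10_000, 2))
        limited = np.take_along_axis(sets, idx, axis=1).mean(axis=1)

        for name, est in [("global", global_clean),
                          ("global+noise", global_noisy),
                          ("limited", limited)]:
            # Error SD of each observer's estimate of the set mean.
            print(name, np.std(est - true_means).round(4))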

  15. Statistical Power of Alternative Structural Models for Comparative Effectiveness Research: Advantages of Modeling Unreliability.

    PubMed

    Coman, Emil N; Iordache, Eugen; Dierker, Lisa; Fifield, Judith; Schensul, Jean J; Suggs, Suzanne; Barbour, Russell

    2014-05-01

    The advantages of modeling the unreliability of outcomes when evaluating the comparative effectiveness of health interventions are illustrated. Adding an action-research intervention component to a regular summer job program for youth was expected to help in preventing risk behaviors. A series of simple two-group alternative structural equation models are compared to test the effect of the intervention on one key attitudinal outcome in terms of model fit and statistical power with Monte Carlo simulations. Some models presuming parameters equal across the intervention and comparison groups were underpowered to detect the intervention effect, yet modeling the unreliability of the outcome measure increased their statistical power and helped in the detection of the hypothesized effect. Comparative Effectiveness Research (CER) could benefit from flexible multi-group alternative structural models organized in decision trees, and modeling the unreliability of measures can be of tremendous help for both the fit of statistical models to the data and their statistical power.

  16. Bayesian models: A statistical primer for ecologists

    USGS Publications Warehouse

    Hobbs, N. Thompson; Hooten, Mevin B.

    2015-01-01

    Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods, in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probability and develops a step-by-step sequence of connected ideas, including basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and inference from single and multiple models. This unique book places less emphasis on computer coding, favoring instead a concise presentation of the mathematical statistics needed to understand how and why Bayesian analysis works. It also explains how to write out properly formulated hierarchical Bayesian models and use them in computing, research papers, and proposals. This primer enables ecologists to understand the statistical principles behind Bayesian modeling and apply them to research, teaching, policy, and management. The book presents the mathematical and statistical foundations of Bayesian modeling in language accessible to non-statisticians; covers basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and more; deemphasizes computer coding in favor of basic principles; and explains how to write out properly factored statistical expressions representing Bayesian models.

  17. Towards accurate modelling of galaxy clustering on small scales: testing the standard ΛCDM + halo model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron K.; Scoccimarro, Roman; Piscionere, Jennifer A.; Wibking, Benjamin D.

    2018-07-01

    Interpreting the small-scale clustering of galaxies with halo models can elucidate the connection between galaxies and dark matter haloes. Unfortunately, the modelling is typically not sufficiently accurate for ruling out models statistically. It is thus difficult to use the information encoded in small scales to test cosmological models or probe subtle features of the galaxy-halo connection. In this paper, we attempt to push halo modelling into the "accurate" regime with a fully numerical mock-based methodology and careful treatment of statistical and systematic errors. With our forward-modelling approach, we can incorporate clustering statistics beyond the traditional two-point statistics. We use this modelling methodology to test the standard Λ cold dark matter (ΛCDM) + halo model against the clustering of Sloan Digital Sky Survey (SDSS) seventh data release (DR7) galaxies. Specifically, we use the projected correlation function, group multiplicity function, and galaxy number density as constraints. We find that while the model fits each statistic separately, it struggles to fit them simultaneously. Adding group statistics leads to a more stringent test of the model and significantly tighter constraints on model parameters. We explore the impact of varying the adopted halo definition and cosmological model and find that changing the cosmology makes a significant difference. The most successful model we tried (Planck cosmology with Mvir haloes) matches the clustering of low-luminosity galaxies, but exhibits a 2.3σ tension with the clustering of luminous galaxies, thus providing evidence that the "standard" halo model needs to be extended. This work opens the door to adding interesting freedom to the halo model and including additional clustering statistics as constraints.

  18. Statistics of the geomagnetic secular variation for the past 5 Ma

    NASA Technical Reports Server (NTRS)

    Constable, C. G.; Parker, R. L.

    1986-01-01

    A new statistical model is proposed for the geomagnetic secular variation over the past 5 Ma. Unlike previous models, the model makes use of statistical characteristics of the present-day geomagnetic field. The spatial power spectrum of the non-dipole field is consistent with a white source near the core-mantle boundary with a Gaussian distribution. After a suitable scaling, the spherical harmonic coefficients may be regarded as statistical samples from a single giant Gaussian process; this is the model of the non-dipole field. The model can be combined with an arbitrary statistical description of the dipole, and probability density functions and cumulative distribution functions can be computed for the declination and inclination that would be observed at any site on Earth's surface. Global paleomagnetic data spanning the past 5 Ma are used to constrain the statistics of the dipole part of the field. A simple model is found to be consistent with the available data. An advantage of specifying the model in terms of the spherical harmonic coefficients is that it is a complete statistical description of the geomagnetic field, enabling us to test specific properties for a general description. Both intensity and directional data distributions may be tested to see if they satisfy the expected model distributions.
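
    A minimal sketch of the sampling idea (the means and standard deviations below are assumed stand-ins, not the paper's fitted spectrum): treat the local field components as a mean dipole contribution plus zero-mean Gaussian non-dipole scatter, then convert each sample to declination and inclination, whose empirical distributions can be compared with paleomagnetic observations.

        import numpy as np

        rng = np.random.default_rng(1)

        # Assumed mean dipole field components at a site (X north, Y east,
        # Z down, in microtesla) plus Gaussian non-dipole scatter.
        n = 10_000
        X = 25.0 + rng.normal(0.0, 3.0, n)
        Y = 0.0 + rng.normal(0.0, 3.0, n)
        Z = 40.0 + rng.normal(0.0, 5.0, n)

        declination = np.degrees(np.arctan2(Y, X))
        inclination = np.degrees(np.arctan2(Z, np.hypot(X, Y)))

        # These empirical distributions would be tested against observed
        # paleomagnetic directional data.
        print(np.percentile(declination, [2.5, 50, 97.5]).round(1))
        print(np.percentile(inclination, [2.5, 50, 97.5]).round(1))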

  19. Statistics of the geomagnetic secular variation for the past 5 m.y

    NASA Technical Reports Server (NTRS)

    Constable, C. G.; Parker, R. L.

    1988-01-01

    A new statistical model is proposed for the geomagnetic secular variation over the past 5Ma. Unlike previous models, the model makes use of statistical characteristics of the present day geomagnetic field. The spatial power spectrum of the non-dipole field is consistent with a white source near the core-mantle boundary with Gaussian distribution. After a suitable scaling, the spherical harmonic coefficients may be regarded as statistical samples from a single giant Gaussian process; this is the model of the non-dipole field. The model can be combined with an arbitrary statistical description of the dipole and probability density functions and cumulative distribution functions can be computed for declination and inclination that would be observed at any site on Earth's surface. Global paleomagnetic data spanning the past 5Ma are used to constrain the statistics of the dipole part of the field. A simple model is found to be consistent with the available data. An advantage of specifying the model in terms of the spherical harmonic coefficients is that it is a complete statistical description of the geomagnetic field, enabling us to test specific properties for a general description. Both intensity and directional data distributions may be tested to see if they satisfy the expected model distributions.

  20. Testing prediction methods: Earthquake clustering versus the Poisson model

    USGS Publications Warehouse

    Michael, A.J.

    1997-01-01

    Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. Published in 1997 by the American Geophysical Union.
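
    A minimal sketch of the significance test (the rate, alarm window, and observed hit count are assumed numbers; a clustered simulator would replace simulate_catalog to reproduce the paper's stricter comparison):

        import numpy as np

        rng = np.random.default_rng(2)

        T = 1000.0               # days of observation (assumed)
        alarm = (200.0, 400.0)   # hypothetical alarm window issued in advance

        def simulate_catalog(rate=0.05):
            # Homogeneous Poisson catalog; substituting a clustered
            # simulator here tightens the test, as the abstract argues.
            n = rng.poisson(rate * T)
            return np.sort(rng.uniform(0.0, T, n))

        def hits(catalog):
            return np.sum((catalog >= alarm[0]) & (catalog < alarm[1]))

        observed_hits = 14       # assumed count of real events in the alarm
        sims = np.array([hits(simulate_catalog()) for _ in range(10_000)])
        p_value = np.mean(sims >= observed_hits)
        print(f"P(random success >= observed) = {p_value:.4f}")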

  1. A Statistical Test for Comparing Nonnested Covariance Structure Models.

    ERIC Educational Resources Information Center

    Levy, Roy; Hancock, Gregory R.

    While statistical procedures are well known for comparing hierarchically related (nested) covariance structure models, statistical tests for comparing nonhierarchically related (nonnested) models have proven more elusive. While isolated attempts have been made, none exists within the commonly used maximum likelihood estimation framework, thereby…

  2. A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Bell, Thomas L.

    2003-01-01

    A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics designed to capture this property predicts power-law scaling behavior for the second-moment statistics of area-averaged rain rate as the averaging length scale L → 0. In the present work a more efficient method of estimating the model parameters is presented, and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.
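
    A minimal sketch of how the scale dependence of second-moment statistics can be estimated from gridded data (the field below is an uncorrelated lognormal stand-in, so it exhibits trivial 1/L² scaling rather than the spectral model's power law):

        import numpy as np

        rng = np.random.default_rng(11)

        # Stand-in "radar" rain field on a 256 x 256 grid.
        field = rng.lognormal(0.0, 1.0, (256, 256))

        def variance_at_scale(f, L):
            # Average over non-overlapping L x L boxes and return the
            # variance of the box means.
            n = f.shape[0] // L
            boxes = f[:n * L, :n * L].reshape(n, L, n, L).mean(axis=(1, 3))
            return boxes.var()

        for L in (1, 2, 4, 8, 16, 32):
            print(L, variance_at_scale(field, L).round(4))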

  3. Modeling Statistics of Fish Patchiness and Predicting Associated Influence on Statistics of Acoustic Echoes

    DTIC Science & Technology

    2012-09-30

    ...data collected by Paramo and Gerlotto. The data were consistent with the Anderson model in that both the data and the model had a mode in the... (doi: 10.1098/rsfs.2012.0027) [published, refereed]. Bhatia, S., T.K. Stanton, J. Paramo, and F. Gerlotto (submitted), "Modeling statistics of fish school...

  4. Modified Likelihood-Based Item Fit Statistics for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.

    2008-01-01

    Orlando and Thissen (2000) developed an item fit statistic for binary item response theory (IRT) models known as S-X². This article generalizes their statistic to polytomous unfolding models. Four alternative formulations of S-X² are developed for the generalized graded unfolding model (GGUM). The GGUM is a…

  5. Rainfall Downscaling Conditional on Upper-air Atmospheric Predictors: Improved Assessment of Rainfall Statistics in a Changing Climate

    NASA Astrophysics Data System (ADS)

    Langousis, Andreas; Mamalakis, Antonis; Deidda, Roberto; Marrocu, Marino

    2015-04-01

    To improve the skill level of Global Climate Models (GCMs) and Regional Climate Models (RCMs) in reproducing the statistics of rainfall at a basin level and at hydrologically relevant temporal scales (e.g. daily), two types of statistical approaches have been suggested. One is the statistical correction of climate model rainfall outputs using historical series of precipitation. The other is the use of stochastic models of rainfall to conditionally simulate precipitation series, based on large-scale atmospheric predictors produced by climate models (e.g. geopotential height, relative vorticity, divergence, mean sea level pressure). The latter approach, usually referred to as statistical rainfall downscaling, aims at reproducing the statistical character of rainfall, while accounting for the effects of large-scale atmospheric circulation (and, therefore, climate forcing) on rainfall statistics. While promising, statistical rainfall downscaling has not attracted much attention in recent years, since the suggested approaches involved complex (i.e. subjective or computationally intense) identification procedures of the local weather, in addition to demonstrating limited success in reproducing several statistical features of rainfall, such as seasonal variations, the distributions of dry and wet spell lengths, the distribution of the mean rainfall intensity inside wet periods, and the distribution of rainfall extremes. In an effort to remedy those shortcomings, Langousis and Kaleris (2014) developed a statistical framework for simulation of daily rainfall intensities conditional on upper-air variables, which accurately reproduces the statistical character of rainfall at multiple time-scales. Here, we study the relative performance of: a) quantile-quantile (Q-Q) correction of climate model rainfall products, and b) the statistical downscaling scheme of Langousis and Kaleris (2014), in reproducing the statistical structure of rainfall, as well as rainfall extremes, at a regional level. This is done for an intermediate-sized catchment in Italy, i.e. the Flumendosa catchment, using climate model rainfall and atmospheric data from the ENSEMBLES project (http://ensembleseu.metoffice.com). In doing so, we split the historical rainfall record of mean areal precipitation (MAP) into 15-year calibration and 45-year validation periods, and compare the historical rainfall statistics to those obtained from: a) Q-Q corrected climate model rainfall products (see the Q-Q correction sketch below), and b) synthetic rainfall series generated by the suggested downscaling scheme. To our knowledge, this is the first time that climate model rainfall and statistically downscaled precipitation are compared to catchment-averaged MAP at a daily resolution. The obtained results are promising, since the proposed downscaling scheme is more accurate and robust in reproducing a number of historical rainfall statistics, independent of the climate model used and the length of the calibration period. This is particularly the case for the yearly rainfall maxima, where direct statistical correction of climate model rainfall outputs shows increased sensitivity to the length of the calibration period and the climate model used. The robustness of the suggested downscaling scheme in modeling rainfall extremes at a daily resolution is a notable feature that can effectively be used to assess hydrologic risk at a regional level under changing climatic conditions.
    Acknowledgments: The research project is implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General Secretariat for Research and Technology), and is co-financed by the European Social Fund (ESF) and the Greek State. CRS4 highly acknowledges the contribution of the Sardinian regional authorities.
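
    A minimal sketch of the Q-Q correction step, assuming daily rain depths and a simple empirical quantile-mapping transfer function (values outside the calibration range are clamped in this simple version):

        import numpy as np

        def qq_correct(model_calib, obs_calib, model_future):
            # Empirical transfer function: map each simulated value to the
            # observed value at the same quantile of the calibration period.
            quantiles = np.linspace(0.0, 1.0, 101)
            mod_q = np.quantile(model_calib, quantiles)
            obs_q = np.quantile(obs_calib, quantiles)
            # np.interp clamps values outside the calibration range.
            return np.interp(model_future, mod_q, obs_q)

        rng = np.random.default_rng(3)
        obs = rng.gamma(0.6, 8.0, 5000)     # assumed observed daily MAP, mm
        mod = rng.gamma(0.5, 12.0, 5000)    # assumed (biased) model rainfall
        corrected = qq_correct(mod, obs, mod)
        print(obs.mean(), mod.mean(), corrected.mean())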

  6. A New Statistic for Evaluating Item Response Theory Models for Ordinal Data. CRESST Report 839

    ERIC Educational Resources Information Center

    Cai, Li; Monroe, Scott

    2014-01-01

    We propose a new limited-information goodness-of-fit test statistic C₂ for ordinal IRT models. The construction of the new statistic lies formally between the M₂ statistic of Maydeu-Olivares and Joe (2006), which utilizes first- and second-order marginal probabilities, and the M₂* statistic of Cai and Hansen…

  7. Investigation of Statistical Inference Methodologies Through Scale Model Propagation Experiments

    DTIC Science & Technology

    2015-09-30

    ...statistical inference methodologies for ocean-acoustic problems by investigating and applying statistical methods to data collected from scale-model... to begin planning experiments for statistical inference applications. APPROACH: In the ocean acoustics community over the past two decades... solutions for waveguide parameters. With the introduction of statistical inference to the field of ocean acoustics came the desire to interpret marginal...

  8. Numerical and Qualitative Contrasts of Two Statistical Models for Water Quality Change in Tidal Waters

    EPA Science Inventory

    Two statistical approaches, weighted regression on time, discharge, and season and generalized additive models, have recently been used to evaluate water quality trends in estuaries. Both models have been used in similar contexts despite differences in statistical foundations and...

  9. “Plateau”-related summary statistics are uninformative for comparing working memory models

    PubMed Central

    van den Berg, Ronald; Ma, Wei Ji

    2014-01-01

    Performance on visual working memory tasks decreases as more items need to be remembered. Over the past decade, a debate has unfolded between proponents of slot models and slotless models of this phenomenon. Zhang and Luck (2008) and Anderson, Vogel, and Awh (2011) noticed that as more items need to be remembered, “memory noise” seems to first increase and then reach a “stable plateau.” They argued that three summary statistics characterizing this plateau are consistent with slot models, but not with slotless models. Here, we assess the validity of their methods. We generated synthetic data both from a leading slot model and from a recent slotless model and quantified model evidence using log Bayes factors. We found that the summary statistics provided, at most, 0.15% of the expected model evidence in the raw data. In a model recovery analysis, a total of more than a million trials were required to achieve 99% correct recovery when models were compared on the basis of summary statistics, whereas fewer than 1,000 trials were sufficient when raw data were used. At realistic numbers of trials, plateau-related summary statistics are completely unreliable for model comparison. Applying the same analyses to subject data from Anderson et al. (2011), we found that the evidence in the summary statistics was, at most, 0.12% of the evidence in the raw data and far too weak to warrant any conclusions. These findings call into question claims about working memory that are based on summary statistics. PMID:24719235

  10. Probabilistic Graphical Model Representation in Phylogenetics

    PubMed Central

    Höhna, Sebastian; Heath, Tracy A.; Boussau, Bastien; Landis, Michael J.; Ronquist, Fredrik; Huelsenbeck, John P.

    2014-01-01

    Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years. The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis–Hastings or Gibbs sampling of the posterior distribution. [Computation; graphical models; inference; modularization; statistical phylogenetics; tree plate.] PMID:24951559

  11. The Development of Statistical Models for Predicting Surgical Site Infections in Japan: Toward a Statistical Model-Based Standardized Infection Ratio.

    PubMed

    Fukuda, Haruhisa; Kuroki, Manabu

    2016-03-01

    Objective: To develop and internally validate a surgical site infection (SSI) prediction model for Japan. Design: Retrospective observational cohort study. We analyzed surveillance data submitted to the Japan Nosocomial Infections Surveillance system for patients who had undergone target surgical procedures from January 1, 2010, through December 31, 2012. Logistic regression analyses were used to develop statistical models for predicting SSIs. An SSI prediction model was constructed for each of the procedure categories by statistically selecting the appropriate risk factors from among the collected surveillance data and determining their optimal categorization. Standard bootstrapping techniques were applied to assess potential overfitting. The C-index was used to compare the predictive performances of the new statistical models with those of models based on conventional risk index variables. The study sample comprised 349,987 cases from 428 participating hospitals throughout Japan, and the overall SSI incidence was 7.0%. The C-indices of the new statistical models were significantly higher than those of the conventional risk index models in 21 (67.7%) of the 31 procedure categories (P<.05). No significant overfitting was detected. Japan-specific SSI prediction models were shown to generally have higher accuracy than conventional risk index models. These new models may have applications in assessing hospital performance and identifying high-risk patients in specific procedure categories.
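
    A minimal sketch of the modeling recipe (simulated stand-in data and hypothetical coefficients, not the surveillance variables): logistic regression with a bootstrap estimate of optimism in the C-index.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(5)

        # Simulated stand-ins for procedure-level risk factors.
        n = 2000
        X = np.column_stack([
            rng.normal(60, 15, n),     # age, years
            rng.normal(120, 40, n),    # operation duration, minutes
            rng.integers(0, 2, n),     # hypothetical wound-class indicator
        ])
        logit = -4.0 + 0.01 * X[:, 0] + 0.005 * X[:, 1] + 0.8 * X[:, 2]
        y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

        model = LogisticRegression(max_iter=1000).fit(X, y)
        apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])

        # Standard bootstrap estimate of optimism in the C-index.
        optimism = []
        for _ in range(200):
            idx = rng.integers(0, n, n)
            m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
            boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
            test = roc_auc_score(y, m.predict_proba(X)[:, 1])
            optimism.append(boot - test)

        print(f"apparent C-index {apparent:.3f}, "
              f"corrected {apparent - np.mean(optimism):.3f}")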

  12. A Hierarchical Multivariate Bayesian Approach to Ensemble Model output Statistics in Atmospheric Prediction

    DTIC Science & Technology

    2017-09-01

    ...this dissertation explores the efficacy of statistical post-processing methods downstream of these dynamical model components with a hierarchical multivariate Bayesian approach to... Subject terms: Bayesian hierarchical modeling, Markov chain Monte Carlo methods, Metropolis algorithm, machine learning, atmospheric prediction.

  13. Stochastic or statistic? Comparing flow duration curve models in ungauged basins and changing climates

    NASA Astrophysics Data System (ADS)

    Müller, M. F.; Thompson, S. E.

    2015-09-01

    The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by a strong wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are strongly favored over statistical models.

  14. Comparing statistical and process-based flow duration curve models in ungauged basins and changing rain regimes

    NASA Astrophysics Data System (ADS)

    Müller, M. F.; Thompson, S. E.

    2016-02-01

    The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by frequent wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are favored over statistical models.
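
    A minimal sketch of the evaluation step (simulated stand-in flow series): build flow duration curves and score the predicted FDC with the Nash-Sutcliffe efficiency.

        import numpy as np

        def flow_duration_curve(q, n_points=99):
            # Exceedance probabilities and the flows exceeded at each one.
            probs = np.linspace(0.01, 0.99, n_points)
            return probs, np.quantile(q, 1.0 - probs)

        def nash_sutcliffe(obs, sim):
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        rng = np.random.default_rng(6)
        q_obs = rng.lognormal(1.0, 0.9, 3650)            # 10 years of daily flow
        q_sim = q_obs * rng.lognormal(0.0, 0.1, 3650)    # imperfect prediction

        p, fdc_obs = flow_duration_curve(q_obs)
        _, fdc_sim = flow_duration_curve(q_sim)
        print(f"NSE of predicted FDC: {nash_sutcliffe(fdc_obs, fdc_sim):.3f}")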

  15. Manifold parametrization of the left ventricle for a statistical modelling of its complete anatomy

    NASA Astrophysics Data System (ADS)

    Gil, D.; Garcia-Barnes, J.; Hernández-Sabate, A.; Marti, E.

    2010-03-01

    Distortion of Left Ventricle (LV) external anatomy is related to some dysfunctions, such as hypertrophy. The architecture of myocardial fibers determines LV electromechanical activation patterns as well as mechanics. Thus, their joint modelling would allow the design of specific interventions (such as pacemaker implantation and LV remodelling) and therapies (such as resynchronization). On the one hand, accurate modelling of external anatomy requires either a dense sampling or a continuous infinite-dimensional approach, which requires non-Euclidean statistics. On the other hand, computation of fiber models requires statistics on Riemannian spaces. Most approaches compute separate statistical models for external anatomy and fiber architecture. In this work we propose a general mathematical framework based on differential geometry concepts for computing a statistical model including both external and fiber anatomy. Our framework provides a continuous approach to external anatomy supporting standard statistics. We also provide a straightforward formula for the computation of the Riemannian fiber statistics. We have applied our methodology to the computation of a complete anatomical atlas of canine hearts from diffusion tensor studies. The orientation of fibers over the average external geometry agrees with the segmental description of orientations reported in the literature.

  16. An Examination of Statistical Power in Multigroup Dynamic Structural Equation Models

    ERIC Educational Resources Information Center

    Prindle, John J.; McArdle, John J.

    2012-01-01

    This study used statistical simulation to calculate differential statistical power in dynamic structural equation models with groups (as in McArdle & Prindle, 2008). Patterns of between-group differences were simulated to provide insight into how model parameters influence power approximations. Chi-square and root mean square error of…

  17. TinkerPlots™ Model Construction Approaches for Comparing Two Groups: Student Perspectives

    ERIC Educational Resources Information Center

    Noll, Jennifer; Kirin, Dana

    2017-01-01

    Teaching introductory statistics using curricula focused on modeling and simulation is becoming increasingly common in introductory statistics courses and touted as a more beneficial approach for fostering students' statistical thinking. Yet, surprisingly little research has been conducted to study the impact of modeling and simulation curricula…

  18. Journal of Transportation and Statistics, Vol. 3, No. 2 : special issue on the statistical analysis and modeling of automotive emissions

    DOT National Transportation Integrated Search

    2000-09-01

    This special issue of the Journal of Transportation and Statistics is devoted to the statistical analysis and modeling of automotive emissions. It contains many of the papers presented in the mini-symposium last August and also includes one additiona...

  19. Heads Up! a Calculation- & Jargon-Free Approach to Statistics

    ERIC Educational Resources Information Center

    Giese, Alan R.

    2012-01-01

    Evaluating the strength of evidence in noisy data is a critical step in scientific thinking that typically relies on statistics. Students without statistical training will benefit from heuristic models that highlight the logic of statistical analysis. The likelihood associated with various coin-tossing outcomes gives students such a model. There…

  20. Comparative evaluation of statistical and mechanistic models of Escherichia coli at beaches in southern Lake Michigan

    USGS Publications Warehouse

    Safaie, Ammar; Wendzel, Aaron; Ge, Zhongfu; Nevers, Meredith; Whitman, Richard L.; Corsi, Steven R.; Phanikumar, Mantha S.

    2016-01-01

    Statistical and mechanistic models are popular tools for predicting the levels of indicator bacteria at recreational beaches. Researchers tend to use one class of model or the other, and it is difficult to generalize statements about their relative performance due to differences in how the models are developed, tested, and used. We describe a cooperative modeling approach for freshwater beaches impacted by point sources in which insights derived from mechanistic modeling were used to further improve the statistical models and vice versa. The statistical models provided a basis for assessing the mechanistic models which were further improved using probability distributions to generate high-resolution time series data at the source, long-term “tracer” transport modeling based on observed electrical conductivity, better assimilation of meteorological data, and the use of unstructured-grids to better resolve nearshore features. This approach resulted in improved models of comparable performance for both classes including a parsimonious statistical model suitable for real-time predictions based on an easily measurable environmental variable (turbidity). The modeling approach outlined here can be used at other sites impacted by point sources and has the potential to improve water quality predictions resulting in more accurate estimates of beach closures.

  1. Augmenting Latent Dirichlet Allocation and Rank Threshold Detection with Ontologies

    DTIC Science & Technology

    2010-03-01

    Probabilistic Latent Semantic Indexing (PLSI) is an automated indexing information retrieval model [20]. It is based on a statistical latent class model which is... uses a statistical foundation that is more accurate in finding hidden semantic relationships [20]. The model uses factor analysis of count data, number... principle of statistical inference which asserts that all of the information in a sample is contained in the likelihood function [20]. The statistical...

  2. A Census of Statistics Requirements at U.S. Journalism Programs and a Model for a "Statistics for Journalism" Course

    ERIC Educational Resources Information Center

    Martin, Justin D.

    2017-01-01

    This essay presents data from a census of statistics requirements and offerings at all 4-year journalism programs in the United States (N = 369) and proposes a model of a potential course in statistics for journalism majors. The author proposes that three philosophies underlie a statistics course for journalism students. Such a course should (a)…

  3. "Plateau"-related summary statistics are uninformative for comparing working memory models.

    PubMed

    van den Berg, Ronald; Ma, Wei Ji

    2014-10-01

    Performance on visual working memory tasks decreases as more items need to be remembered. Over the past decade, a debate has unfolded between proponents of slot models and slotless models of this phenomenon (Ma, Husain, & Bays, Nature Neuroscience 17, 347-356, 2014). Zhang and Luck (Nature 453(7192), 233-235, 2008) and Anderson, Vogel, and Awh (Attention, Perception, & Psychophysics 74(5), 891-910, 2011) noticed that as more items need to be remembered, "memory noise" seems to first increase and then reach a "stable plateau." They argued that three summary statistics characterizing this plateau are consistent with slot models, but not with slotless models. Here, we assess the validity of their methods. We generated synthetic data both from a leading slot model and from a recent slotless model and quantified model evidence using log Bayes factors. We found that the summary statistics provided at most 0.15% of the expected model evidence in the raw data. In a model recovery analysis, a total of more than a million trials were required to achieve 99% correct recovery when models were compared on the basis of summary statistics, whereas fewer than 1,000 trials were sufficient when raw data were used. Therefore, at realistic numbers of trials, plateau-related summary statistics are highly unreliable for model comparison. Applying the same analyses to subject data from Anderson et al. (2011), we found that the evidence in the summary statistics was at most 0.12% of the evidence in the raw data and far too weak to warrant any conclusions. The evidence in the raw data, in fact, strongly favored the slotless model. These findings call into question claims about working memory that are based on summary statistics.
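
    A toy sketch of the paper's comparison (Gaussian stand-in "models" and simulated trials, not the slot and slotless working memory models): model evidence computed from the raw data versus from a single summary statistic.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)

        # 300 simulated "trials"; the two rival models are fixed-sigma Gaussians.
        data = rng.normal(0.0, 1.0, 300)

        def log_evidence_raw(sigma):
            # Log likelihood of the full raw data set under one model.
            return stats.norm(0.0, sigma).logpdf(data).sum()

        def log_evidence_summary(sigma, n=300, reps=20_000):
            # Evidence carried by one summary statistic (the sample SD) alone,
            # estimated from its simulated sampling distribution.
            sds = np.std(rng.normal(0.0, sigma, (reps, n)), axis=1)
            kde = stats.gaussian_kde(sds)
            return np.log(kde(np.std(data))[0])

        for name, fn in [("raw", log_evidence_raw),
                         ("summary", log_evidence_summary)]:
            lbf = fn(1.0) - fn(1.2)   # log Bayes factor, true model vs rival
            print(f"log Bayes factor ({name} data): {lbf:.2f}")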

  4. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    PubMed

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.
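
    The article's walkthrough is SPSS-specific; as a hedged cross-check, the same class of model (a random-intercept LMM for long-format longitudinal data) can be sketched in Python's statsmodels on simulated data:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(9)

        # Simulated long-format longitudinal data: 50 subjects, 4 waves each.
        n_subj, n_wave = 50, 4
        subj = np.repeat(np.arange(n_subj), n_wave)
        time = np.tile(np.arange(n_wave), n_subj)
        u = rng.normal(0.0, 2.0, n_subj)[subj]      # random subject intercepts
        y = 10.0 + 1.5 * time + u + rng.normal(0.0, 1.0, n_subj * n_wave)
        df = pd.DataFrame({"subject": subj, "time": time, "y": y})

        # Random-intercept linear mixed model, analogous to an SPSS MIXED setup.
        model = smf.mixedlm("y ~ time", df, groups=df["subject"])
        print(model.fit().summary())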

  5. Equilibrium statistical-thermal models in high-energy physics

    NASA Astrophysics Data System (ADS)

    Tawfik, Abdel Nasser

    2014-05-01

    We review some recent highlights from the applications of statistical-thermal models to different experimental measurements and lattice QCD thermodynamics that have been made during the last decade. We start with a short review of the historical milestones on the path of constructing statistical-thermal models for heavy-ion physics. We find that Heinz Koppe formulated, in 1948, an almost complete recipe for the statistical-thermal models. In 1950, Enrico Fermi generalized this statistical approach, in which he started with a general cross-section formula and inserted into it the simplifying assumptions about the matrix element of the interaction process that likely reflect many features of the high-energy reactions dominated by density in the phase space of final states. In 1964, Hagedorn systematically analyzed the high-energy phenomena using all tools of statistical physics and introduced the concept of limiting temperature based on the statistical bootstrap model. It turns out quite often that many-particle systems can be studied with the help of statistical-thermal methods. The analysis of yield multiplicities in high-energy collisions gives overwhelming evidence for chemical equilibrium in the final state. The strange particles might be an exception, as they are suppressed at lower beam energies; however, their relative yields fulfill statistical equilibrium as well. We review the equilibrium statistical-thermal models for particle production, fluctuations, and collective flow in heavy-ion experiments. We also review their reproduction of the lattice QCD thermodynamics at vanishing and finite chemical potential. During the last decade, five conditions have been suggested to describe the universal behavior of the chemical freeze-out parameters. The higher-order moments of multiplicity have been discussed; they offer deep insights into particle production and critical fluctuations. Therefore, we use them to describe the freeze-out parameters and suggest the location of the QCD critical endpoint. Various extensions have been proposed in order to take into consideration possible deviations from the ideal hadron gas. We highlight various types of interactions, dissipative properties, and location dependences (spatial rapidity). Furthermore, we review three models combining hadronic with partonic phases: the quasi-particle model, the linear sigma model with Polyakov potentials, and the compressible bag model.

  6. Improvements to an earth observing statistical performance model with applications to LWIR spectral variability

    NASA Astrophysics Data System (ADS)

    Zhao, Runchen; Ientilucci, Emmett J.

    2017-05-01

    Hyperspectral remote sensing systems provide spectral data composed of hundreds of narrow spectral bands. Spectral remote sensing systems can be used to identify targets, for example, without physical interaction. Often it is of interest to characterize the spectral variability of targets or objects. The purpose of this paper is to identify and characterize the LWIR spectral variability of targets based on an improved earth observing statistical performance model, known as the Forecasting and Analysis of Spectroradiometric System Performance (FASSP) model. FASSP contains three basic modules: a scene model, a sensor model, and a processing model. Instead of using only the mean surface reflectance as input to the model, FASSP transfers user-defined statistical characteristics of a scene through the image chain (i.e., from source to sensor). The radiative transfer model MODTRAN is used to simulate the radiative transfer based on user-defined atmospheric parameters. To retrieve class emissivity and temperature statistics, or temperature/emissivity separation (TES), an LWIR atmospheric compensation method is necessary. The FASSP model has a method to transform statistics in the visible (i.e., ELM) but currently does not have an LWIR TES algorithm in place. This paper addresses the implementation of such a TES algorithm and its associated transformation of statistics.

  7. NASA Tech Briefs, April 2011

    NASA Technical Reports Server (NTRS)

    2011-01-01

    Topics covered include: Amperometric Solid Electrolyte Oxygen Microsensors with Easy Batch Fabrication; Two-Axis Direct Fluid Shear Stress Sensor for Aerodynamic Applications; Target Assembly to Check Boresight Alignment of Active Sensors; Virtual Sensor Test Instrumentation; Evaluation of the Reflection Coefficient of Microstrip Elements for Reflectarray Antennas; Miniaturized Ka-Band Dual-Channel Radar; Continuous-Integration Laser Energy Lidar Monitor; Miniaturized Airborne Imaging Central Server System; Radiation-Tolerant, SpaceWire-Compatible Switching Fabric; Small Microprocessor for ASIC or FPGA Implementation; Source-Coupled, N-Channel, JFET-Based Digital Logic Gate Structure Using Resistive Level Shifters; High-Voltage-Input Level Translator Using Standard CMOS; Monitoring Digital Closed-Loop Feedback Systems; MASCOT - MATLAB Stability and Control Toolbox; MIRO Continuum Calibration for Asteroid Mode; GOATS Image Projection Component; Coded Modulation in C and MATLAB; Low-Dead-Volume Inlet for Vacuum Chamber; Thermal Control Method for High-Current Wire Bundles by Injecting a Thermally Conductive Filler; Method for Selective Cleaning of Mold Release from Composite Honeycomb Surfaces; Infrared-Bolometer Arrays with Reflective Backshorts; Commercialization of LARC (trademark)-SI Polyimide Technology; Novel Low-Density Ablators Containing Hyperbranched Poly(azomethine)s; Carbon Nanotubes on Titanium Substrates for Stray Light Suppression; Monolithic, High-Speed Fiber-Optic Switching Array for Lidar; Grid-Tied Photovoltaic Power System; Spectroelectrochemical Instrument Measures TOC; A Miniaturized Video System for Monitoring Drosophila Behavior; Hydrofocusing Bioreactor Produces Anti-Cancer Alkaloids; Creep Measurement Video Extensometer; Radius of Curvature Measurement of Large Optics Using Interferometry and Laser Tracker; n-B-pi-p Superlattice Infrared Detector; Safe Onboard Guidance and Control Under Probabilistic Uncertainty; General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets; Hidden Statistics of Schroedinger Equation; Optimal Padding for the Two-Dimensional Fast Fourier Transform; Spatial Query for Planetary Data; Higher Order Mode Coupling in Feed Waveguide of a Planar Slot Array Antenna; Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems; Sampling Theorem in Terms of the Bandwidth and Sampling Interval; Meteoroid/Orbital Debris Shield Engineering Development Practice and Procedure; Self-Balancing, Optical-Center-Pivot, Fast-Steering Mirror; Wireless Orbiter Hang-Angle Inclinometer System; and Internal Electrostatic Discharge Monitor - IESDM.

  8. Bayesian statistics in medicine: a 25 year review.

    PubMed

    Ashby, Deborah

    2006-11-15

    This review examines the state of Bayesian thinking as Statistics in Medicine was launched in 1982, reflecting particularly on its applicability and uses in medical research. It then looks at each subsequent five-year epoch, with a focus on papers appearing in Statistics in Medicine, putting these in the context of major developments in Bayesian thinking and computation with reference to important books, landmark meetings and seminal papers. It charts the growth of Bayesian statistics as it is applied to medicine and makes predictions for the future. From sparse beginnings, where Bayesian statistics was barely mentioned, Bayesian statistics has now permeated all the major areas of medical statistics, including clinical trials, epidemiology, meta-analyses and evidence synthesis, spatial modelling, longitudinal modelling, survival modelling, molecular genetics and decision-making in respect of new technologies.

  9. The epistemology of mathematical and statistical modeling: a quiet methodological revolution.

    PubMed

    Rodgers, Joseph Lee

    2010-01-01

    A quiet methodological revolution, a modeling revolution, has occurred over the past several decades, almost without discussion. In contrast, the 20th century ended with contentious argument over the utility of null hypothesis significance testing (NHST). The NHST controversy may have been at least partially irrelevant, because in certain ways the modeling revolution obviated the NHST argument. I begin with a history of NHST and modeling and their relation to one another. Next, I define and illustrate principles involved in developing and evaluating mathematical models. I then discuss the difference between using statistical procedures within a rule-based framework and building mathematical models from a scientific epistemology. Only the former is treated carefully in most psychology graduate training. The pedagogical implications of this imbalance and the revised pedagogy required to account for the modeling revolution are described. To conclude, I discuss how attention to modeling implies shifting statistical practice in certain progressive ways. The epistemological basis of statistics has moved away from being a set of procedures, applied mechanistically, and toward building and evaluating statistical and scientific models. Copyright 2009 APA, all rights reserved.

  10. Local sensitivity analysis for inverse problems solved by singular value decomposition

    USGS Publications Warehouse

    Hill, M.C.; Nolan, B.T.

    2010-01-01

    Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process-model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA's Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not from its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ in that (1) use of CSS/PCC can be more awkward because sensitivity and interdependence are considered separately, and (2) the identifiability statistic requires a choice of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov chain Monte Carlo, given common nonlinear processes and often even more nonlinear models.
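
    A minimal sketch of the statistics involved (the Jacobian here is random stand-in data, not Root Zone Water Quality Model output): CSS, PCC, and SVD parameters derived from a weighted sensitivity matrix.

        import numpy as np

        rng = np.random.default_rng(7)

        # Stand-in weighted Jacobian: 1,670 observations x 16 log parameters.
        J = rng.normal(size=(1670, 16))

        # Composite scaled sensitivity: root-mean-square column sensitivity.
        css = np.sqrt(np.mean(J ** 2, axis=0))

        # Parameter correlation coefficients from the normal-equations matrix;
        # |PCC| near 1.00 flags interdependent parameter pairs.
        cov = np.linalg.inv(J.T @ J)
        d = np.sqrt(np.diag(cov))
        pcc = cov / np.outer(d, d)

        # SVD parameters: orthogonal combinations of the process parameters,
        # ordered by singular value (i.e., by estimability).
        U, s, Vt = np.linalg.svd(J, full_matrices=False)
        print(css.round(2), s[:4].round(1), pcc[0, 1].round(2))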

  11. Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models

    DTIC Science & Technology

    2015-09-12

    AFRL-AFOSR-VA-TR-2015-0278: Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models; Katya Scheinberg; grant FA9550-11-1-0239. ...developed, which has been the focus of our research. Subject terms: optimization, derivative-free optimization, statistical machine learning.

  12. 78 FR 70303 - Announcement of Requirements and Registration for the Predict the Influenza Season Challenge

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-25

    ... public. Mathematical and statistical models can be useful in predicting the timing and impact of the... applying any mathematical, statistical, or other approach to predictive modeling. This challenge will... Services (HHS) region level(s) in the United States by developing mathematical and statistical models that...

  13. Developing Statistical Knowledge for Teaching during Design-Based Research

    ERIC Educational Resources Information Center

    Groth, Randall E.

    2017-01-01

    Statistical knowledge for teaching is not precisely equivalent to statistics subject matter knowledge. Teachers must know how to make statistics understandable to others as well as understand the subject matter themselves. This dual demand on teachers calls for the development of viable teacher education models. This paper offers one such model,…

  14. Maximum entropy models of ecosystem functioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertram, Jason, E-mail: jason.bertram@anu.edu.au

    2014-12-05

    Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes' broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense, using a savanna plant ecology model as an example.
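
    A minimal sketch of the MaxEnt step itself (the states and constrained mean are assumed numbers): maximize Shannon entropy over discrete states subject to normalization and a fixed mean.

        import numpy as np
        from scipy.optimize import minimize

        # Discrete states (e.g. binned trait values) and an assumed target mean.
        states = np.arange(10)
        target_mean = 3.0

        def neg_entropy(p):
            p = np.clip(p, 1e-12, None)   # guard the log at the boundary
            return np.sum(p * np.log(p))

        constraints = [
            {"type": "eq", "fun": lambda p: p.sum() - 1.0},             # normalization
            {"type": "eq", "fun": lambda p: p @ states - target_mean},  # mean constraint
        ]
        res = minimize(neg_entropy, np.full(10, 0.1), method="SLSQP",
                       bounds=[(0.0, 1.0)] * 10, constraints=constraints)

        # The maximizer is Boltzmann-like: p_i proportional to exp(-lambda * x_i).
        print(res.x.round(4))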

  15. The crossing statistic: dealing with unknown errors in the dispersion of Type Ia supernovae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shafieloo, Arman; Clifton, Timothy; Ferreira, Pedro, E-mail: arman@ewha.ac.kr, E-mail: tclifton@astro.ox.ac.uk, E-mail: p.ferreira1@physics.ox.ac.uk

    2011-08-01

    We propose a new statistic that has been designed to be used in situations where the intrinsic dispersion of a data set is not well known: the Crossing Statistic. This statistic is in general less sensitive than χ² to the intrinsic dispersion of the data, and hence allows us to make progress in distinguishing between different models using goodness of fit to the data even when the errors involved are poorly understood. The proposed statistic makes use of the shape and trends of a model's predictions in a quantifiable manner. It is applicable to a variety of circumstances, although we consider it to be especially well suited to the task of distinguishing between different cosmological models using type Ia supernovae. We show that this statistic can easily distinguish between different models in cases where the χ² statistic fails. We also show that the last mode of the Crossing Statistic is identical to χ², so that it can be considered as a generalization of χ².

  16. Security of statistical data bases: invasion of privacy through attribute correlational modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palley, M.A.

    This study develops, defines, and applies a statistical technique for the compromise of confidential information in a statistical data base. Attribute Correlational Modeling (ACM) recognizes that the information contained in a statistical data base represents real world statistical phenomena. As such, ACM assumes correlational behavior among the database attributes. ACM proceeds to compromise confidential information through creation of a regression model, where the confidential attribute is treated as the dependent variable. The typical statistical data base may preclude the direct application of regression. In this scenario, the research introduces the notion of a synthetic data base, created through legitimate queries of the actual data base, and through proportional random variation of responses to these queries. The synthetic data base is constructed to resemble the actual data base as closely as possible in a statistical sense. ACM then applies regression analysis to the synthetic data base, and utilizes the derived model to estimate confidential information in the actual data base.
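
    The compromise procedure described above can be sketched in a few lines. Everything here is hypothetical (the attribute names, the noise level standing in for proportional perturbation of query responses), and the regression step is plain least squares:

```python
# Hedged sketch of attribute correlational modeling: build a "synthetic"
# table from proportionally perturbed values, then regress the confidential
# attribute on the public ones. All data and noise levels are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 500
age = rng.uniform(25, 65, n)
tenure = age - 25 + rng.normal(0, 3, n)
salary = 20000 + 900 * age + 400 * tenure + rng.normal(0, 5000, n)  # confidential

# Proportional random variation stands in for the perturbed query responses
# that make up the synthetic data base of the abstract.
synth_age = age * (1 + rng.normal(0, 0.02, n))
synth_tenure = tenure * (1 + rng.normal(0, 0.02, n))
synth_salary = salary * (1 + rng.normal(0, 0.02, n))

X = np.column_stack([np.ones(n), synth_age, synth_tenure])
beta, *_ = np.linalg.lstsq(X, synth_salary, rcond=None)

# The fitted model now estimates the confidential attribute in the real base.
estimate = beta @ np.array([1.0, age[0], tenure[0]])
print(beta, estimate, salary[0])
```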

  17. Network Data: Statistical Theory and New Models

    DTIC Science & Technology

    2016-02-17

    During this period of review, Bin Yu worked on many thrusts of high-dimensional statistical theory and methodologies. Her...research covered a wide range of topics in statistics including analysis and methods for spectral clustering for sparse and structured networks...2,7,8,21], sparse modeling (e.g. Lasso) [4,10,11,17,18,19], statistical guarantees for the EM algorithm [3], statistical analysis of algorithm leveraging

  18. Spectrum of Quantized Energy for a Lengthening Pendulum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Jeong Ryeol; Song, Jinny; Hong, Seong Ju

    We consider a quantum system of a simple pendulum whose string length increases at a steady rate. Since the string length is a function of time, this system is described by a time-dependent Hamiltonian. The invariant operator method is very useful for finding the quantum solutions of time-dependent Hamiltonian systems like this. The invariant operator of the system is represented in terms of the lowering operator a(t) and the raising operator a†(t). The Schroedinger solutions ψ_n(θ, t), whose spectrum is discrete, are obtained by means of the invariant operator. The expectation value of the Hamiltonian in the ψ_n(θ, t) state is the same as the quantum energy. At first, we considered only the θ² term in the Hamiltonian in order to evaluate the quantized energy. A numerical study of the quantum energy correction is also made by considering the angle variable not only up to the θ⁴ term but also up to the θ⁶ term in the Hamiltonian, using perturbation theory.
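
    For orientation, one common form of such a Hamiltonian, together with the small-angle expansion that produces the θ², θ⁴, and θ⁶ terms discussed above; the linear lengthening law l(t) = l₀ + vt is an illustrative assumption, not something stated in the record:

```latex
H(t) = \frac{p_\theta^{2}}{2\,m\,l(t)^{2}} + m g\, l(t)\,\bigl(1-\cos\theta\bigr),
\qquad l(t) = l_0 + v t,
\qquad
1-\cos\theta = \frac{\theta^{2}}{2} - \frac{\theta^{4}}{24} + \frac{\theta^{6}}{720} - \cdots
```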

  19. Quantum dynamics of a plane pendulum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leibscher, Monika; Schmidt, Burkhard

    A semianalytical approach to the quantum dynamics of a plane pendulum is developed, based on Mathieu functions which appear as stationary wave functions. The time-dependent Schroedinger equation is solved for pendular analogs of coherent and squeezed states of a harmonic oscillator, induced by instantaneous changes of the periodic potential energy function. Coherent pendular states are discussed between the harmonic limit for small displacements and the inverted pendulum limit, while squeezed pendular states are shown to interpolate between vibrational and free rotational motion. In the latter case, full and fractional revivals as well as spatiotemporal structures in the time evolution of the probability densities (quantum carpets) are quantitatively analyzed. Corresponding expressions for the mean orientation are derived in terms of Mathieu functions in time. For periodic double well potentials, different revival schemes and different quantum carpets are found for the even and odd initial states forming the ground tunneling doublet. Time evolution of the mean alignment allows the separation of states with different parity. Implications for external (rotational) and internal (torsional) motion of molecules induced by intense laser fields are discussed.
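
    Because the stationary states are Mathieu functions, the quantized pendulum energies can be sketched directly from the Mathieu characteristic values available in SciPy. The substitution φ = 2v and the dimensionless parameter values below are illustrative assumptions, not the paper's actual parameters:

```python
# Sketch: pendulum energy levels from Mathieu characteristic values.
# With phi = 2v, the stationary Schroedinger equation for the pendulum maps
# onto Mathieu's equation y'' + (a - 2q cos 2v) y = 0 with q = 4*I*m*g*l/hbar^2
# (after a shift v -> v + pi/2 that flips the sign of cos 2v), giving
# E_n = m*g*l + (hbar^2 / (8*I)) * c_n for characteristic value c_n.
import numpy as np
from scipy.special import mathieu_a, mathieu_b

hbar = 1.0
m, g, l = 1.0, 1.0, 1.0        # hypothetical units
I = m * l**2                   # moment of inertia
q = 4.0 * I * m * g * l / hbar**2

levels = []
for n in range(5):
    levels.append(("a", n, mathieu_a(n, q)))      # even Mathieu solutions
    if n > 0:
        levels.append(("b", n, mathieu_b(n, q)))  # odd Mathieu solutions

for kind, n, c in sorted(levels, key=lambda t: t[2]):
    E = m * g * l + hbar**2 / (8.0 * I) * c
    print(f"{kind}{n}: E = {E:.4f}")
```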

  20. An Electron is the God Particle

    NASA Astrophysics Data System (ADS)

    Wolff, Milo

    2001-04-01

    Philosophers and physicists, including Clifford, Mach, Einstein, Weyl, Dirac, and Schroedinger, believed that only a wave structure of particles could satisfy experiment and fulfill reality. A quantum Wave Structure of Matter is described here. It predicts the natural laws more accurately and completely than the classic laws. Einstein reasoned that the universe depends on particles which are "spherically, spatially extended in space" and "Hence a discrete material particle has no place as a fundamental concept in a field theory." Thus the discrete point particle was wrong. He deduced the true electron is primal because its force range is infinite. Now, it is found the electron's wave structure contains the laws of Nature that rule the universe. The electron plays the role of creator - the God particle. Electron structure is a pair of spherical outward/inward quantum waves, convergent to a center in 3D space. This wave pair creates a h/4pi quantum spin when the in-wave spherically rotates to become the out-wave. Both waves form a spinor satisfying the Dirac Equation. Thus, the universe is binary like a computer. Reference: http://members.tripod.com/mwolff

  1. SHORT-WAVELENGTH MAGNETIC BUOYANCY INSTABILITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mizerski, K. A.; Davies, C. R.; Hughes, D. W., E-mail: kamiz@igf.edu.pl, E-mail: tina@maths.leeds.ac.uk, E-mail: d.w.hughes@leeds.ac.uk

    2013-04-01

    Magnetic buoyancy instability plays an important role in the evolution of astrophysical magnetic fields. Here we revisit the problem introduced by Gilman of the short-wavelength linear stability of a plane layer of compressible isothermal fluid permeated by a horizontal magnetic field of strength decreasing with height. Dissipation of momentum and magnetic field is neglected. By the use of a Rayleigh-Schroedinger perturbation analysis, we explain in detail the limit in which the transverse horizontal wavenumber of the perturbation, denoted by k, is large (i.e., short horizontal wavelength) and show that the fastest growing perturbations become localized in the vertical direction as k is increased. The growth rates are determined by a function of the vertical coordinate z since, in the large k limit, the eigenmodes are strongly localized in the vertical direction. We consider in detail the case of two-dimensional perturbations varying in the directions perpendicular to the magnetic field, which, for sufficiently strong field gradients, are the most unstable. The results of our analysis are backed up by comparison with a series of initial value problems. Finally, we extend the analysis to three-dimensional perturbations.
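
    For reference, the standard Rayleigh-Schroedinger expansion invoked above, written through second order in a perturbation H' of a discrete spectrum (the generic textbook form, not the paper's specific operators):

```latex
E_n = E_n^{(0)} + \langle n | H' | n \rangle
      + \sum_{m \neq n} \frac{\left|\langle m | H' | n \rangle\right|^{2}}{E_n^{(0)} - E_m^{(0)}}
      + \mathcal{O}\!\left(H'^{3}\right)
```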

  2. Search for Effects of an Electrostatic Potential on Clocks in the Frame of Reference of a Charged Particle

    NASA Technical Reports Server (NTRS)

    Ringermacher, Harry I.; Conradi, Mark S.; Cassenti, Brice

    2005-01-01

    Results of experiments to confirm a theory that links classical electromagnetism with the geometry of spacetime are described. The theory, based on the introduction of a Torsion tensor into Einstein's equations and following the approach of Schroedinger, predicts effects on clocks attached to charged particles, subject to intense electric fields, analogous to the effects on clocks in a gravitational field. We show that in order to interpret this theory, one must re-interpret all clock changes, both gravitational and electromagnetic, as arising from changes in potential energy and not merely potential. The clock is provided naturally by proton spins in hydrogen atoms subject to Nuclear Magnetic Resonance trials. No frequency change of clocks was observed to a resolution of 6×10(exp -9). A new "Clock Principle" was postulated to explain the null result. There are two possible implications of the experiments: (a) The Clock Principle is invalid and, in fact, no metric theory incorporating electromagnetism is possible; (b) The Clock Principle is valid and it follows that a negative rest mass cannot exist.

  3. Calculation of the electron wave function in a graded-channel double-heterojunction modulation-doped field-effect transistor

    NASA Technical Reports Server (NTRS)

    Mui, D. S. L.; Patil, M. B.; Morkoc, H.

    1989-01-01

    Three double-heterojunction modulation-doped field-effect transistor structures with different channel compositions are investigated theoretically. All of these transistors have an In(x)Ga(1-x)As channel sandwiched between two doped Al(0.3)Ga(0.7)As barriers with undoped spacer layers. In one of the structures, x varies quadratically from 0 at either heterojunction to 0.15 at the center of the channel; in the other two, constant values of x of 0 and 0.15 are used. The Poisson and Schroedinger equations are solved self-consistently for the electron wave function in all three cases. The results show that the two-dimensional electron gas (2DEG) concentration in the channel of the quadratically graded structure is higher than in the x = 0 structure and slightly lower than in the x = 0.15 one, and that the mean position of the 2DEG is closer to the center of the channel in this transistor than in the other two. These two effects have important implications for the electron mobility in the channel.

  4. Evaluating statistical consistency in the ocean model component of the Community Earth System Model (pyCECT v2.0)

    NASA Astrophysics Data System (ADS)

    Baker, Allison H.; Hu, Yong; Hammerling, Dorit M.; Tseng, Yu-heng; Xu, Haiying; Huang, Xiaomeng; Bryan, Frank O.; Yang, Guangwen

    2016-07-01

    The Parallel Ocean Program (POP), the ocean model component of the Community Earth System Model (CESM), is widely used in climate research. Most current work in CESM-POP focuses on improving the model's efficiency or accuracy, such as improving numerical methods, advancing parameterization, porting to new architectures, or increasing parallelism. Since ocean dynamics are chaotic in nature, achieving bit-for-bit (BFB) identical results in ocean solutions cannot be guaranteed for even tiny code modifications, and determining whether modifications are admissible (i.e., statistically consistent with the original results) is non-trivial. In recent work, an ensemble-based statistical approach was shown to work well for software verification (i.e., quality assurance) on atmospheric model data. The general idea of ensemble-based statistical consistency testing is to use a qualitative measurement of the variability of the ensemble of simulations as a metric with which to compare future simulations and make a determination of statistical distinguishability. The capability to determine consistency without BFB results boosts model confidence and provides the flexibility needed, for example, for more aggressive code optimizations and the use of heterogeneous execution environments. Since ocean and atmosphere models have differing characteristics in terms of dynamics, spatial variability, and timescales, we present a new statistical method to evaluate ocean model simulation data that requires the evaluation of ensemble means and deviations in a spatial manner. In particular, the statistical distribution from an ensemble of CESM-POP simulations is used to determine the standard score of any new model solution at each grid point. Then the percentage of points that have scores greater than a specified threshold indicates whether the new model simulation is statistically distinguishable from the ensemble simulations. Both ensemble size and composition are important. Our experiments indicate that the new POP ensemble consistency test (POP-ECT) tool is capable of distinguishing cases that should be statistically consistent with the ensemble and those that should not, as well as providing a simple, objective and systematic way to detect errors in CESM-POP due to the hardware or software stack, positively contributing to quality assurance for the CESM-POP code.
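
    The per-grid-point standard-score test described above reduces to a few lines of array code. A minimal sketch with a hypothetical ensemble, threshold, and allowed failure fraction (the real POP-ECT tool derives its criteria from the ensemble itself):

```python
# Minimal sketch of an ensemble z-score consistency test; field shapes,
# threshold, and failure fraction are hypothetical choices.
import numpy as np

rng = np.random.default_rng(2)
ens = rng.normal(15.0, 0.3, size=(30, 64, 128))    # 30-member ensemble of fields

mu = ens.mean(axis=0)
sd = ens.std(axis=0, ddof=1)

def pop_ect(new_run, z_thresh=2.0, max_fail_frac=0.10):
    """Flag a run whose per-grid-point standard scores exceed the threshold
    at more than the allowed fraction of points."""
    z = np.abs((new_run - mu) / sd)
    frac = float(np.mean(z > z_thresh))
    return frac, frac <= max_fail_frac

ok_run = rng.normal(15.0, 0.3, size=(64, 128))
bad_run = ok_run + 0.5                             # systematic perturbation
print(pop_ect(ok_run))    # expected: small exceedance fraction, passes
print(pop_ect(bad_run))   # expected: large exceedance fraction, fails
```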

  5. Bayesian models based on test statistics for multiple hypothesis testing problems.

    PubMed

    Ji, Yuan; Lu, Yiling; Mills, Gordon B

    2008-04-01

    We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
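
    A hedged sketch of the two-group idea behind such methods: model the test statistics as a mixture of null and alternative densities, compute posterior null probabilities, and reject the statistics with the smallest posterior null probabilities while their running mean (a Bayesian FDR) stays below the target. The mixture weights and alternative distribution are fixed here for illustration rather than estimated as in the paper:

```python
# Two-group mixture on z-type statistics with a Bayesian FDR cut.
# pi0, mu1, s1 are assumed known for this sketch, not fitted.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
z = np.concatenate([rng.normal(0, 1, 900),        # null statistics
                    rng.normal(3, 1, 100)])       # alternative statistics

pi0, mu1, s1 = 0.9, 3.0, 1.0
f0 = norm.pdf(z, 0.0, 1.0)
f1 = norm.pdf(z, mu1, s1)
p_null = pi0 * f0 / (pi0 * f0 + (1 - pi0) * f1)   # posterior prob. of the null

# Reject the smallest posterior null probabilities while their running
# mean (the Bayesian FDR of the rejection set) stays below alpha.
alpha = 0.05
order = np.argsort(p_null)
bfdr = np.cumsum(p_null[order]) / np.arange(1, z.size + 1)
hits = np.nonzero(bfdr <= alpha)[0]
n_reject = int(hits.max() + 1) if hits.size else 0
print(n_reject)
```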

  6. Different Manhattan project: automatic statistical model generation

    NASA Astrophysics Data System (ADS)

    Yap, Chee Keng; Biermann, Henning; Hertzmann, Aaron; Li, Chen; Meyer, Jon; Pao, Hsing-Kuo; Paxia, Salvatore

    2002-03-01

    We address the automatic generation of large geometric models. This is important in visualization for several reasons. First, many applications need access to large but interesting data models. Second, we often need such data sets with particular characteristics (e.g., urban models, park and recreation landscape). Thus we need the ability to generate models with different parameters. We propose a new approach for generating such models. It is based on a top-down propagation of statistical parameters. We illustrate the method in the generation of a statistical model of Manhattan. But the method is generally applicable in the generation of models of large geographical regions. Our work is related to the literature on generating complex natural scenes (smoke, forests, etc.) based on procedural descriptions. The difference in our approach stems from three characteristics: modeling with statistical parameters, integration of ground truth (actual map data), and a library-based approach for texture mapping.

  7. Identifiability of PBPK Models with Applications to ...

    EPA Pesticide Factsheets

    Any statistical model should be identifiable in order for estimates and tests using it to be meaningful. We consider statistical analysis of physiologically-based pharmacokinetic (PBPK) models in which parameters cannot be estimated precisely from available data, and discuss different types of identifiability that occur in PBPK models and give reasons why they occur. We particularly focus on how the mathematical structure of a PBPK model and lack of appropriate data can lead to statistical models in which it is impossible to estimate at least some parameters precisely. Methods are reviewed which can determine whether a purely linear PBPK model is globally identifiable. We propose a theorem which determines when identifiability at a set of finite and specific values of the mathematical PBPK model (global discrete identifiability) implies identifiability of the statistical model. However, we are unable to establish conditions that imply global discrete identifiability, and conclude that the only safe approach to analysis of PBPK models involves Bayesian analysis with truncated priors. Finally, computational issues regarding posterior simulations of PBPK models are discussed. The methodology is very general and can be applied to numerous PBPK models which can be expressed as linear time-invariant systems. A real data set of a PBPK model for exposure to dimethyl arsinic acid (DMA(V)) is presented to illustrate the proposed methodology.

  8. Modeling Statistics of Fish Patchiness and Predicting Associated Influence on Statistics of Acoustic Echoes

    DTIC Science & Technology

    2013-09-30

    published 3-D multi-beam data. The Niwa and Anderson models were compared with 3-D multi-beam data collected by Paramo and Gerlotto. The data were...submitted, refereed] Bhatia, S., T.K. Stanton, J. Paramo , and F. Gerlotto (under revision), “Modeling statistics of fish school dimensions using 3-D

  9. Modeling Statistics of Fish Patchiness and Predicting Associated Influence on Statistics of Acoustic Echoes

    DTIC Science & Technology

    2013-09-30

    data. The Niwa and Anderson models were compared with 3-D multi-beam data collected by Paramo and Gerlotto. The data were consistent with the...Bhatia, S., T.K. Stanton, J. Paramo , and F. Gerlotto (under revision), “Modeling statistics of fish school dimensions using 3-D data from a

  10. Statistical Compression for Climate Model Output

    NASA Astrophysics Data System (ADS)

    Hammerling, D.; Guinness, J.; Soh, Y. J.

    2017-12-01

    Numerical climate model simulations run at high spatial and temporal resolutions generate massive quantities of data. As our computing capabilities continue to increase, storing all of the data is not sustainable, and thus it is important to develop methods for representing the full datasets by smaller compressed versions. We propose a statistical compression and decompression algorithm based on storing a set of summary statistics as well as a statistical model describing the conditional distribution of the full dataset given the summary statistics. We decompress the data by computing conditional expectations and conditional simulations from the model given the summary statistics. Conditional expectations represent our best estimate of the original data but are subject to oversmoothing in space and time. Conditional simulations introduce realistic small-scale noise so that the decompressed fields are neither too smooth nor too rough compared with the original data. Considerable attention is paid to accurately modeling the original dataset (one year of daily mean temperature data), particularly with regard to the inherent spatial nonstationarity in global fields, and to determining the statistics to be stored, so that the variation in the original data can be closely captured, while allowing for fast decompression and conditional emulation on modest computers.

  11. Crash Lethality Model

    DTIC Science & Technology

    2012-06-06

    [Table-of-contents fragments: Statistical Data; Parametric Model for Rotor Wing Debris Area; Skid Distance Statistical Data.] The curve that related the BC value to the probability of skull fracture resulted in a tight confidence interval and a two-tailed statistical p...

  12. A Stochastic Fractional Dynamics Model of Rainfall Statistics

    NASA Astrophysics Data System (ADS)

    Kundu, Prasun; Travis, James

    2013-04-01

    Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is designed to faithfully reflect the scale dependence and is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. The main restriction is the assumption that the statistics of the precipitation field are spatially homogeneous and isotropic and stationary in time. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second moment statistics of the radar data. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well without any further adjustment. Some data sets, containing periods of non-stationary behavior that involve occasional anomalously correlated rain events, present a challenge for the model.

  13. Statistical Parameter Study of the Time Interval Distribution for Nonparalyzable, Paralyzable, and Hybrid Dead Time Models

    NASA Astrophysics Data System (ADS)

    Syam, Nur Syamsi; Maeng, Seongjin; Kim, Myo Gwang; Lim, Soo Yeon; Lee, Sang Hoon

    2018-05-01

    A large dead time of a Geiger Mueller (GM) detector may cause a large count loss in radiation measurements and consequently may distort the Poisson statistics of radiation events into a new distribution. The new distribution will have different statistical parameters compared to the original distribution. Therefore, the variance, skewness, and excess kurtosis of the time interval distribution, in association with the observed count rate, were studied for the well-known nonparalyzable, paralyzable, and nonparalyzable-paralyzable hybrid dead time models of a Geiger Mueller detector using Monte Carlo simulation (GMSIM). These parameters were then compared with the statistical parameters of a perfect detector to observe the change in the distribution. The results show that the behaviors of the statistical parameters for the three dead time models were different. The values of the skewness and the excess kurtosis of the nonparalyzable model are equal or very close to those of the perfect detector, which are ≅2 for skewness and ≅6 for excess kurtosis, while the statistical parameters in the paralyzable and hybrid models reach minimum values around the maximum observed count rates. The different trends of the three models resulting from the GMSIM simulation can be used to distinguish the dead time behavior of a GM counter, i.e., whether the GM counter is best described by the nonparalyzable, paralyzable, or hybrid model. In a future study, these statistical parameters need to be analyzed further to determine whether they can be used to estimate the dead time for each model, particularly for the paralyzable and hybrid models.
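
    The distortion the abstract describes is easy to reproduce by Monte Carlo. A sketch with a hypothetical count rate and dead time (this is not GMSIM itself); for a perfect detector the exponential intervals give skewness ≈ 2 and excess kurtosis ≈ 6, and the two classic dead-time models pull these away differently:

```python
# Monte Carlo sketch of dead-time distortion of interval statistics.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(4)
rate, tau, n = 1.0e4, 2.0e-4, 200_000         # true rate (1/s), dead time (s)
arrivals = np.cumsum(rng.exponential(1.0 / rate, n))

def nonparalyzable(t, tau):
    """Keep an event only if tau has elapsed since the last *recorded* event."""
    out = [t[0]]
    for ti in t[1:]:
        if ti - out[-1] >= tau:
            out.append(ti)
    return np.array(out)

def paralyzable(t, tau):
    """Keep an event only if tau has elapsed since the previous *arrival*."""
    keep = np.concatenate(([True], np.diff(t) >= tau))
    return t[keep]

for name, model in (("nonparalyzable", nonparalyzable), ("paralyzable", paralyzable)):
    intervals = np.diff(model(arrivals, tau))
    # scipy's kurtosis() returns excess kurtosis by default.
    print(name, skew(intervals), kurtosis(intervals))
```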

  14. Statistical ecology comes of age.

    PubMed

    Gimenez, Olivier; Buckland, Stephen T; Morgan, Byron J T; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric

    2014-12-01

    The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1-4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data.

  15. Statistical ecology comes of age

    PubMed Central

    Gimenez, Olivier; Buckland, Stephen T.; Morgan, Byron J. T.; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M.; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M.; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric

    2014-01-01

    The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1–4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data. PMID:25540151

  16. Statistical aspects of carbon fiber risk assessment modeling. [fire accidents involving aircraft

    NASA Technical Reports Server (NTRS)

    Gross, D.; Miller, D. R.; Soland, R. M.

    1980-01-01

    The probabilistic and statistical aspects of the carbon fiber risk assessment modeling of fire accidents involving commercial aircraft are examined. Three major sources of uncertainty in the modeling effort are identified. These are: (1) imprecise knowledge in establishing the model; (2) parameter estimation; and (3) Monte Carlo sampling error. All three sources of uncertainty are treated and statistical procedures are utilized and/or developed to control them wherever possible.

  17. A Statistical Approach For Modeling Tropical Cyclones. Synthetic Hurricanes Generator Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pasqualini, Donatella

    This manuscript briefly describes a statistical approach to generate synthetic tropical cyclone tracks to be used in risk evaluations. The Synthetic Hurricane Generator (SynHurG) model allows modeling hurricane risk in the United States, supporting decision makers and implementations of adaptation strategies to extreme weather. In the literature there are mainly two approaches to model hurricane hazard for risk prediction: deterministic-statistical approaches, where the storm key physical parameters are calculated using physical complex climate models and the tracks are usually determined statistically from historical data; and statistical approaches, where both variables and tracks are estimated stochastically using historical records. SynHurG falls in the second category, adopting a pure stochastic approach.

  18. Menzerath-Altmann Law: Statistical Mechanical Interpretation as Applied to a Linguistic Organization

    NASA Astrophysics Data System (ADS)

    Eroglu, Sertac

    2014-10-01

    The distribution behavior described by the empirical Menzerath-Altmann law is frequently encountered during the self-organization of linguistic and non-linguistic natural organizations at various structural levels. This study presents a statistical mechanical derivation of the law based on the analogy between the classical particles of a statistical mechanical organization and the distinct words of a textual organization. The derived model, a transformed (generalized) form of the Menzerath-Altmann model, is termed the statistical mechanical Menzerath-Altmann model. The derived model allows interpreting the model parameters in terms of physical concepts. We also propose that many organizations exhibiting Menzerath-Altmann law behavior, whether linguistic or not, can be methodically examined by the transformed distribution model through a properly defined structure-dependent parameter and the energy-associated states.

  19. Exposure time independent summary statistics for assessment of drug dependent cell line growth inhibition.

    PubMed

    Falgreen, Steffen; Laursen, Maria Bach; Bødker, Julie Støve; Kjeldsen, Malene Krag; Schmitz, Alexander; Nyegaard, Mette; Johnsen, Hans Erik; Dybkær, Karen; Bøgsted, Martin

    2014-06-05

    In vitro generated dose-response curves of human cancer cell lines are widely used to develop new therapeutics. The curves are summarised by simplified statistics that ignore the conventionally used dose-response curves' dependency on drug exposure time and growth kinetics. This may lead to suboptimal exploitation of data and biased conclusions on the potential of the drug in question. Therefore we set out to improve the dose-response assessments by eliminating the impact of time dependency. First, a mathematical model for drug induced cell growth inhibition was formulated and used to derive novel dose-response curves and improved summary statistics that are independent of time under the proposed model. Next, a statistical analysis workflow for estimating the improved statistics was suggested consisting of 1) nonlinear regression models for estimation of cell counts and doubling times, 2) isotonic regression for modelling the suggested dose-response curves, and 3) resampling based method for assessing variation of the novel summary statistics. We document that conventionally used summary statistics for dose-response experiments depend on time so that fast growing cell lines compared to slowly growing ones are considered overly sensitive. The adequacy of the mathematical model is tested for doxorubicin and found to fit real data to an acceptable degree. Dose-response data from the NCI60 drug screen were used to illustrate the time dependency and demonstrate an adjustment correcting for it. The applicability of the workflow was illustrated by simulation and application on a doxorubicin growth inhibition screen. The simulations show that under the proposed mathematical model the suggested statistical workflow results in unbiased estimates of the time independent summary statistics. Variance estimates of the novel summary statistics are used to conclude that the doxorubicin screen covers a significant diverse range of responses ensuring it is useful for biological interpretations. Time independent summary statistics may aid the understanding of drugs' action mechanism on tumour cells and potentially renew previous drug sensitivity evaluation studies.

  20. Exposure time independent summary statistics for assessment of drug dependent cell line growth inhibition

    PubMed Central

    2014-01-01

    Background In vitro generated dose-response curves of human cancer cell lines are widely used to develop new therapeutics. The curves are summarised by simplified statistics that ignore the conventionally used dose-response curves’ dependency on drug exposure time and growth kinetics. This may lead to suboptimal exploitation of data and biased conclusions on the potential of the drug in question. Therefore we set out to improve the dose-response assessments by eliminating the impact of time dependency. Results First, a mathematical model for drug induced cell growth inhibition was formulated and used to derive novel dose-response curves and improved summary statistics that are independent of time under the proposed model. Next, a statistical analysis workflow for estimating the improved statistics was suggested consisting of 1) nonlinear regression models for estimation of cell counts and doubling times, 2) isotonic regression for modelling the suggested dose-response curves, and 3) resampling based method for assessing variation of the novel summary statistics. We document that conventionally used summary statistics for dose-response experiments depend on time so that fast growing cell lines compared to slowly growing ones are considered overly sensitive. The adequacy of the mathematical model is tested for doxorubicin and found to fit real data to an acceptable degree. Dose-response data from the NCI60 drug screen were used to illustrate the time dependency and demonstrate an adjustment correcting for it. The applicability of the workflow was illustrated by simulation and application on a doxorubicin growth inhibition screen. The simulations show that under the proposed mathematical model the suggested statistical workflow results in unbiased estimates of the time independent summary statistics. Variance estimates of the novel summary statistics are used to conclude that the doxorubicin screen covers a significant diverse range of responses ensuring it is useful for biological interpretations. Conclusion Time independent summary statistics may aid the understanding of drugs’ action mechanism on tumour cells and potentially renew previous drug sensitivity evaluation studies. PMID:24902483
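
    Step 2 of the workflow described in the two records above, the monotone dose-response fit, can be sketched with scikit-learn's isotonic regression. The dose grid, triplicate design, and response values below are hypothetical, and steps 1 and 3 of the workflow are omitted:

```python
# Monotone (isotonic) fit of relative growth versus dose; data hypothetical.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(5)
dose = np.repeat(np.logspace(-2, 1, 8), 3)                 # uM, triplicates
resp = 1.0 / (1.0 + (dose / 0.5) ** 1.2) + rng.normal(0, 0.04, dose.size)

# Growth inhibition should not increase response with dose, so fit a
# decreasing step function bounded to a plausible response range.
iso = IsotonicRegression(increasing=False, y_min=0.0, y_max=1.2)
fitted = iso.fit_transform(dose, resp)
print(np.column_stack([dose, fitted])[:8])
```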

  1. Rasch fit statistics and sample size considerations for polytomous data.

    PubMed

    Smith, Adam B; Rush, Robert; Fallowfield, Lesley J; Velikova, Galina; Sharpe, Michael

    2008-05-29

    Previous research on educational data has demonstrated that Rasch fit statistics (mean squares and t-statistics) are highly susceptible to sample size variation for dichotomously scored rating data, although little is known about this relationship for polytomous data. These statistics help inform researchers about how well items fit to a unidimensional latent trait, and are an important adjunct to modern psychometrics. Given the increasing use of Rasch models in health research the purpose of this study was therefore to explore the relationship between fit statistics and sample size for polytomous data. Data were collated from a heterogeneous sample of cancer patients (n = 4072) who had completed both the Patient Health Questionnaire - 9 and the Hospital Anxiety and Depression Scale. Ten samples were drawn with replacement for each of eight sample sizes (n = 25 to n = 3200). The Rating and Partial Credit Models were applied and the mean square and t-fit statistics (infit/outfit) derived for each model. The results demonstrated that t-statistics were highly sensitive to sample size, whereas mean square statistics remained relatively stable for polytomous data. It was concluded that mean square statistics were relatively independent of sample size for polytomous data and that misfit to the model could be identified using published recommended ranges.

  2. Rasch fit statistics and sample size considerations for polytomous data

    PubMed Central

    Smith, Adam B; Rush, Robert; Fallowfield, Lesley J; Velikova, Galina; Sharpe, Michael

    2008-01-01

    Background Previous research on educational data has demonstrated that Rasch fit statistics (mean squares and t-statistics) are highly susceptible to sample size variation for dichotomously scored rating data, although little is known about this relationship for polytomous data. These statistics help inform researchers about how well items fit to a unidimensional latent trait, and are an important adjunct to modern psychometrics. Given the increasing use of Rasch models in health research the purpose of this study was therefore to explore the relationship between fit statistics and sample size for polytomous data. Methods Data were collated from a heterogeneous sample of cancer patients (n = 4072) who had completed both the Patient Health Questionnaire – 9 and the Hospital Anxiety and Depression Scale. Ten samples were drawn with replacement for each of eight sample sizes (n = 25 to n = 3200). The Rating and Partial Credit Models were applied and the mean square and t-fit statistics (infit/outfit) derived for each model. Results The results demonstrated that t-statistics were highly sensitive to sample size, whereas mean square statistics remained relatively stable for polytomous data. Conclusion It was concluded that mean square statistics were relatively independent of sample size for polytomous data and that misfit to the model could be identified using published recommended ranges. PMID:18510722

  3. Making statistical inferences about software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1988-01-01

    Failure times of software undergoing random debugging can be modelled as order statistics of independent but nonidentically distributed exponential random variables. Using this model inferences can be made about current reliability and, if debugging continues, future reliability. This model also shows the difficulty inherent in statistical verification of very highly reliable software such as that used by digital avionics in commercial aircraft.

  4. Using the Expectancy Value Model of Motivation to Understand the Relationship between Student Attitudes and Achievement in Statistics

    ERIC Educational Resources Information Center

    Hood, Michelle; Creed, Peter A.; Neumann, David L.

    2012-01-01

    We tested a model of the relationship between attitudes toward statistics and achievement based on Eccles' Expectancy Value Model (1983). Participants (n = 149; 83% female) were second-year Australian university students in a psychology statistics course (mean age = 23.36 years, SD = 7.94 years). We obtained demographic details, past performance,…

  5. A consistent framework for Horton regression statistics that leads to a modified Hack's law

    USGS Publications Warehouse

    Furey, P.R.; Troutman, B.M.

    2008-01-01

    A statistical framework is introduced that resolves important problems with the interpretation and use of traditional Horton regression statistics. The framework is based on a univariate regression model that leads to an alternative expression for the Horton ratio, connects Horton regression statistics to distributional simple scaling, and improves the accuracy in estimating Horton plot parameters. The model is used to examine data for drainage area A and mainstream length L from two groups of basins located in different physiographic settings. Results show that confidence intervals for the Horton plot regression statistics are quite wide. Nonetheless, an analysis of covariance shows that regression intercepts, but not regression slopes, can be used to distinguish between basin groups. The univariate model is generalized to include n > 1 dependent variables. For the case where the dependent variables represent ln A and ln L, the generalized model performs somewhat better at distinguishing between basin groups than two separate univariate models. The generalized model leads to a modification of Hack's law where L depends on both A and Strahler order ω. Data show that ω plays a statistically significant role in the modified Hack's law expression. © 2008 Elsevier B.V.
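
    The regressions behind Hack's law and its modification have a simple form. A sketch on synthetic basin data (all coefficients chosen arbitrarily): regress ln L on ln A alone, then add Strahler order ω as a second regressor:

```python
# Classic vs. modified Hack's law regressions on hypothetical basin data.
import numpy as np

rng = np.random.default_rng(6)
n = 40
logA = rng.uniform(0, 6, n)                       # ln drainage area
omega = rng.integers(1, 7, n)                     # Strahler order
logL = 0.1 + 0.57 * logA + 0.05 * omega + rng.normal(0, 0.1, n)

X1 = np.column_stack([np.ones(n), logA])          # classic Hack's law
X2 = np.column_stack([np.ones(n), logA, omega])   # modified Hack's law
b1, *_ = np.linalg.lstsq(X1, logL, rcond=None)
b2, *_ = np.linalg.lstsq(X2, logL, rcond=None)
print("Hack exponent:", b1[1])
print("with Strahler order:", b2[1:])
```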

  6. The statistical average of optical properties for alumina particle cluster in aircraft plume

    NASA Astrophysics Data System (ADS)

    Li, Jingying; Bai, Lu; Wu, Zhensen; Guo, Lixin

    2018-04-01

    We establish a lognormal distribution model for the monomer radius and the number of monomers of alumina particle clusters in a plume. According to the Multi-Sphere T-Matrix (MSTM) theory, we provide a method for finding the statistical average of the optical properties of alumina particle clusters in a plume, analyze the effect of different distributions and different detection wavelengths on this statistical average, and compare the statistical average optical properties under the alumina particle cluster model established in this study with those under three simplified alumina particle models. The calculation results show that the number of monomers in an alumina particle cluster and its size distribution have a considerable effect on its statistical average optical properties. The statistical averages of the optical properties at common detection wavelengths exhibit clear differences, which strongly affect the modeling of the IR and UV radiation properties of the plume. Compared with the three simplified models, the alumina particle cluster model presented here features both higher extinction and scattering efficiencies. We may therefore conclude that an accurate description of the scattering properties of alumina particles in an aircraft plume is of great significance for the study of plume radiation properties.

  7. Comparisons between physics-based, engineering, and statistical learning models for outdoor sound propagation.

    PubMed

    Hart, Carl R; Reznicek, Nathan J; Wilson, D Keith; Pettit, Chris L; Nykaza, Edward T

    2016-05-01

    Many outdoor sound propagation models exist, ranging from highly complex physics-based simulations to simplified engineering calculations, and more recently, highly flexible statistical learning methods. Several engineering and statistical learning models are evaluated by using a particular physics-based model, namely, a Crank-Nicholson parabolic equation (CNPE), as a benchmark. Narrowband transmission loss values predicted with the CNPE, based upon a simulated data set of meteorological, boundary, and source conditions, act as simulated observations. In the simulated data set sound propagation conditions span from downward refracting to upward refracting, for acoustically hard and soft boundaries, and low frequencies. Engineering models used in the comparisons include the ISO 9613-2 method, Harmonoise, and Nord2000 propagation models. Statistical learning methods used in the comparisons include bagged decision tree regression, random forest regression, boosting regression, and artificial neural network models. Computed skill scores are relative to sound propagation in a homogeneous atmosphere over a rigid ground. Overall skill scores for the engineering noise models are 0.6%, -7.1%, and 83.8% for the ISO 9613-2, Harmonoise, and Nord2000 models, respectively. Overall skill scores for the statistical learning models are 99.5%, 99.5%, 99.6%, and 99.6% for bagged decision tree, random forest, boosting, and artificial neural network regression models, respectively.

  8. Watershed Regressions for Pesticides (WARP) models for predicting stream concentrations of multiple pesticides

    USGS Publications Warehouse

    Stone, Wesley W.; Crawford, Charles G.; Gilliom, Robert J.

    2013-01-01

    Watershed Regressions for Pesticides for multiple pesticides (WARP-MP) are statistical models developed to predict concentration statistics for a wide range of pesticides in unmonitored streams. The WARP-MP models use the national atrazine WARP models in conjunction with an adjustment factor for each additional pesticide. The WARP-MP models perform best for pesticides with application timing and methods similar to those used with atrazine. For other pesticides, WARP-MP models tend to overpredict concentration statistics for the model development sites. For WARP and WARP-MP, the less-than-ideal sampling frequency for the model development sites leads to underestimation of the shorter-duration concentration; hence, the WARP models tend to underpredict 4- and 21-d maximum moving-average concentrations, with median errors ranging from 9 to 38%. As a result of this sampling bias, pesticides that performed well with the model development sites are expected to have predictions that are biased low for these shorter-duration concentration statistics. The overprediction by WARP-MP apparent for some of the pesticides is variably offset by underestimation of the model development concentration statistics. Of the 112 pesticides used in the WARP-MP application to stream segments nationwide, 25 were predicted to have concentration statistics with a 50% or greater probability of exceeding one or more aquatic life benchmarks in one or more stream segments. Geographically, many of the modeled streams in the Corn Belt Region were predicted to have one or more pesticides that exceeded an aquatic life benchmark during 2009, indicating the potential vulnerability of streams in this region.

  9. An adaptive state of charge estimation approach for lithium-ion series-connected battery system

    NASA Astrophysics Data System (ADS)

    Peng, Simin; Zhu, Xuelai; Xing, Yinjiao; Shi, Hongbing; Cai, Xu; Pecht, Michael

    2018-07-01

    Due to the incorrect or unknown noise statistics of a battery system and its cell-to-cell variations, state of charge (SOC) estimation of a lithium-ion series-connected battery system is usually inaccurate or even divergent using model-based methods, such as the extended Kalman filter (EKF) and unscented Kalman filter (UKF). To resolve this problem, an adaptive unscented Kalman filter (AUKF) based on a noise statistics estimator and a model parameter regulator is developed to accurately estimate the SOC of a series-connected battery system. An equivalent circuit model is first built based on the model parameter regulator, which captures the influence of cell-to-cell variation on the battery system. A noise statistics estimator is then used to adaptively obtain the estimated noise statistics for the AUKF when its prior noise statistics are not accurate or not exactly Gaussian. The accuracy and effectiveness of the SOC estimation method are validated by comparing the developed AUKF with the UKF when the model and measurement noise statistics are inaccurate. Compared with the UKF and EKF, the developed method shows the highest SOC estimation accuracy.

  10. A scan statistic for binary outcome based on hypergeometric probability model, with an application to detecting spatial clusters of Japanese encephalitis.

    PubMed

    Zhao, Xing; Zhou, Xiao-Hua; Feng, Zijian; Guo, Pengfei; He, Hongyan; Zhang, Tao; Duan, Lei; Li, Xiaosong

    2013-01-01

    As a useful tool for geographical cluster detection of events, the spatial scan statistic is widely applied in many fields and plays an increasingly important role. The classic version of the spatial scan statistic for a binary outcome was developed by Kulldorff, based on the Bernoulli or the Poisson probability model. In this paper, we apply the hypergeometric probability model to construct the likelihood function under the null hypothesis. Compared with existing methods, this likelihood function is an alternative, indirect way to identify the potential cluster, and the test statistic is the extreme value of the likelihood function. As in Kulldorff's methods, we adopt a Monte Carlo test of significance. Both methods are applied to detect spatial clusters of Japanese encephalitis in Sichuan province, China, in 2009, and the detected clusters are identical. Through simulations on independent benchmark data, we show that the test statistic based on the hypergeometric model outperforms Kulldorff's statistics for clusters of high population density or large size; otherwise Kulldorff's statistics are superior.
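
    A hedged sketch of the scan idea: score every candidate window by the hypergeometric log-likelihood of its case count under the null, take the extreme window as the test statistic, and calibrate it by Monte Carlo. Windows here are contiguous runs of regions rather than geographic circles, and all counts are simulated:

```python
# Hypergeometric scan sketch with a Monte Carlo significance test.
import numpy as np
from scipy.stats import hypergeom

rng = np.random.default_rng(7)
pop = rng.integers(500, 5000, 20)              # region populations
cases = rng.binomial(pop, 0.01)
cases[8:11] += rng.binomial(pop[8:11], 0.03)   # implanted cluster
N, C = int(pop.sum()), int(cases.sum())

def scan_stat(cases):
    """Log-likelihood of the least likely window's case count under H0."""
    best = np.inf
    for i in range(len(pop)):
        for j in range(i + 1, len(pop) + 1):
            n_in, c_in = int(pop[i:j].sum()), int(cases[i:j].sum())
            best = min(best, hypergeom.logpmf(c_in, N, C, n_in))
    return best

obs = scan_stat(cases)
# Null: the C cases are distributed among regions in proportion to population.
null = [scan_stat(rng.multivariate_hypergeometric(pop, C)) for _ in range(199)]
print("p =", (1 + sum(s <= obs for s in null)) / 200)
```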

  11. Statistical Reform in School Psychology Research: A Synthesis

    ERIC Educational Resources Information Center

    Swaminathan, Hariharan; Rogers, H. Jane

    2007-01-01

    Statistical reform in school psychology research is discussed in terms of research designs, measurement issues, statistical modeling and analysis procedures, interpretation and reporting of statistical results, and finally statistics education.

  12. Nonelastic nuclear reactions and accompanying gamma radiation

    NASA Technical Reports Server (NTRS)

    Snow, R.; Rosner, H. R.; George, M. C.; Hayes, J. D.

    1971-01-01

    Several aspects of nonelastic nuclear reactions which proceed through the formation of a compound nucleus are dealt with. The full statistical model and the partial statistical model are described, and computer programs based on these models are presented along with operating instructions and input and output for sample problems. A theoretical development of the expression for the reaction cross section for the hybrid case, which involves a combination of the continuum aspects of the full statistical model with the discrete level aspects of the partial statistical model, is presented. Cross sections for level excitation and gamma production by neutron inelastic scattering from the nuclei Al-27, Fe-56, Si-28, and Pb-208 are calculated and compared with available experimental data.

  13. Multiple commodities in statistical microeconomics: Model and market

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Yu, Miao; Du, Xin

    2016-11-01

    A statistical generalization of microeconomics was made in Baaquie (2013). In Baaquie et al. (2015), the market behavior of single commodities was analyzed, and it was shown that market data provide strong support for the statistical microeconomic description of commodity prices. Here the case of multiple commodities is studied, and a parsimonious generalization of the single-commodity model is made for the multiple-commodity case. Market data show that the generalization can accurately model the simultaneous correlation functions of up to four commodities. To accurately model five or more commodities, further terms have to be included in the model. This study shows that the statistical microeconomics approach is a comprehensive and complete formulation of microeconomics, one that is independent of the mainstream formulation of microeconomics.

  14. Statistical model specification and power: recommendations on the use of test-qualified pooling in analysis of experimental data

    PubMed Central

    Colegrave, Nick

    2017-01-01

    A common approach to the analysis of experimental data across much of the biological sciences is test-qualified pooling. Here non-significant terms are dropped from a statistical model, effectively pooling the variation associated with each removed term with the error term used to test hypotheses (or estimate effect sizes). This pooling is only carried out if statistical testing on the basis of applying that data to a previous more complicated model provides motivation for this model simplification; hence the pooling is test-qualified. In pooling, the researcher increases the degrees of freedom of the error term with the aim of increasing statistical power to test their hypotheses of interest. Despite this approach being widely adopted and explicitly recommended by some of the most widely cited statistical textbooks aimed at biologists, here we argue that (except in highly specialized circumstances that we can identify) the hoped-for improvement in statistical power will be small or non-existent, and there is likely to be much reduced reliability of the statistical procedures through deviation of type I error rates from nominal levels. We thus call for greatly reduced use of test-qualified pooling across experimental biology, more careful justification of any use that continues, and a different philosophy for initial selection of statistical models in the light of this change in procedure. PMID:28330912

  15. A statistical model of operational impacts on the framework of the bridge crane

    NASA Astrophysics Data System (ADS)

    Antsev, V. Yu; Tolokonnikov, A. S.; Gorynin, A. D.; Reutov, A. A.

    2017-02-01

    The technical regulations of the Customs Union demand implementation of risk analysis of bridge crane operation at the design stage. A statistical model has been developed for performing random calculations of risks, allowing us to model possible operational influences on the bridge crane metal structure in various combinations. The statistical model is implemented in a software product for automated calculation of the risk of bridge crane failure.

  16. Statistical learning and probabilistic prediction in music cognition: mechanisms of stylistic enculturation.

    PubMed

    Pearce, Marcus T

    2018-05-11

    Music perception depends on internal psychological models derived through exposure to a musical culture. It is hypothesized that this musical enculturation depends on two cognitive processes: (1) statistical learning, in which listeners acquire internal cognitive models of statistical regularities present in the music to which they are exposed; and (2) probabilistic prediction based on these learned models that enables listeners to organize and process their mental representations of music. To corroborate these hypotheses, I review research that uses a computational model of probabilistic prediction based on statistical learning (the information dynamics of music (IDyOM) model) to simulate data from empirical studies of human listeners. The results show that a broad range of psychological processes involved in music perception (expectation, emotion, memory, similarity, segmentation, and meter) can be understood in terms of a single, underlying process of probabilistic prediction using learned statistical models. Furthermore, IDyOM simulations of listeners from different musical cultures demonstrate that statistical learning can plausibly predict causal effects of differential cultural exposure to musical styles, providing a quantitative model of cultural distance. Understanding the neural basis of musical enculturation will benefit from close coordination between empirical neuroimaging and computational modeling of underlying mechanisms, as outlined here. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of New York Academy of Sciences.

  17. Statistical analysis of water-quality data containing multiple detection limits: S-language software for regression on order statistics

    USGS Publications Warehouse

    Lee, L.; Helsel, D.

    2005-01-01

    Trace contaminants in water, including metals and organics, often are measured at sufficiently low concentrations to be reported only as values below the instrument detection limit. Interpretation of these "less thans" is complicated when multiple detection limits occur. Statistical methods for multiply censored, or multiple-detection limit, datasets have been developed for medical and industrial statistics, and can be employed to estimate summary statistics or model the distributions of trace-level environmental data. We describe S-language-based software tools that perform robust linear regression on order statistics (ROS). The ROS method has been evaluated as one of the most reliable procedures for developing summary statistics of multiply censored data. It is applicable to any dataset that has 0 to 80% of its values censored. These tools are a part of a software library, or add-on package, for the R environment for statistical computing. This library can be used to generate ROS models and associated summary statistics, plot modeled distributions, and predict exceedance probabilities of water-quality standards. © 2005 Elsevier Ltd. All rights reserved.
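
    The core ROS computation can be sketched compactly. This is a simplified single-pass version (Blom plotting positions, one regression over the detected values) rather than the full Helsel-Cohn procedure implemented in the library described above, and the data are hypothetical:

```python
# Simplified regression-on-order-statistics (ROS) for censored data:
# regress log detected values on normal quantiles, impute nondetects.
import numpy as np
from scipy.stats import norm

vals = np.array([0.5, 0.5, 1.0, 1.2, 2.0, 3.5, 5.0, 8.0])  # 0.5 = detection limit
cens = np.array([True, True, False, False, False, False, False, False])

n = vals.size
order = np.argsort(vals)
pp = (np.arange(1, n + 1) - 0.375) / (n + 0.25)   # Blom plotting positions
q = norm.ppf(pp)                                  # corresponding normal quantiles

det_q = q[~cens[order]]                           # quantiles at detected ranks
det_y = np.log(np.sort(vals[~cens]))
slope, intercept = np.polyfit(det_q, det_y, 1)    # fit the detected values only

imputed = np.exp(intercept + slope * q[cens[order]])   # modeled nondetects
full = np.concatenate([imputed, vals[~cens]])
print("ROS mean:", full.mean(), "ROS std:", full.std(ddof=1))
```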

  18. Spatial Statistical Network Models for Stream and River Temperature in the Chesapeake Bay Watershed, USA

    EPA Science Inventory

    Regional temperature models are needed for characterizing and mapping stream thermal regimes, establishing reference conditions, predicting future impacts and identifying critical thermal refugia. Spatial statistical models have been developed to improve regression modeling techn...

  19. Statistical models and NMR analysis of polymer microstructure

    USDA-ARS?s Scientific Manuscript database

    Statistical models can be used in conjunction with NMR spectroscopy to study polymer microstructure and polymerization mechanisms. Thus, Bernoullian, Markovian, and enantiomorphic-site models are well known. Many additional models have been formulated over the years for additional situations. Typica...

  20. Use of statistical and neural net approaches in predicting toxicity of chemicals.

    PubMed

    Basak, S C; Grunwald, G D; Gute, B D; Balasubramanian, K; Opitz, D

    2000-01-01

    Hierarchical quantitative structure-activity relationships (H-QSAR) have been developed as a new approach in constructing models for estimating physicochemical, biomedicinal, and toxicological properties of interest. This approach uses increasingly more complex molecular descriptors in a graduated approach to model building. In this study, statistical and neural network methods have been applied to the development of H-QSAR models for estimating the acute aquatic toxicity (LC50) of 69 benzene derivatives to Pimephales promelas (fathead minnow). Topostructural, topochemical, geometrical, and quantum chemical indices were used as the four levels of the hierarchical method. It is clear from both the statistical and neural network models that topostructural indices alone cannot adequately model this set of congeneric chemicals. Not surprisingly, topochemical indices greatly increase the predictive power of both statistical and neural network models. Quantum chemical indices also add significantly to the modeling of this set of acute aquatic toxicity data.

  1. Fast, Statistical Model of Surface Roughness for Ion-Solid Interaction Simulations and Efficient Code Coupling

    NASA Astrophysics Data System (ADS)

    Drobny, Jon; Curreli, Davide; Ruzic, David; Lasa, Ane; Green, David; Canik, John; Younkin, Tim; Blondel, Sophie; Wirth, Brian

    2017-10-01

    Surface roughness greatly impacts material erosion, and thus plays an important role in Plasma-Surface Interactions. Developing strategies for efficiently introducing rough surfaces into ion-solid interaction codes will be an important step towards whole-device modeling of plasma devices and future fusion reactors such as ITER. Fractal TRIDYN (F-TRIDYN) is an upgraded version of the Monte Carlo, BCA program TRIDYN developed for this purpose that includes an explicit fractal model of surface roughness and extended input and output options for file-based code coupling. Code coupling with both plasma and material codes has been achieved and allows for multi-scale, whole-device modeling of plasma experiments. These code coupling results will be presented. F-TRIDYN has been further upgraded with an alternative, statistical model of surface roughness. The statistical model is significantly faster than and compares favorably to the fractal model. Additionally, the statistical model compares well to alternative computational surface roughness models and experiments. Theoretical links between the fractal and statistical models are made, and further connections to experimental measurements of surface roughness are explored. This work was supported by the PSI-SciDAC Project funded by the U.S. Department of Energy through contract DOE-DE-SC0008658.

  2. Comparison of the predictive validity of diagnosis-based risk adjusters for clinical outcomes.

    PubMed

    Petersen, Laura A; Pietz, Kenneth; Woodard, LeChauncy D; Byrne, Margaret

    2005-01-01

    Many possible methods of risk adjustment exist, but there is a dearth of comparative data on their performance. We compared the predictive validity of 2 widely used methods (Diagnostic Cost Groups [DCGs] and Adjusted Clinical Groups [ACGs]) for 2 clinical outcomes using a large national sample of patients. We studied all patients who used Veterans Health Administration (VA) medical services in fiscal year (FY) 2001 (n = 3,069,168) and assigned both a DCG and an ACG to each. We used logistic regression analyses to compare predictive ability for death or long-term care (LTC) hospitalization for age/gender models, DCG models, and ACG models. We also assessed the effect of adding age to the DCG and ACG models. Patients in the highest DCG categories, indicating higher severity of illness, were more likely to die or to require LTC hospitalization. Surprisingly, the age/gender model predicted death slightly more accurately than the ACG model (c-statistic of 0.710 versus 0.700, respectively). The addition of age to the ACG model improved the c-statistic to 0.768. The highest c-statistic for prediction of death was obtained with a DCG/age model (0.830). The lowest c-statistics were obtained for age/gender models for LTC hospitalization (c-statistic 0.593). The c-statistic for use of ACGs to predict LTC hospitalization was 0.783, and improved to 0.792 with the addition of age. The c-statistics for use of DCGs and DCG/age to predict LTC hospitalization were 0.885 and 0.890, respectively, indicating the best prediction. We found that risk adjusters based upon diagnoses predicted an increased likelihood of death or LTC hospitalization, exhibiting good predictive validity. In this comparative analysis using VA data, DCG models were generally superior to ACG models in predicting clinical outcomes, although ACG model performance was enhanced by the addition of age.
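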
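
    A sketch of the general procedure (simulated data; scikit-learn stands in for whatever software the authors used): fit logistic models with and without age and compare c-statistics, i.e. areas under the ROC curve:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        # Hypothetical data: a risk-adjuster category plus age, binary death.
        rng = np.random.default_rng(0)
        n = 5000
        age = rng.uniform(40, 90, n)
        risk_cat = rng.integers(0, 10, n)         # stand-in for DCG/ACG class
        p = 1 / (1 + np.exp(-(-8 + 0.06 * age + 0.2 * risk_cat)))
        death = rng.binomial(1, p)

        for name, X in [("risk only", risk_cat[:, None]),
                        ("risk + age", np.column_stack([risk_cat, age]))]:
            model = LogisticRegression(max_iter=1000).fit(X, death)
            auc = roc_auc_score(death, model.predict_proba(X)[:, 1])
            print(f"{name}: c-statistic = {auc:.3f}")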

  3. Results of the Verification of the Statistical Distribution Model of Microseismicity Emission Characteristics

    NASA Astrophysics Data System (ADS)

    Cianciara, Aleksander

    2016-09-01

    The paper presents the results of research aimed at verifying the hypothesis that the Weibull distribution is an appropriate statistical model of microseismic emission characteristics, namely the energy of phenomena and the inter-event time. The emission under consideration is assumed to be induced by natural rock mass fracturing. Because the recorded emission contains noise, it is first subjected to appropriate filtering. The study was conducted using statistical verification of the null hypothesis that the Weibull distribution fits the empirical cumulative distribution function. Since the model cumulative distribution function is given in analytical form, its verification can be performed with the Kolmogorov-Smirnov goodness-of-fit test. Specifying a correct model of the statistical distribution of the data matters because probabilistic interpretation methods use not the measurement data directly but their statistical distributions, e.g., in methods based on hazard analysis or on maximum value statistics.
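
    A minimal sketch of this verification step, with simulated inter-event times standing in for filtered microseismic data:

        import numpy as np
        from scipy import stats

        # Simulated inter-event times (illustrative stand-in for real data).
        rng = np.random.default_rng(1)
        times = rng.weibull(1.4, 500) * 30.0

        # Fit a two-parameter Weibull (location fixed at zero), then test fit.
        shape, loc, scale = stats.weibull_min.fit(times, floc=0)
        stat, pval = stats.kstest(times, "weibull_min", args=(shape, loc, scale))
        # Caveat: estimating parameters from the same data makes the KS
        # p-value approximate (a Lilliefors-type correction is stricter).
        print(f"shape={shape:.2f} scale={scale:.2f} D={stat:.3f} p={pval:.3f}")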

  4. Statistical Surrogate Modeling of Atmospheric Dispersion Events Using Bayesian Adaptive Splines

    NASA Astrophysics Data System (ADS)

    Francom, D.; Sansó, B.; Bulaevskaya, V.; Lucas, D. D.

    2016-12-01

    Uncertainty in the inputs of complex computer models, including atmospheric dispersion and transport codes, is often assessed via statistical surrogate models. Surrogate models are computationally efficient statistical approximations of expensive computer models that enable uncertainty analysis. We introduce Bayesian adaptive spline methods for producing surrogate models that capture the major spatiotemporal patterns of the parent model, while satisfying all the necessities of flexibility, accuracy and computational feasibility. We present novel methodological and computational approaches motivated by a controlled atmospheric tracer release experiment conducted at the Diablo Canyon nuclear power plant in California. Traditional methods for building statistical surrogate models often do not scale well to experiments with large amounts of data. Our approach is well suited to experiments involving large numbers of model inputs, large numbers of simulations, and functional output for each simulation. Our approach allows us to perform global sensitivity analysis with ease. We also present an approach to calibration of simulators using field data.

  5. Modeling Soot Oxidation and Gasification with Bayesian Statistics

    DOE PAGES

    Josephson, Alexander J.; Gaffin, Neal D.; Smith, Sean T.; ...

    2017-08-22

    This paper presents a statistical method for model calibration using data collected from the literature. The method is used to calibrate parameters for global models of soot consumption in combustion systems. This consumption is broken into two submodels: first, oxidation, where soot particles are attacked by certain oxidizing agents; second, gasification, where soot particles are attacked by H2O or CO2 molecules. Rate data were collected from 19 studies in the literature and evaluated using Bayesian statistics to calibrate the model parameters. Bayesian statistics are valued for their ability to quantify uncertainty in modeling. The calibrated consumption model with quantified uncertainty is presented here along with a discussion of associated implications. The oxidation results are found to be consistent with previous studies. Significant variation is found in the CO2 gasification rates.
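
    The paper's calibration machinery is not reproduced here, but the flavor of Bayesian rate-parameter calibration can be sketched with a toy random-walk Metropolis sampler for an Arrhenius-type rate constant (all numbers and priors illustrative):

        import numpy as np

        # Toy calibration of k(T) = A * exp(-E / (R * T)) against
        # literature-style rate data with lognormal scatter.
        rng = np.random.default_rng(2)
        R = 8.314
        T = np.linspace(1200, 2000, 20)
        A_true, E_true = 1e5, 1.6e5
        k_obs = A_true * np.exp(-E_true / (R * T)) * rng.lognormal(0, 0.2, T.size)

        def log_post(logA, E):
            # Flat priors over a broad range; lognormal measurement error.
            if not (0 < logA < 20 and 1e4 < E < 1e6):
                return -np.inf
            k = np.exp(logA) * np.exp(-E / (R * T))
            return -0.5 * np.sum(((np.log(k_obs) - np.log(k)) / 0.2) ** 2)

        # Random-walk Metropolis over (log A, E).
        x = np.array([10.0, 1e5])
        lp = log_post(*x)
        chain = []
        for _ in range(20000):
            prop = x + rng.normal(0, [0.05, 2e3])
            lp_new = log_post(*prop)
            if np.log(rng.random()) < lp_new - lp:
                x, lp = prop, lp_new
            chain.append(x.copy())
        chain = np.array(chain[5000:])            # discard burn-in
        print(np.exp(chain[:, 0].mean()), chain[:, 1].mean())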

  7. Development and evaluation of statistical shape modeling for principal inner organs on torso CT images.

    PubMed

    Zhou, Xiangrong; Xu, Rui; Hara, Takeshi; Hirano, Yasushi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Kido, Shoji; Fujita, Hiroshi

    2014-07-01

    The shapes of the inner organs are important information for medical image analysis. Statistical shape modeling provides a way of quantifying and measuring shape variations of the inner organs in different patients. In this study, we developed a universal scheme that can be used for building statistical shape models for different inner organs efficiently. This scheme combines traditional point distribution modeling with a group-wise optimization method based on a measure called minimum description length to provide a practical means for 3D organ shape modeling. In experiments, the proposed scheme was applied to the building of five statistical shape models for hearts, livers, spleens, and right and left kidneys by use of 50 cases of 3D torso CT images. The performance of these models was evaluated by three measures: model compactness, model generalization, and model specificity. The experimental results showed that the constructed shape models have good "compactness" and satisfactory "generalization" performance for different organ shape representations; however, the "specificity" of these models should be improved in the future.
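
    At its core, the point distribution side of such a scheme is PCA over corresponded landmarks; a minimal sketch (random stand-in data; the MDL group-wise correspondence step is omitted):

        import numpy as np

        # shapes: (n_cases, n_points * 3) matrix of aligned, corresponded
        # landmark coordinates; random data stands in for 50 CT cases.
        rng = np.random.default_rng(3)
        shapes = rng.normal(0, 1, (50, 300))

        mean_shape = shapes.mean(axis=0)
        X = shapes - mean_shape
        # PCA via SVD; principal modes describe shape variation across cases.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        var = s**2 / (len(shapes) - 1)

        # Compactness: cumulative variance captured by the first m modes.
        compactness = np.cumsum(var) / var.sum()
        print(compactness[:5])

        # A new shape instance: mean plus a weighted sum of leading modes.
        b = np.array([2.0, -1.0])                 # mode weights
        instance = mean_shape + b @ Vt[:2]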

  8. Statistical Design Model (SDM) of satellite thermal control subsystem

    NASA Astrophysics Data System (ADS)

    Mirshams, Mehran; Zabihian, Ehsan; Aarabi Chamalishahi, Mahdi

    2016-07-01

    A satellite's thermal control subsystem has the main task of keeping the satellite's components within their survival and operating temperature ranges. The capability of thermal control plays a key role in satisfying a satellite's operational requirements, and designing this subsystem is an integral part of satellite design. On the other hand, owing to the limited information released by companies and designers, this fundamental subsystem still lacks a specific design process. The aim of this paper is to identify and extract statistical design models of the spacecraft thermal control subsystem by using the SDM design method, which analyses statistical data with a particular procedure. Implementing the SDM method requires a complete database, so we first collect spacecraft data and create a database, then extract statistical graphs using Microsoft Excel, from which we derive mathematical models. The input parameters of the method are the mass, mission, and lifetime of the satellite. The thermal control subsystem is first introduced, and the hardware used in this subsystem and its variants is surveyed. Different statistical models are then presented and briefly compared. Finally, a particular statistical model is extracted from the collected statistical data. The accuracy of the method is tested and verified through a case study: comparison between the specifications of the thermal control subsystem of a fabricated satellite and the analysis results shows the methodology to be effective. Key words: thermal control subsystem design, statistical design model (SDM), satellite conceptual design, thermal hardware

  9. Assessing risk factors for dental caries: a statistical modeling approach.

    PubMed

    Trottini, Mario; Bossù, Maurizio; Corridore, Denise; Ierardo, Gaetano; Luzzi, Valeria; Saccucci, Matteo; Polimeni, Antonella

    2015-01-01

    The problem of identifying potential determinants and predictors of dental caries is of key importance in caries research and has received considerable attention in the scientific literature. On the methodological side, a broad range of statistical models is currently available for analyzing dental caries indices (DMFT, dmfs, etc.). These models have been applied in several studies to investigate the impact of different risk factors on the cumulative severity of dental caries experience. However, in most cases (i) these studies focus on a very specific subset of risk factors, and (ii) in the statistical modeling only a few candidate models are considered and model selection is at best only marginally addressed. As a result, our understanding of the robustness of the statistical inferences with respect to the choice of the model is very limited; the richness of the set of statistical models available for analysis is only marginally exploited; and inferences could be biased due to the omission of potentially important confounding variables in the model's specification. In this paper we argue that these limitations can be overcome by considering a general class of candidate models and carefully exploring the model space using standard model selection criteria and measures of global fit and predictive performance of the candidate models. Strengths and limitations of the proposed approach are illustrated with a real data set. In our illustration the model space contains more than 2.6 million models, which requires inferences to be adjusted for 'optimism'.

  10. Advances in statistics

    Treesearch

    Howard Stauffer; Nadav Nur

    2005-01-01

    The papers included in the Advances in Statistics section of the Partners in Flight (PIF) 2002 Proceedings represent a small sample of statistical topics of current importance to Partners In Flight research scientists: hierarchical modeling, estimation of detection probabilities, and Bayesian applications. Sauer et al. (this volume) examines a hierarchical model...

  11. Probability density function shape sensitivity in the statistical modeling of turbulent particle dispersion

    NASA Technical Reports Server (NTRS)

    Litchford, Ron J.; Jeng, San-Mou

    1992-01-01

    The performance of a recently introduced statistical transport model for turbulent particle dispersion is studied here for rigid particles injected into a round turbulent jet. Both uniform and isosceles triangle pdfs are used. The statistical sensitivity to parcel pdf shape is demonstrated.

  12. A d-statistic for single-case designs that is equivalent to the usual between-groups d-statistic.

    PubMed

    Shadish, William R; Hedges, Larry V; Pustejovsky, James E; Boyajian, Jonathan G; Sullivan, Kristynn J; Andrade, Alma; Barrientos, Jeannette L

    2014-01-01

    We describe a standardised mean difference statistic (d) for single-case designs that is equivalent to the usual d in between-groups experiments. We show how it can be used to summarise treatment effects over cases within a study, to do power analyses in planning new studies and grant proposals, and to meta-analyse effects across studies of the same question. We discuss limitations of this d-statistic, and possible remedies to them. Even so, this d-statistic is better founded statistically than other effect size measures for single-case design, and unlike many general linear model approaches such as multilevel modelling or generalised additive models, it produces a standardised effect size that can be integrated over studies with different outcome measures. SPSS macros for both effect size computation and power analysis are available.
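
    A greatly simplified sketch of a standardized mean difference for one AB single-case series; the published d additionally corrects for autocorrelation and small-sample bias (the authors provide SPSS macros for the full computation):

        import numpy as np

        # Illustrative phase A (baseline) and phase B (treatment) series.
        baseline = np.array([4, 5, 6, 5, 4, 6])
        treatment = np.array([8, 9, 7, 9, 10, 8])

        # Pooled within-phase standard deviation.
        pooled_sd = np.sqrt(
            (baseline.var(ddof=1) * (len(baseline) - 1)
             + treatment.var(ddof=1) * (len(treatment) - 1))
            / (len(baseline) + len(treatment) - 2))

        d = (treatment.mean() - baseline.mean()) / pooled_sd
        print(f"d = {d:.2f}")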

  13. Applications of spatial statistical network models to stream data

    USGS Publications Warehouse

    Isaak, Daniel J.; Peterson, Erin E.; Ver Hoef, Jay M.; Wenger, Seth J.; Falke, Jeffrey A.; Torgersen, Christian E.; Sowder, Colin; Steel, E. Ashley; Fortin, Marie-Josée; Jordan, Chris E.; Ruesch, Aaron S.; Som, Nicholas; Monestiez, Pascal

    2014-01-01

    Streams and rivers host a significant portion of Earth's biodiversity and provide important ecosystem services for human populations. Accurate information regarding the status and trends of stream resources is vital for their effective conservation and management. Most statistical techniques applied to data measured on stream networks were developed for terrestrial applications and are not optimized for streams. A new class of spatial statistical model, based on valid covariance structures for stream networks, can be used with many common types of stream data (e.g., water quality attributes, habitat conditions, biological surveys) through application of appropriate distributions (e.g., Gaussian, binomial, Poisson). The spatial statistical network models account for spatial autocorrelation (i.e., nonindependence) among measurements, which allows their application to databases with clustered measurement locations. Large amounts of stream data exist in many areas where spatial statistical analyses could be used to develop novel insights, improve predictions at unsampled sites, and aid in the design of efficient monitoring strategies at relatively low cost. We review the topic of spatial autocorrelation and its effects on statistical inference, demonstrate the use of spatial statistics with stream datasets relevant to common research and management questions, and discuss additional applications and development potential for spatial statistics on stream networks. Free software for implementing the spatial statistical network models has been developed that enables custom applications with many stream databases.

  14. Grain-Size Based Additivity Models for Scaling Multi-rate Uranyl Surface Complexation in Subsurface Sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.

    This study statistically analyzed a grain-size based additivity model that has been proposed to scale reaction rates and parameters from laboratory to field. The additivity model assumes that reaction properties in a sediment, including surface area, reactive site concentration, reaction rate, and extent, can be predicted from the field-scale grain size distribution by linearly adding the reaction properties of the individual grain size fractions. This study focused on the statistical analysis of the additivity model with respect to reaction rate constants, using multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment as an example. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of multi-rate parameters for individual grain size fractions. The statistical properties of the rate constants for the individual grain size fractions were then used to analyze the statistical properties of the additivity model to predict rate-limited U(VI) desorption in the composite sediment, and to evaluate the relative importance of individual grain size fractions to the overall U(VI) desorption. The results indicated that the additivity model provided a good prediction of the U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model, and U(VI) desorption in individual grain size fractions has to be simulated in order to apply the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The results showed that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel size fraction (2-8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
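
    The additivity idea itself is simple to state in code: a composite-sediment property is predicted by weighting grain-size-fraction properties by their mass fractions (all values illustrative, not from the study):

        import numpy as np

        # Mass fraction of each grain size fraction in the composite sediment.
        mass_frac = np.array([0.15, 0.25, 0.30, 0.20, 0.10])
        # A reaction property per fraction, e.g. reactive site concentration.
        site_conc = np.array([4.0, 2.5, 1.2, 0.6, 0.2])

        # Additivity: linear, mass-weighted sum over the fractions.
        composite = np.dot(mass_frac, site_conc)
        print(f"predicted composite site concentration: {composite:.2f}")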

  15. Statistical considerations on prognostic models for glioma

    PubMed Central

    Molinaro, Annette M.; Wrensch, Margaret R.; Jenkins, Robert B.; Eckel-Passow, Jeanette E.

    2016-01-01

    Given the lack of beneficial treatments in glioma, there is a need for prognostic models for therapeutic decision making and life planning. Recently several studies defining subtypes of glioma have been published. Here, we review the statistical considerations of how to build and validate prognostic models, explain the models presented in the current glioma literature, and discuss advantages and disadvantages of each model. The 3 statistical considerations to establishing clinically useful prognostic models are: study design, model building, and validation. Careful study design helps to ensure that the model is unbiased and generalizable to the population of interest. During model building, a discovery cohort of patients can be used to choose variables, construct models, and estimate prediction performance via internal validation. Via external validation, an independent dataset can assess how well the model performs. It is imperative that published models properly detail the study design and methods for both model building and validation. This provides readers the information necessary to assess the bias in a study, compare other published models, and determine the model's clinical usefulness. As editors, reviewers, and readers of the relevant literature, we should be cognizant of the needed statistical considerations and insist on their use. PMID:26657835

  16. Flexible statistical modelling detects clinical functional magnetic resonance imaging activation in partially compliant subjects.

    PubMed

    Waites, Anthony B; Mannfolk, Peter; Shaw, Marnie E; Olsrud, Johan; Jackson, Graeme D

    2007-02-01

    Clinical functional magnetic resonance imaging (fMRI) occasionally fails to detect significant activation, often due to variability in task performance. The present study seeks to test whether a more flexible statistical analysis can better detect activation by accounting for variance associated with variable compliance to the task over time. Experimental results and simulated data both confirm that even at 80% compliance to the task, such a flexible model outperforms standard statistical analysis when assessed using the extent of activation (experimental data), goodness of fit (experimental data), and area under the receiver operating characteristic curve (simulated data). Furthermore, retrospective examination of 14 clinical fMRI examinations reveals that in patients where the standard statistical approach yields activation, there is a measurable gain in model performance in adopting the flexible statistical model, with little or no penalty in lost sensitivity. This indicates that a flexible model should be considered, particularly for clinical patients who may have difficulty complying fully with the study task.

  17. Combining Statistics and Physics to Improve Climate Downscaling

    NASA Astrophysics Data System (ADS)

    Gutmann, E. D.; Eidhammer, T.; Arnold, J.; Nowak, K.; Clark, M. P.

    2017-12-01

    Getting useful information from climate models is an ongoing problem that has plagued climate science and hydrologic prediction for decades. While it is possible to develop statistical corrections for climate models that mimic current climate almost perfectly, this does not necessarily guarantee that future changes are portrayed correctly. In contrast, convection permitting regional climate models (RCMs) have begun to provide an excellent representation of the regional climate system purely from first principles, providing greater confidence in their change signal. However, the computational cost of such RCMs prohibits the generation of ensembles of simulations or long time periods, thus limiting their applicability for hydrologic applications. Here we discuss a new approach combining statistical corrections with physical relationships for a modest computational cost. We have developed the Intermediate Complexity Atmospheric Research model (ICAR) to provide a climate and weather downscaling option that is based primarily on physics for a fraction of the computational requirements of a traditional regional climate model. ICAR also enables the incorporation of statistical adjustments directly within the model. We demonstrate that applying even simple corrections to precipitation while the model is running can improve the simulation of land atmosphere feedbacks in ICAR. For example, by incorporating statistical corrections earlier in the modeling chain, we permit the model physics to better represent the effect of mountain snowpack on air temperature changes.

  18. Linearised and non-linearised isotherm models optimization analysis by error functions and statistical means

    PubMed Central

    2014-01-01

    In adsorption studies, describing the sorption process and identifying the best-fitting isotherm model are key steps in testing theoretical hypotheses. Numerous statistical analyses have therefore been used to assess how well predicted equilibrium values match experimental equilibrium adsorption values. In the present study, several statistical error analyses were carried out to evaluate adsorption isotherm model fitness, including the Pearson correlation, the coefficient of determination, and the Chi-square test. An ANOVA test was carried out to evaluate the significance of the various error functions, and the coefficient of dispersion was evaluated for linearised and non-linearised models. The adsorption of phenol onto a natural soil (local name: Kalathur soil) was carried out in batch mode at 30 ± 2 °C. For estimating the isotherm parameters, and to obtain a holistic view of the analysis, linear and non-linear isotherm models were compared. The results revealed which of the abovementioned error functions and statistical measures best determined the best-fitting isotherm. PMID:25018878
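
    A minimal sketch of the non-linear fitting and error-function evaluation described here (illustrative data; the Langmuir isotherm is used as the example model):

        import numpy as np
        from scipy.optimize import curve_fit

        # Illustrative equilibrium data: concentration and amount adsorbed.
        Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
        qe = np.array([0.8, 1.6, 2.5, 3.3, 3.9, 4.3])

        def langmuir(C, qmax, KL):
            return qmax * KL * C / (1 + KL * C)

        # Non-linear least-squares fit of the isotherm parameters.
        popt, _ = curve_fit(langmuir, Ce, qe, p0=[4.0, 0.1])
        pred = langmuir(Ce, *popt)

        ss_res = np.sum((qe - pred) ** 2)
        r2 = 1 - ss_res / np.sum((qe - qe.mean()) ** 2)   # coeff. of determ.
        chi2 = np.sum((qe - pred) ** 2 / pred)            # Chi-square test
        print(f"qmax={popt[0]:.2f} KL={popt[1]:.3f} R2={r2:.3f} chi2={chi2:.4f}")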

  19. Moment-Based Physical Models of Broadband Clutter due to Aggregations of Fish

    DTIC Science & Technology

    2013-09-30

    statistical models for signal-processing algorithm development. These in turn will help to develop a capability to statistically forecast the impact of...aggregations of fish based on higher-order statistical measures describable in terms of physical and system parameters. Environmentally, these models...processing. In this experiment, we had good ground truth on (1) and (2), and had control over (3) and (4) except for environmentally-imposed restrictions

  20. Interpretation of the results of statistical measurements. [search for basic probability model

    NASA Technical Reports Server (NTRS)

    Olshevskiy, V. V.

    1973-01-01

    For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional that defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters of a selected model are optimized, it is shown that the interpretation of experimental research amounts to a search for a basic probability model.

  1. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    PubMed

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, for 25-item dichotomous scales with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. The results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful for avoiding such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors, under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).

  2. Children's Services Statistical Neighbour Benchmarking Tool. Practitioner User Guide

    ERIC Educational Resources Information Center

    National Foundation for Educational Research, 2007

    2007-01-01

    Statistical neighbour models provide one method for benchmarking progress. For each local authority (LA), these models designate a number of other LAs deemed to have similar characteristics. These designated LAs are known as statistical neighbours. Any LA may compare its performance (as measured by various indicators) against its statistical…

  3. The Statistical Interpretation of Classical Thermodynamic Heating and Expansion Processes

    ERIC Educational Resources Information Center

    Cartier, Stephen F.

    2011-01-01

    A statistical model has been developed and applied to interpret thermodynamic processes typically presented from the macroscopic, classical perspective. Through this model, students learn and apply the concepts of statistical mechanics, quantum mechanics, and classical thermodynamics in the analysis of the (i) constant volume heating, (ii)…

  4. A Model of Statistics Performance Based on Achievement Goal Theory.

    ERIC Educational Resources Information Center

    Bandalos, Deborah L.; Finney, Sara J.; Geske, Jenenne A.

    2003-01-01

    Tests a model of statistics performance based on achievement goal theory. Both learning and performance goals affected achievement indirectly through study strategies, self-efficacy, and test anxiety. Implications of these findings for teaching and learning statistics are discussed. (Contains 47 references, 3 tables, 3 figures, and 1 appendix.)…

  5. [Statistical prediction methods in violence risk assessment and its application].

    PubMed

    Liu, Yuan-Yuan; Hu, Jun-Mei; Yang, Min; Li, Xiao-Song

    2013-06-01

    How to improve violence risk assessment is an urgent global problem. As a necessary part of risk assessment, statistical methods have remarkable impacts and effects. This study reviews prediction methods in violence risk assessment from a statistical point of view, covering logistic regression as an example of a multivariate statistical model, the decision tree model as an example of a data mining technique, and neural network models as an example of artificial intelligence technology. The study provides data intended to contribute to further research on violence risk assessment.

  6. Non-equilibrium dog-flea model

    NASA Astrophysics Data System (ADS)

    Ackerson, Bruce J.

    2017-11-01

    We develop the open dog-flea model to serve as a check of proposed non-equilibrium theories of statistical mechanics. The model is developed in detail. Then it is applied to four recent models for non-equilibrium statistical mechanics. Comparison of the dog-flea solution with these different models allows checking claims and giving a concrete example of the theoretical models.
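
    The classical (closed) dog-flea model of the Ehrenfests, on which the open variant builds, is easy to simulate; a minimal sketch showing relaxation toward the equipartition value N/2:

        import numpy as np

        # Ehrenfest dog-flea model: N fleas sit on two dogs; at each step a
        # flea chosen uniformly at random jumps to the other dog.
        rng = np.random.default_rng(4)
        N, steps = 100, 5000
        on_dog_A = N                              # start with all fleas on A
        history = []
        for _ in range(steps):
            if rng.random() < on_dog_A / N:       # picked flea sits on A
                on_dog_A -= 1
            else:
                on_dog_A += 1
            history.append(on_dog_A)
        print(np.mean(history[1000:]))            # relaxes toward N/2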

  7. Analysis and meta-analysis of single-case designs: an introduction.

    PubMed

    Shadish, William R

    2014-04-01

    The last 10 years have seen great progress in the analysis and meta-analysis of single-case designs (SCDs). This special issue includes five articles that provide an overview of current work on that topic, including standardized mean difference statistics, multilevel models, Bayesian statistics, and generalized additive models. Each article analyzes a common example across articles and presents syntax or macros for how to do them. These articles are followed by commentaries from single-case design researchers and journal editors. This introduction briefly describes each article and then discusses several issues that must be addressed before we can know what analyses will eventually be best to use in SCD research. These issues include modeling trend, modeling error covariances, computing standardized effect size estimates, assessing statistical power, incorporating more accurate models of outcome distributions, exploring whether Bayesian statistics can improve estimation given the small samples common in SCDs, and the need for annotated syntax and graphical user interfaces that make complex statistics accessible to SCD researchers. The article then discusses reasons why SCD researchers are likely to incorporate statistical analyses into their research more often in the future, including changing expectations and contingencies regarding SCD research from outside SCD communities, changes and diversity within SCD communities, corrections of erroneous beliefs about the relationship between SCD research and statistics, and demonstrations of how statistics can help SCD researchers better meet their goals. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  8. Statistical Ensemble of Large Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large scale velocity fields is used to propose an ensemble averaged version of the dynamic model. This produces local model parameters that only depend on the statistical properties of the flow. An important property of the ensemble averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LES's provides statistics of the large scale velocity that can be used for building new models for the subgrid-scale stress tensor. The ensemble averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and the time developing plane wake. It is found that the results are almost independent of the number of LES's in the statistical ensemble provided that the ensemble contains at least 16 realizations.

  9. Comparing estimates of climate change impacts from process-based and statistical crop models

    NASA Astrophysics Data System (ADS)

    Lobell, David B.; Asseng, Senthold

    2017-01-01

    The potential impacts of climate change on crop productivity are of widespread interest to those concerned with addressing climate change and improving global food security. Two common approaches to assess these impacts are process-based simulation models, which attempt to represent key dynamic processes affecting crop yields, and statistical models, which estimate functional relationships between historical observations of weather and yields. Examples of both approaches are increasingly found in the scientific literature, although often published in different disciplinary journals. Here we compare published sensitivities to changes in temperature, precipitation, carbon dioxide (CO2), and ozone from each approach for the subset of crops, locations, and climate scenarios for which both have been applied. Despite a common perception that statistical models are more pessimistic, we find no systematic differences between the predicted sensitivities to warming from process-based and statistical models up to +2 °C, with limited evidence at higher levels of warming. For precipitation, there are many reasons why estimates could be expected to differ, but few estimates exist to develop robust comparisons, and precipitation changes are rarely the dominant factor for predicting impacts given the prominent role of temperature, CO2, and ozone changes. A common difference between process-based and statistical studies is that the former tend to include the effects of CO2 increases that accompany warming, whereas statistical models typically do not. Major needs moving forward include incorporating CO2 effects into statistical studies, improving both approaches’ treatment of ozone, and increasing the use of both methods within the same study. At the same time, those who fund or use crop model projections should understand that in the short-term, both approaches when done well are likely to provide similar estimates of warming impacts, with statistical models generally requiring fewer resources to produce robust estimates, especially when applied to crops beyond the major grains.

  10. Modeling the sound transmission between rooms coupled through partition walls by using a diffusion model.

    PubMed

    Billon, Alexis; Foy, Cédric; Picaut, Judicaël; Valeau, Vincent; Sakout, Anas

    2008-06-01

    In this paper, a modification of the diffusion model for room acoustics is proposed to account for sound transmission between two rooms, a source room and an adjacent room, which are coupled through a partition wall. A system of two diffusion equations, one for each room, together with a set of two boundary conditions, one for the partition wall and one for the other walls of a room, is obtained and numerically solved. The modified diffusion model is validated by numerical comparisons with the statistical theory for several coupled-room configurations by varying the coupling area surface, the absorption coefficient of each room, and the volume of the adjacent room. An experimental comparison is also carried out for two coupled classrooms. The modified diffusion model results agree very well with both the statistical theory and the experimental data. The diffusion model can then be used as an alternative to the statistical theory, especially when the statistical theory is not applicable, that is, when the reverberant sound field is not diffuse. Moreover, the diffusion model allows the prediction of the spatial distribution of sound energy within each coupled room, while the statistical theory gives only one sound level for each room.

  11. An order statistics approach to the halo model for galaxies

    NASA Astrophysics Data System (ADS)

    Paul, Niladri; Paranjape, Aseem; Sheth, Ravi K.

    2017-04-01

    We use the halo model to explore the implications of assuming that galaxy luminosities in groups are randomly drawn from an underlying luminosity function. We show that even the simplest of such order statistics models - one in which this luminosity function p(L) is universal - naturally produces a number of features associated with previous analyses based on the 'central plus Poisson satellites' hypothesis. These include the monotonic relation of mean central luminosity with halo mass, the lognormal distribution around this mean and the tight relation between the central and satellite mass scales. In stark contrast to observations of galaxy clustering, however, this model predicts no luminosity dependence of large-scale clustering. We then show that an extended version of this model, based on the order statistics of a halo mass dependent luminosity function p(L|m), is in much better agreement with the clustering data as well as satellite luminosities, but systematically underpredicts central luminosities. This brings into focus the idea that central galaxies constitute a distinct population that is affected by different physical processes than are the satellites. We model this physical difference as a statistical brightening of the central luminosities, over and above the order statistics prediction. The magnitude gap between the brightest and second brightest group galaxy is predicted as a by-product, and is also in good agreement with observations. We propose that this order statistics framework provides a useful language in which to compare the halo model for galaxies with more physically motivated galaxy formation models.
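
    A toy version of the simplest variant (universal p(L); the halo-mass-dependent p(L|m) and the central brightening are omitted), in which the brightest of N draws is labeled the central:

        import numpy as np

        rng = np.random.default_rng(5)

        def sample_group(n_gal, alpha=1.5, L_star=1.0):
            # Illustrative stand-in for the luminosity function p(L).
            return L_star * rng.gamma(alpha, 1.0, n_gal)

        centrals, gaps = [], []
        for _ in range(10000):
            L = np.sort(sample_group(rng.poisson(8) + 1))[::-1]
            centrals.append(L[0])                 # brightest draw = central
            if L.size > 1:
                gaps.append(np.log10(L[0] / L[1]))  # magnitude-gap analogue
        print(np.mean(centrals), np.mean(gaps))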

  12. How to interpret the results of medical time series data analysis: Classical statistical approaches versus dynamic Bayesian network modeling.

    PubMed

    Onisko, Agnieszka; Druzdzel, Marek J; Austin, R Marshall

    2016-01-01

    Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. This paper offers a comparison of two approaches to analysis of medical time series data: (1) classical statistical approach, such as the Kaplan-Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. The main outcomes of our comparison are cervical cancer risk assessments produced by the three approaches. However, our analysis also discusses several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Our study shows that the Bayesian approach is (1) much more flexible in terms of modeling effort and (2) offers an individualized risk assessment, which is more cumbersome for classical statistical approaches.
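
    For reference, the classical side of the comparison is compact; a minimal Kaplan-Meier product-limit estimator on illustrative data (the dynamic Bayesian network side is far less compact and is not sketched here):

        import numpy as np

        # Minimal Kaplan-Meier estimator: one arm, right-censored times.
        def kaplan_meier(time, event):
            order = np.argsort(time)
            time, event = time[order], event[order]
            at_risk = len(time)
            S, surv = 1.0, {}
            for t, e in zip(time, event):
                if e:                             # event (not censored)
                    S *= (at_risk - 1) / at_risk
                surv[t] = S
                at_risk -= 1
            return surv

        t = np.array([2.0, 3.0, 3.0, 5.0, 8.0, 9.0])
        e = np.array([1, 1, 0, 1, 0, 1])          # 1 = event, 0 = censored
        print(kaplan_meier(t, e))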

  13. High-temperature behavior of a deformed Fermi gas obeying interpolating statistics.

    PubMed

    Algin, Abdullah; Senay, Mustafa

    2012-04-01

    An outstanding idea originally introduced by Greenberg is to investigate whether there is equivalence between intermediate statistics, which may differ from anyonic statistics, and q-deformed particle algebra. A model studied to address this idea could also provide new insight into the interactions of particles as well as their internal structures. Motivated mainly by this idea, in this work we consider a q-deformed Fermi gas model whose statistical properties enable us to study interpolating statistics effectively. Starting with a generalized Fermi-Dirac distribution function, we derive several thermostatistical functions of a gas of these deformed fermions in the thermodynamic limit. We study the high-temperature behavior of the system by analyzing the effects of q deformation on the most important thermostatistical characteristics of the system, such as the entropy, specific heat, and equation of state. It is shown that such a deformed fermion model in two and three spatial dimensions exhibits interpolating statistics in a specific interval of the model deformation parameter 0 < q < 1. In particular, for two and three spatial dimensions, it is found from the behavior of the third virial coefficient of the model that the deformation parameter q interpolates completely between attractive and repulsive systems, including the free boson and fermion cases. From the results obtained in this work, we conclude that such a model could provide much physical insight into some interacting theories of fermions, and could be useful for further study of particle systems with intermediate statistics.

  14. Progress of statistical analysis in biomedical research through the historical review of the development of the Framingham score.

    PubMed

    Ignjatović, Aleksandra; Stojanović, Miodrag; Milošević, Zoran; Anđelković Apostolović, Marija

    2017-12-02

    The interest in developing risk models in medicine is not only appealing but also associated with many obstacles in different aspects of predictive model development. Initially, the association of one or more biomarkers with a specific outcome was established by statistical significance, but novel and demanding questions required the development of new and more complex statistical techniques. The progress of statistical analysis in biomedical research can best be observed through the history of the Framingham study and the development of the Framingham score. Evaluation of predictive models rests on a combination of results from several metrics. Beyond logistic regression and Cox proportional hazards regression analysis, the calibration test and ROC curve analysis should be mandatory and eliminatory, and a central place should be taken by newer statistical techniques. To obtain complete information about a new marker in a model, it is now recommended to use reclassification tables, calculating the net reclassification index and the integrated discrimination improvement. Decision curve analysis is a novel method for evaluating the clinical usefulness of a predictive model. It may be noted that customizing and fine-tuning of the Framingham risk score drove the development of statistical analysis. A clinically applicable predictive model should be a trade-off between all the abovementioned statistical metrics: a trade-off between calibration and discrimination, accuracy and decision-making, costs and benefits, and quality and quantity of a patient's life.

  15. Canonical Statistical Model for Maximum Expected Immission of Wire Conductor in an Aperture Enclosure

    NASA Technical Reports Server (NTRS)

    Bremner, Paul G.; Vazquez, Gabriel; Christiano, Daniel J.; Trout, Dawn H.

    2016-01-01

    Prediction of the maximum expected electromagnetic pick-up of conductors inside a realistic shielding enclosure is an important canonical problem for system-level EMC design of spacecraft, launch vehicles, aircraft and automobiles. This paper introduces a simple statistical power balance model for prediction of the maximum expected current in a wire conductor inside an aperture enclosure. It calculates both the statistical mean and variance of the immission from the physical design parameters of the problem. Familiar probability density functions can then be used to predict the maximum expected immission for design purposes. The statistical power balance model requires minimal EMC design information and solves orders of magnitude faster than existing numerical models, making it ultimately viable for scaled-up, full system-level modeling. Both experimental test results and full wave simulation results are used to validate the foundational model.

  16. Directional statistics-based reflectance model for isotropic bidirectional reflectance distribution functions.

    PubMed

    Nishino, Ko; Lombardi, Stephen

    2011-01-01

    We introduce a novel parametric bidirectional reflectance distribution function (BRDF) model that can accurately encode a wide variety of real-world isotropic BRDFs with a small number of parameters. The key observation we make is that a BRDF may be viewed as a statistical distribution on a unit hemisphere. We derive a novel directional statistics distribution, which we refer to as the hemispherical exponential power distribution, and model real-world isotropic BRDFs as mixtures of it. We derive a canonical probabilistic method for estimating the parameters, including the number of components, of this novel directional statistics BRDF model. We show that the model captures the full spectrum of real-world isotropic BRDFs with high accuracy, but a small footprint. We also demonstrate the advantages of the novel BRDF model by showing its use for reflection component separation and for exploring the space of isotropic BRDFs.

  17. A Conditional Curie-Weiss Model for Stylized Multi-group Binary Choice with Social Interaction

    NASA Astrophysics Data System (ADS)

    Opoku, Alex Akwasi; Edusei, Kwame Owusu; Ansah, Richard Kwame

    2018-04-01

    This paper proposes a conditional Curie-Weiss model as a model for decision making in a stylized society made up of binary decision makers who face a dichotomous choice between two options. Following Brock and Durlauf (Discrete choice with social interactions I: theory, 1995), we set up both socio-economic and statistical mechanical models for the choice problem. We point out when the socio-economic and statistical mechanical models give rise to the same self-consistent equilibrium mean choice level(s). The phase diagram of the associated statistical mechanical model and its socio-economic implications are discussed.
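
    In the standard mean-field treatment, the self-consistent equilibrium mean choice level of such a Curie-Weiss-type model solves m = tanh(beta (J m + h)); a minimal fixed-point sketch (symbol names and numbers illustrative):

        import numpy as np

        # Solve m = tanh(beta * (J * m + h)) by fixed-point iteration.
        def mean_choice(beta, J, h, m0=0.2, iters=1000):
            m = m0
            for _ in range(iters):
                m = np.tanh(beta * (J * m + h))
            return m

        # Below and above the mean-field transition at beta * J = 1.
        for beta in (0.5, 1.5):
            print(beta, mean_choice(beta, J=1.0, h=0.0))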

  18. Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics

    PubMed Central

    Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter

    2010-01-01

    Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
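
    The generative step of a gaussian scale mixture is compact; a single-mixer sketch (the paper's extension assigns each input probabilistically among several mixers, which is omitted here):

        import numpy as np

        # GSM: responses x = v * g, with g jointly gaussian (local filter
        # structure) and v a shared positive mixer (scale) variable.
        rng = np.random.default_rng(6)
        n_filters, n_samples = 4, 10000
        cov = 0.5 * np.eye(n_filters) + 0.5       # correlated gaussian block
        g = rng.multivariate_normal(np.zeros(n_filters), cov, n_samples)
        v = np.sqrt(rng.gamma(2.0, 1.0, (n_samples, 1)))   # mixer variable
        x = v * g                                 # heavy-tailed, filter-like

        # Marginal kurtosis exceeds the gaussian value of 3, as for filter
        # responses to natural images.
        print(np.mean(x**4, axis=0) / np.mean(x**2, axis=0) ** 2)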

  19. Computational and Statistical Models: A Comparison for Policy Modeling of Childhood Obesity

    NASA Astrophysics Data System (ADS)

    Mabry, Patricia L.; Hammond, Ross; Ip, Edward Hak-Sing; Huang, Terry T.-K.

    As systems science methodologies have begun to emerge as a set of innovative approaches to address complex problems in behavioral, social science, and public health research, some apparent conflicts with traditional statistical methodologies for public health have arisen. Computational modeling is an approach set in context that integrates diverse sources of data to test the plausibility of working hypotheses and to elicit novel ones. Statistical models are reductionist approaches geared towards testing the null hypothesis. While these two approaches may seem contrary to each other, we propose that they are in fact complementary and can be used jointly to advance solutions to complex problems. Outputs from statistical models can be fed into computational models, and outputs from computational models can lead to further empirical data collection and statistical models. Together, this presents an iterative process that refines the models and contributes to a greater understanding of the problem and its potential solutions. The purpose of this panel is to foster communication and understanding between statistical and computational modelers. Our goal is to shed light on the differences between the approaches and convey what kinds of research inquiries each one is best for addressing and how they can serve complementary (and synergistic) roles in the research process, to mutual benefit. For each approach the panel will cover the relevant "assumptions" and how the differences in what is assumed can foster misunderstandings. The interpretations of the results from each approach will be compared and contrasted and the limitations for each approach will be delineated. We will use illustrative examples from CompMod, the Comparative Modeling Network for Childhood Obesity Policy. The panel will also incorporate interactive discussions with the audience on the issues raised here.

  20. Statistical characteristics of trajectories of diamagnetic unicellular organisms in a magnetic field.

    PubMed

    Gorobets, Yu I; Gorobets, O Yu

    2015-01-01

    A statistical model is proposed in this paper for describing the orientation of trajectories of unicellular diamagnetic organisms in a magnetic field. A statistical parameter, the effective energy, is calculated on the basis of this model. The resulting effective energy is a statistical characteristic of the trajectories of diamagnetic microorganisms in a magnetic field connected with their metabolism. The statistical model is applicable in the case when the energy of the thermal motion of the bacteria is negligible in comparison with their energy in a magnetic field and the bacteria manifest significant "active random movement", i.e., randomizing motion of a non-thermal nature, for example, movement by means of flagella. The energy of this randomizing active self-motion of the bacteria is characterized by a new statistical parameter for biological objects, which replaces the energy of randomizing thermal motion in the calculation of the statistical distribution. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Modified Distribution-Free Goodness-of-Fit Test Statistic.

    PubMed

    Chun, So Yeon; Browne, Michael W; Shapiro, Alexander

    2018-03-01

    Covariance structure analysis and its structural equation modeling extensions have become one of the most widely used methodologies in social sciences such as psychology, education, and economics. An important issue in such analysis is to assess the goodness of fit of a model under analysis. One of the most popular test statistics used in covariance structure analysis is the asymptotically distribution-free (ADF) test statistic introduced by Browne (Br J Math Stat Psychol 37:62-83, 1984). The ADF statistic can be used to test models without any specific distribution assumption (e.g., multivariate normal distribution) of the observed data. Despite its advantage, it has been shown in various empirical studies that unless sample sizes are extremely large, this ADF statistic could perform very poorly in practice. In this paper, we provide a theoretical explanation for this phenomenon and further propose a modified test statistic that improves the performance in samples of realistic size. The proposed statistic deals with the possible ill-conditioning of the involved large-scale covariance matrices.

  2. Probability, statistics, and computational science.

    PubMed

    Beerenwinkel, Niko; Siebourg, Juliane

    2012-01-01

    In this chapter, we review basic concepts from probability theory and computational statistics that are fundamental to evolutionary genomics. We provide a very basic introduction to statistical modeling and discuss general principles, including maximum likelihood and Bayesian inference. Markov chains, hidden Markov models, and Bayesian network models are introduced in more detail as they occur frequently and in many variations in genomics applications. In particular, we discuss efficient inference algorithms and methods for learning these models from partially observed data. Several simple examples are given throughout the text, some of which point to models that are discussed in more detail in subsequent chapters.
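    As a rough illustration of the kind of model reviewed in the chapter (not code from the chapter itself), the sketch below evaluates the likelihood of a nucleotide sequence under a hypothetical two-state hidden Markov model using the forward algorithm; all parameter values are invented for the example.

```python
import numpy as np

# Hypothetical 2-state HMM: state 0 = AT-rich region, state 1 = GC-rich region;
# observations are nucleotides encoded as A=0, C=1, G=2, T=3.
pi = np.array([0.5, 0.5])                 # initial state distribution
A = np.array([[0.9, 0.1],                 # state transition matrix
              [0.2, 0.8]])
B = np.array([[0.35, 0.15, 0.15, 0.35],   # emission probabilities over A,C,G,T
              [0.15, 0.35, 0.35, 0.15]])

def forward_loglik(obs):
    """Log-likelihood of an observation sequence via the forward algorithm,
    with per-step rescaling of the forward variables to avoid underflow."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()                   # rescaling constant
        loglik += np.log(c)
        alpha /= c
    return loglik

seq = [{"A": 0, "C": 1, "G": 2, "T": 3}[ch] for ch in "GGCACTGAATTAA"]
print(forward_loglik(seq))                # log P(sequence | model)
```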

  3. Peer Review of EPA's Draft BMDS Document: Exponential ...

    EPA Pesticide Factsheets

    BMDS is one of the Agency's premier tools for risk assessment; therefore, the validity and reliability of its statistical models are of paramount importance. This page provides links to peer reviews of the BMDS applications and models as they were developed and eventually released, documenting the rigorous review process undertaken to provide the best science tools available for statistical modeling.

  4. Probability of Detection (POD) as a statistical model for the validation of qualitative methods.

    PubMed

    Wehling, Paul; LaBudde, Robert A; Brunelle, Sharon L; Nelson, Maria T

    2011-01-01

    A statistical model is presented for use in validation of qualitative methods. This model, termed Probability of Detection (POD), harmonizes the statistical concepts and parameters between quantitative and qualitative method validation. POD characterizes method response with respect to concentration as a continuous variable. The POD model provides a tool for graphical representation of response curves for qualitative methods. In addition, the model allows comparisons between candidate and reference methods, and provides calculations of repeatability, reproducibility, and laboratory effects from collaborative study data. Single laboratory study and collaborative study examples are given.
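    A minimal sketch of the POD idea, assuming a logistic dependence of detection probability on log concentration; the data values below are invented, and the fitting choice (binomial maximum likelihood via Nelder-Mead) is ours rather than necessarily the paper's.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative collaborative-study style data (hypothetical): spiking levels
# in CFU/g, replicates per level, and number of positive results per level.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0])
n    = np.array([12, 12, 12, 12, 12])
pos  = np.array([1, 4, 9, 11, 12])

def neg_loglik(theta):
    a, b = theta
    p = 1.0 / (1.0 + np.exp(-(a + b * np.log(conc))))  # logistic POD curve
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(pos * np.log(p) + (n - pos) * np.log(1 - p))

fit = minimize(neg_loglik, x0=[0.0, 1.0], method="Nelder-Mead")
a, b = fit.x
lod50 = np.exp(-a / b)   # concentration at which POD = 0.5
print(f"POD(c) = logistic({a:.2f} + {b:.2f} ln c), LOD50 = {lod50:.2f} CFU/g")
```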

  5. Statistical error model for a solar electric propulsion thrust subsystem

    NASA Technical Reports Server (NTRS)

    Bantell, M. H.

    1973-01-01

    The solar electric propulsion thrust subsystem statistical error model was developed as a tool for investigating the effects of thrust subsystem parameter uncertainties on navigation accuracy. The model is currently being used to evaluate the impact of electric engine parameter uncertainties on navigation system performance for a baseline mission to Encke's Comet in the 1980s. The data given represent the next generation in statistical error modeling for low-thrust applications. Principal improvements include the representation of thrust uncertainties and random process modeling in terms of random parametric variations in the thrust vector process for a multi-engine configuration.

  6. Two Paradoxes in Linear Regression Analysis.

    PubMed

    Feng, Ge; Peng, Jing; Tu, Dongke; Zheng, Julia Z; Feng, Changyong

    2016-12-25

    Regression is one of the favorite tools in applied statistics. However, misuse and misinterpretation of results from regression analysis are common in biomedical research. In this paper we use statistical theory and simulation studies to clarify some paradoxes around this popular statistical method. In particular, we show that a widely used model selection procedure employed in many publications in top medical journals is wrong. Formal procedures based on solid statistical theory should be used in model selection.
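    The paper does not publish its simulation code; the sketch below is our own minimal illustration of why naive inference after data-driven predictor selection is wrong: with a pure-noise outcome, picking the most correlated of 20 candidate predictors and then reporting its unadjusted p-value yields far more than the nominal 5% of "significant" findings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_obs, n_pred, n_sim = 50, 20, 2000
false_pos = 0
for _ in range(n_sim):
    X = rng.standard_normal((n_obs, n_pred))
    y = rng.standard_normal(n_obs)      # y is pure noise: no true effects
    # "Selection": keep the single predictor with the largest |correlation|
    r = np.abs([stats.pearsonr(X[:, j], y)[0] for j in range(n_pred)])
    j = int(np.argmax(r))
    # Naive inference on the selected predictor, ignoring the selection step
    _, p = stats.pearsonr(X[:, j], y)
    false_pos += (p < 0.05)
print(f"nominal level 0.05, actual false-positive rate = {false_pos / n_sim:.2f}")
```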

  7. Autoregressive statistical pattern recognition algorithms for damage detection in civil structures

    NASA Astrophysics Data System (ADS)

    Yao, Ruigen; Pakzad, Shamim N.

    2012-08-01

    Statistical pattern recognition has recently emerged as a promising set of complementary methods to system identification for automatic structural damage assessment. Its essence is to use well-known concepts in statistics for boundary definition of different pattern classes, such as those for damaged and undamaged structures. In this paper, several statistical pattern recognition algorithms using autoregressive models, including statistical control charts and hypothesis testing, are reviewed as potentially competitive damage detection techniques. To enhance the performance of statistical methods, new feature extraction techniques using model spectra and residual autocorrelation, together with resampling-based threshold construction methods, are proposed. Subsequently, simulated acceleration data from a multi degree-of-freedom system is generated to test and compare the efficiency of the existing and proposed algorithms. Data from laboratory experiments conducted on a truss and a large-scale bridge slab model are then used to further validate the damage detection methods and demonstrate the superior performance of proposed algorithms.
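    As a hedged sketch of the general approach (not the authors' specific algorithms), the following fits an AR model to simulated "healthy" acceleration data by least squares and uses the residual standard deviation on new data as a damage-sensitive feature; the AR(2) dynamics and the simulated "damage" are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_ar(x, p):
    """Least-squares fit of AR(p): x[t] = a1*x[t-1] + ... + ap*x[t-p] + e[t]."""
    X = np.column_stack([x[p - k : len(x) - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef

def ar_residuals(x, coef):
    p = len(coef)
    X = np.column_stack([x[p - k : len(x) - k] for k in range(1, p + 1)])
    return x[p:] - X @ coef

def simulate(a1, a2, n=2000, noise=1.0):
    """Hypothetical structural acceleration response as an AR(2) process."""
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = a1 * x[t - 1] + a2 * x[t - 2] + noise * rng.standard_normal()
    return x

baseline = simulate(1.5, -0.75)       # healthy condition
damaged  = simulate(1.4, -0.70)       # slightly altered dynamics ("damage")

coef = fit_ar(baseline, p=2)          # reference model from healthy data
s0 = ar_residuals(baseline, coef).std()
s1 = ar_residuals(damaged, coef).std()
# A residual-std ratio well above 1 flags a change in the structure.
print(f"residual-std ratio (damaged/healthy) = {s1 / s0:.2f}")
```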

  8. Statistical Methodologies to Integrate Experimental and Computational Research

    NASA Technical Reports Server (NTRS)

    Parker, P. A.; Johnson, R. T.; Montgomery, D. C.

    2008-01-01

    Development of advanced algorithms for simulating engine flow paths requires the integration of fundamental experiments with the validation of enhanced mathematical models. In this paper, we provide an overview of statistical methods to strategically and efficiently conduct experiments and computational model refinement. Moreover, the integration of experimental and computational research efforts is emphasized. With a statistical engineering perspective, scientific and engineering expertise is combined with statistical sciences to gain deeper insights into experimental phenomena and code development performance, supporting the overall research objectives. The particular statistical methods discussed are design of experiments, response surface methodology, and uncertainty analysis and planning. Their application is illustrated with a coaxial free jet experiment and a turbulence model refinement investigation. Our goal is to provide an overview, focusing on concepts rather than practice, to demonstrate the benefits of using statistical methods in research and development, thereby encouraging their broader and more systematic application.
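    A minimal sketch of the design-of-experiments and response-surface workflow the paper discusses, with an invented quadratic "experiment" standing in for the coaxial free jet: a face-centered design in two coded factors is run and a second-order response surface is fit by least squares. The design and response function are assumptions for illustration only.

```python
import numpy as np

# Face-centered central composite design in two coded factors (illustrative).
pts = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],   # factorial points
                [-1, 0], [1, 0], [0, -1], [0, 1],     # axial (face) points
                [0, 0], [0, 0], [0, 0]])              # center replicates

rng = np.random.default_rng(2)

def run_experiment(x1, x2):
    # Hypothetical "experiment": quadratic surface plus measurement noise.
    return (5 + 2*x1 - 1.5*x2 + 0.8*x1*x2 - 1.2*x1**2 + 0.5*x2**2
            + 0.1 * rng.standard_normal())

y = np.array([run_experiment(x1, x2) for x1, x2 in pts])

# Second-order response surface: y ~ b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
x1, x2 = pts[:, 0], pts[:, 1]
X = np.column_stack([np.ones(len(y)), x1, x2, x1*x2, x1**2, x2**2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted coefficients:", np.round(b, 2))
```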

  9. Comparison of Neural Network and Linear Regression Models in Statistically Predicting Mental and Physical Health Status of Breast Cancer Survivors

    DTIC Science & Technology

    2015-07-15

    Long-term effects on cancer survivors’ quality of life of physical training versus physical training combined with cognitive-behavioral therapy.

  10. Full Counting Statistics for Interacting Fermions with Determinantal Quantum Monte Carlo Simulations.

    PubMed

    Humeniuk, Stephan; Büchler, Hans Peter

    2017-12-08

    We present a method for computing the full probability distribution function of quadratic observables such as particle number or magnetization for the Fermi-Hubbard model within the framework of determinantal quantum Monte Carlo calculations. Especially in cold atom experiments with single-site resolution, such a full counting statistics can be obtained from repeated projective measurements. We demonstrate that the full counting statistics can provide important information on the size of preformed pairs. Furthermore, we compute the full counting statistics of the staggered magnetization in the repulsive Hubbard model at half filling and find excellent agreement with recent experimental results. We show that current experiments are capable of probing the difference between the Hubbard model and the limiting Heisenberg model.

  11. Geographic and temporal validity of prediction models: Different approaches were useful to examine model performance

    PubMed Central

    Austin, Peter C.; van Klaveren, David; Vergouwe, Yvonne; Nieboer, Daan; Lee, Douglas S.; Steyerberg, Ewout W.

    2017-01-01

    Objective Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting We illustrated different analytic methods for validation using a sample of 14,857 patients hospitalized with heart failure at 90 hospitals in two distinct time periods. Bootstrap resampling was used to assess internal validity. Meta-analytic methods were used to assess geographic transportability. Each hospital was used once as a validation sample, with the remaining hospitals used for model derivation. Hospital-specific estimates of discrimination (c-statistic) and calibration (calibration intercepts and slopes) were pooled using random effects meta-analysis methods. I2 statistics and prediction interval width quantified geographic transportability. Temporal transportability was assessed using patients from the earlier period for model derivation and patients from the later period for model validation. Results Estimates of reproducibility, pooled hospital-specific performance, and temporal transportability were on average very similar, with c-statistics of 0.75. Between-hospital variation was moderate according to I2 statistics and prediction intervals for c-statistics. Conclusion This study illustrates how performance of prediction models can be assessed in settings with multicenter data at different time periods. PMID:27262237

  12. A statistical rain attenuation prediction model with application to the advanced communication technology satellite project. Part 2: Theoretical development of a dynamic model and application to rain fade durations and tolerable control delays for fade countermeasures

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    1987-01-01

    A dynamic rain attenuation prediction model is developed for use in obtaining the temporal characteristics, on time scales of minutes or hours, of satellite communication link availability. Analogous to the associated static rain attenuation model, which yields yearly attenuation predictions, this dynamic model is applicable at any location in the world that is characterized by the static rain attenuation statistics peculiar to the geometry of the satellite link and the rain statistics of the location. Such statistics are calculated by employing the formalism of Part I of this report. In fact, the dynamic model presented here is an extension of the static model and reduces to the static model in the appropriate limit. By assuming that rain attenuation is dynamically described by a first-order stochastic differential equation in time and that this random attenuation process is a Markov process, an expression for the associated transition probability is obtained by solving the related forward Kolmogorov equation. This transition probability is then used to obtain such temporal rain attenuation statistics as attenuation durations and allowable attenuation margins versus control system delay.

  13. Scaled test statistics and robust standard errors for non-normal data in covariance structure analysis: a Monte Carlo study.

    PubMed

    Chou, C P; Bentler, P M; Satorra, A

    1991-11-01

    Research studying robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), making no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic seemed to behave better than the ML test statistic and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to the violation of the normality assumption when the data had either symmetric, platykurtic distributions or non-symmetric distributions with zero kurtosis.

  14. Quantifying the impact of between-study heterogeneity in multivariate meta-analyses

    PubMed Central

    Jackson, Dan; White, Ian R; Riley, Richard D

    2012-01-01

    Measures that quantify the impact of heterogeneity in univariate meta-analysis, including the very popular I2 statistic, are now well established. Multivariate meta-analysis, where studies provide multiple outcomes that are pooled in a single analysis, is also becoming more commonly used. The question of how to quantify heterogeneity in the multivariate setting is therefore raised. It is the univariate R2 statistic, the ratio of the variance of the estimated treatment effect under the random and fixed effects models, that generalises most naturally, so this statistic provides our basis. This statistic is then used to derive a multivariate analogue of I2. We also provide a multivariate H2 statistic, the ratio of a generalisation of Cochran's heterogeneity statistic and its associated degrees of freedom, together with an accompanying generalisation of the usual I2 statistic. Our proposed heterogeneity statistics can be used alongside all the usual estimates and inferential procedures used in multivariate meta-analysis. We apply our methods to some real datasets and show how our statistics are equally appropriate in the context of multivariate meta-regression, where study level covariate effects are included in the model. Our heterogeneity statistics may be used when applying any procedure for fitting the multivariate random effects model. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22763950

  15. Comparing geological and statistical approaches for element selection in sediment tracing research

    NASA Astrophysics Data System (ADS)

    Laceby, J. Patrick; McMahon, Joe; Evrard, Olivier; Olley, Jon

    2015-04-01

    Elevated suspended sediment loads reduce reservoir capacity and significantly increase the cost of operating water treatment infrastructure, making the management of sediment supply to reservoirs of increasing importance. Sediment fingerprinting techniques can be used to determine the relative contributions of different sources of sediment accumulating in reservoirs. The objective of this research is to compare geological and statistical approaches to element selection for sediment fingerprinting modelling. Time-integrated samplers (n=45) were used to obtain source samples from four major subcatchments flowing into the Baroon Pocket Dam in South East Queensland, Australia. The geochemistry of potential sources was compared to the geochemistry of sediment cores (n=12) sampled in the reservoir. The geochemical approach selected elements for modelling that provided expected, observed and statistical discrimination between sediment sources. Two statistical approaches selected elements for modelling with the Kruskal-Wallis H-test and Discriminatory Function Analysis (DFA). In particular, two different significance levels (0.05 & 0.35) for the DFA were included to investigate the importance of element selection on modelling results. A distribution model determined the relative contributions of different sources to sediment sampled in the Baroon Pocket Dam. Elemental discrimination was expected between one subcatchment (Obi Obi Creek) and the remaining subcatchments (Lexys, Falls and Bridge Creek). Six major elements were expected to provide discrimination. Of these six, only Fe2O3 and SiO2 provided expected, observed and statistical discrimination. Modelling results with this geological approach indicated 36% (+/- 9%) of sediment sampled in the reservoir cores was from mafic-derived sources and 64% (+/- 9%) was from felsic-derived sources. The geological and the first statistical approach (DFA0.05) differed by only 1% (σ 5%) for 5 out of 6 model groupings, with only the Lexys Creek modelling results differing significantly (35%). The statistical model with expanded elemental selection (DFA0.35) differed from the geological model by an average of 30% for all 6 models. Elemental selection for sediment fingerprinting therefore has the potential to impact modelling results. Accordingly, it is important to incorporate both robust geological and statistical approaches when selecting elements for sediment fingerprinting. For the Baroon Pocket Dam, management should focus on reducing the supply of sediments derived from felsic sources in each of the subcatchments.

  16. Teacher Effects, Value-Added Models, and Accountability

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2014-01-01

    Background: In the last decade, the effects of teachers on student performance (typically manifested as state-wide standardized tests) have been re-examined using statistical models that are known as value-added models. These statistical models aim to compute the unique contribution of the teachers in promoting student achievement gains from grade…

  17. Some Statistics for Assessing Person-Fit Based on Continuous-Response Models

    ERIC Educational Resources Information Center

    Ferrando, Pere Joan

    2010-01-01

    This article proposes several statistics for assessing individual fit based on two unidimensional models for continuous responses: linear factor analysis and Samejima's continuous response model. Both models are approached using a common framework based on underlying response variables and are formulated at the individual level as fixed regression…

  18. Statistical Modeling for Radiation Hardness Assurance

    NASA Technical Reports Server (NTRS)

    Ladbury, Raymond L.

    2014-01-01

    We cover the models and statistics associated with single event effects (and total ionizing dose), why we need them, and how to use them. We discuss what models are used, what errors exist in real test data, and what a model allows us to say about the device under test (DUT). In addition, we cover how to use other sources of data, such as historical, heritage, and similar-part data, and how to apply experience, physics, and expert opinion to the analysis. Concepts of Bayesian statistics, data fitting, and bounding rates are also included.

  19. Strategies for Testing Statistical and Practical Significance in Detecting DIF with Logistic Regression Models

    ERIC Educational Resources Information Center

    Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza

    2014-01-01

    This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…

  20. Interpolative modeling of GaAs FET S-parameter data bases for use in Monte Carlo simulations

    NASA Technical Reports Server (NTRS)

    Campbell, L.; Purviance, J.

    1992-01-01

    A statistical interpolation technique is presented for modeling GaAs FET S-parameter measurements for use in the statistical analysis and design of circuits. This is accomplished by interpolating among the measurements in a GaAs FET S-parameter data base in a statistically valid manner.

  1. The Importance of Statistical Modeling in Data Analysis and Inference

    ERIC Educational Resources Information Center

    Rollins, Derrick, Sr.

    2017-01-01

    Statistical inference simply means to draw a conclusion based on information that comes from data. Error bars are the most commonly used tool for data analysis and inference in chemical engineering data studies. This work demonstrates, using common types of data collection studies, the importance of specifying the statistical model for sound…

  2. Evaluating Item Fit for Multidimensional Item Response Models

    ERIC Educational Resources Information Center

    Zhang, Bo; Stone, Clement A.

    2008-01-01

    This research examines the utility of the S-X2 statistic proposed by Orlando and Thissen (2000) in evaluating item fit for multidimensional item response models. Monte Carlo simulation was conducted to investigate both the Type I error and statistical power of this fit statistic in analyzing two kinds of multidimensional test…

  3. Educational Statistics and School Improvement. Statistics and the Federal Role in Education.

    ERIC Educational Resources Information Center

    Hawley, Willis D.

    This paper focuses on how educational statistics might better serve the quest for educational improvement in elementary and secondary schools. A model for conceptualizing the sources and processes of school productivity is presented. The Learning Productivity Model suggests that school outcomes are the consequence of the interaction of five…

  4. Teaching Engineering Statistics with Technology, Group Learning, Contextual Projects, Simulation Models and Student Presentations

    ERIC Educational Resources Information Center

    Romeu, Jorge Luis

    2008-01-01

    This article discusses our teaching approach in graduate-level Engineering Statistics. It is based on the use of modern technology, learning groups, contextual projects, simulation models, and statistical and simulation software to stimulate student motivation. The use of technology to facilitate group projects and presentations, and to generate,…

  5. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    USGS Publications Warehouse

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    Climate envelope models are widely used to describe potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method, and there was low overlap in the variable sets (<40%) between the two methods. Despite these differences in variable sets (expert versus statistical), models had high performance metrics (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. Difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable selection is a useful first step, especially when there is a need to model a large number of species or expert knowledge of the species is limited. Expert input can then be used to refine models that seem unrealistic or for species that experts believe are particularly sensitive to change. It also emphasizes the importance of using multiple models to reduce uncertainty and improve map outputs for conservation planning. Where outputs overlap or show the same direction of change there is greater certainty in the predictions. Areas of disagreement can be used for learning by asking why the models do not agree, and may highlight areas where additional on-the-ground data collection could improve the models.

  6. Statistical modeling of natural backgrounds in hyperspectral LWIR data

    NASA Astrophysics Data System (ADS)

    Truslow, Eric; Manolakis, Dimitris; Cooley, Thomas; Meola, Joseph

    2016-09-01

    Hyperspectral sensors operating in the long wave infrared (LWIR) have a wealth of applications including remote material identification and rare target detection. While statistical models for modeling surface reflectance in visible and near-infrared regimes have been well studied, models for the temperature and emissivity in the LWIR have not been rigorously investigated. In this paper, we investigate modeling hyperspectral LWIR data using a statistical mixture model for the emissivity and surface temperature. Statistical models for the surface parameters can be used to simulate surface radiances and at-sensor radiance which drives the variability of measured radiance and ultimately the performance of signal processing algorithms. Thus, having models that adequately capture data variation is extremely important for studying performance trades. The purpose of this paper is twofold. First, we study the validity of this model using real hyperspectral data, and compare the relative variability of hyperspectral data in the LWIR and visible and near-infrared (VNIR) regimes. Second, we illustrate how materials that are easily distinguished in the VNIR, may be difficult to separate when imaged in the LWIR.

  7. Global Sensitivity Analysis of Environmental Systems via Multiple Indices based on Statistical Moments of Model Outputs

    NASA Astrophysics Data System (ADS)

    Guadagnini, A.; Riva, M.; Dell'Oca, A.

    2017-12-01

    We propose to ground sensitivity of uncertain parameters of environmental models on a set of indices based on the main (statistical) moments, i.e., mean, variance, skewness and kurtosis, of the probability density function (pdf) of a target model output. This enables us to perform Global Sensitivity Analysis (GSA) of a model in terms of multiple statistical moments and yields a quantification of the impact of model parameters on features driving the shape of the pdf of model output. Our GSA approach includes the possibility of being coupled with the construction of a reduced complexity model that allows approximating the full model response at a reduced computational cost. We demonstrate our approach through a variety of test cases. These include a commonly used analytical benchmark, a simplified model representing pumping in a coastal aquifer, a laboratory-scale tracer experiment, and the migration of fracturing fluid through a naturally fractured reservoir (source) to reach an overlying formation (target). Our strategy allows discriminating the relative importance of model parameters to the four statistical moments considered. We also provide an appraisal of the error associated with the evaluation of our sensitivity metrics by replacing the original system model through the selected surrogate model. Our results suggest that one might need to construct a surrogate model with increasing level of accuracy depending on the statistical moment considered in the GSA. The methodological framework we propose can assist the development of analysis techniques targeted to model calibration, design of experiment, uncertainty quantification and risk assessment.
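    The paper's specific indices are not reproduced here; the sketch below is one simplified moment-based variant: for each uncertain parameter, it measures how strongly conditioning on that parameter shifts the mean, variance, skewness and kurtosis of the model output, using binned Monte Carlo samples of an invented test model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def model(x):
    # Illustrative nonlinear model (stand-in for the environmental model).
    return x[:, 0]**2 + 2.0 * x[:, 1] + 0.5 * x[:, 0] * x[:, 2]

N, d = 100_000, 3
x = rng.uniform(-1, 1, size=(N, d))
y = model(x)
moments = {"mean": np.mean, "var": np.var,
           "skew": stats.skew, "kurt": stats.kurtosis}

# Moment-based sensitivity: average relative shift of each output moment
# when conditioning on one parameter (binned Monte Carlo estimate).
for name, m in moments.items():
    total = m(y)
    for i in range(d):
        bins = np.quantile(x[:, i], np.linspace(0, 1, 21))
        idx = np.digitize(x[:, i], bins[1:-1])
        cond = np.array([m(y[idx == b]) for b in range(20)])
        index = np.mean(np.abs(cond - total)) / (abs(total) + 1e-12)
        print(f"{name:>4s}  x{i+1}: {index:6.3f}")
```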

  8. Asking Sensitive Questions: A Statistical Power Analysis of Randomized Response Models

    ERIC Educational Resources Information Center

    Ulrich, Rolf; Schroter, Hannes; Striegel, Heiko; Simon, Perikles

    2012-01-01

    This article derives the power curves for a Wald test that can be applied to randomized response models when small prevalence rates must be assessed (e.g., detecting doping behavior among elite athletes). These curves enable the assessment of the statistical power that is associated with each model (e.g., Warner's model, crosswise model, unrelated…
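    A sketch of the power calculation for the simplest case, Warner's model, under the usual normal approximation for a two-sided Wald test; the design parameter p, prevalence values and sample sizes are illustrative, and the closed form below is a standard approximation rather than the article's exact derivation.

```python
import numpy as np
from scipy.stats import norm

def warner_power(n, p, pi0, pi1, alpha=0.05):
    """Approximate power of a two-sided Wald test of H0: pi = pi0 when the
    true prevalence is pi1, under Warner's randomized response design with
    probability p of receiving the sensitive question."""
    lam0 = p * pi0 + (1 - p) * (1 - pi0)   # P("yes") under H0
    lam1 = p * pi1 + (1 - p) * (1 - pi1)   # P("yes") under the alternative
    se0 = np.sqrt(lam0 * (1 - lam0) / n) / abs(2 * p - 1)
    se1 = np.sqrt(lam1 * (1 - lam1) / n) / abs(2 * p - 1)
    z = norm.ppf(1 - alpha / 2)
    delta = abs(pi1 - pi0)
    return norm.cdf((delta - z * se0) / se1) + norm.cdf((-delta - z * se0) / se1)

# Power curve over sample size for detecting 5% doping prevalence vs 0%:
for n in (200, 500, 1000, 2000, 5000):
    print(n, round(warner_power(n, p=0.7, pi0=0.0, pi1=0.05), 3))
```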

  9. WORKSHOP ON APPLICATION OF STATISTICAL METHODS TO BIOLOGICALLY-BASED PHARMACOKINETIC MODELING FOR RISK ASSESSMENT

    EPA Science Inventory

    Biologically-based pharmacokinetic models are being increasingly used in the risk assessment of environmental chemicals. These models are based on biological, mathematical, statistical and engineering principles. Their potential uses in risk assessment include extrapolation betwe...

  10. Counts-in-cylinders in the Sloan Digital Sky Survey with Comparisons to N-body Simulations

    NASA Astrophysics Data System (ADS)

    Berrier, Heather D.; Barton, Elizabeth J.; Berrier, Joel C.; Bullock, James S.; Zentner, Andrew R.; Wechsler, Risa H.

    2011-01-01

    Environmental statistics provide a necessary means of comparing the properties of galaxies in different environments, and a vital test of models of galaxy formation within the prevailing hierarchical cosmological model. We explore counts-in-cylinders, a common statistic defined as the number of companions of a particular galaxy found within a given projected radius and redshift interval. Galaxy distributions with the same two-point correlation functions do not necessarily have the same companion count distributions. We use this statistic to examine the environments of galaxies in the Sloan Digital Sky Survey Data Release 4 (SDSS DR4). We also make preliminary comparisons to four models for the spatial distributions of galaxies, based on N-body simulations and data from SDSS DR4, to study the utility of the counts-in-cylinders statistic. There is a very large scatter between the number of companions a galaxy has and the mass of its parent dark matter halo and the halo occupation, limiting the utility of this statistic for certain kinds of environmental studies. We also show that prevalent empirical models of galaxy clustering that match observed two- and three-point clustering statistics well fail to reproduce some aspects of the observed distribution of counts-in-cylinders on 1, 3, and 6 h^-1 Mpc scales. All models that we explore underpredict the fraction of galaxies with few or no companions in 3 and 6 h^-1 Mpc cylinders. Roughly 7% of galaxies in the real universe are significantly more isolated within a 6 h^-1 Mpc cylinder than the galaxies in any of the models we use. Simple phenomenological models that map galaxies to dark matter halos fail to reproduce high-order clustering statistics in low-density environments.
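    A brute-force sketch of the counts-in-cylinders statistic on a mock periodic-box catalog (positions invented; the third coordinate stands in for the redshift direction). Real analyses must also handle survey masks, fiber collisions and peculiar velocities, none of which are modeled here.

```python
import numpy as np

rng = np.random.default_rng(4)

L = 100.0                                  # box size, h^-1 Mpc
pos = rng.uniform(0, L, size=(5000, 3))    # mock galaxy positions

def counts_in_cylinders(pos, r_proj, half_depth):
    """Companions per galaxy within projected radius r_proj and line-of-sight
    separation +/- half_depth (brute force, periodic box)."""
    n = len(pos)
    counts = np.zeros(n, dtype=int)
    for i in range(n):
        d = np.abs(pos - pos[i])
        d = np.minimum(d, L - d)           # periodic wrapping
        in_cyl = (np.hypot(d[:, 0], d[:, 1]) < r_proj) & (d[:, 2] < half_depth)
        counts[i] = in_cyl.sum() - 1       # exclude the galaxy itself
    return counts

n3 = counts_in_cylinders(pos, r_proj=3.0, half_depth=10.0)
print("fraction with no companions in the 3 h^-1 Mpc cylinder:",
      np.mean(n3 == 0))
```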

  11. Atmospheric Tracer Inverse Modeling Using Markov Chain Monte Carlo (MCMC)

    NASA Astrophysics Data System (ADS)

    Kasibhatla, P.

    2004-12-01

    In recent years, there has been an increasing emphasis on the use of Bayesian statistical estimation techniques to characterize the temporal and spatial variability of atmospheric trace gas sources and sinks. The applications have been varied in terms of the particular species of interest, as well as in terms of the spatial and temporal resolution of the estimated fluxes. However, one common characteristic has been the use of relatively simple statistical models for describing the measurement and chemical transport model error statistics and prior source statistics. For example, multivariate normal probability distribution functions (pdfs) are commonly used to model these quantities and inverse source estimates are derived for fixed values of pdf parameters. While the advantage of this approach is that closed form analytical solutions for the a posteriori pdfs of interest are available, it is worth exploring Bayesian analysis approaches which allow for a more general treatment of error and prior source statistics. Here, we present an application of the Markov Chain Monte Carlo (MCMC) methodology to an atmospheric tracer inversion problem to demonstrate how more general statistical models for errors can be incorporated into the analysis in a relatively straightforward manner. The MCMC approach to Bayesian analysis, which has found wide application in a variety of fields, is a statistical simulation approach that involves computing moments of interest of the a posteriori pdf by efficiently sampling this pdf. The specific inverse problem that we focus on is the annual mean CO2 source/sink estimation problem considered by the TransCom3 project. TransCom3 was a collaborative effort involving various modeling groups and followed a common modeling and analysis protocol. As such, this problem provides a convenient case study to demonstrate the applicability of the MCMC methodology to atmospheric tracer source/sink estimation problems.
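    A toy version of the idea (not the TransCom3 setup): a random-walk Metropolis sampler for a linear source-estimation problem, where a non-Gaussian (Laplace) prior on the sources is handled just as easily as a Gaussian one. All matrices and values below are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy tracer inversion: observations y = A @ s_true + noise, where A maps
# two source strengths to five observation sites (all values illustrative).
A = rng.uniform(0.2, 1.0, size=(5, 2))
s_true = np.array([3.0, 1.5])
sigma = 0.1
y = A @ s_true + sigma * rng.standard_normal(5)

def log_post(s):
    # Gaussian measurement error; Laplace (double-exponential) prior on the
    # sources -- a non-Gaussian choice with no closed-form posterior.
    resid = y - A @ s
    return -0.5 * np.sum(resid**2) / sigma**2 - np.sum(np.abs(s - 2.0))

# Random-walk Metropolis sampler
n_iter, step = 50_000, 0.05
s = np.zeros(2)
lp = log_post(s)
chain = np.empty((n_iter, 2))
for k in range(n_iter):
    prop = s + step * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        s, lp = prop, lp_prop
    chain[k] = s
burned = chain[10_000:]                        # discard burn-in
print("posterior means:", burned.mean(axis=0), " true:", s_true)
```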

  12. Scale Dependence of Statistics of Spatially Averaged Rain Rate Seen in TOGA COARE Comparison with Predictions from a Stochastic Model

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Bell, T. L.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    A characteristic feature of rainfall statistics is that they in general depend on the space and time scales over which rain data are averaged. As a part of an earlier effort to determine the sampling error of satellite rain averages, a space-time model of rainfall statistics was developed to describe the statistics of gridded rain observed in GATE. The model allows one to compute the second moment statistics of space- and time-averaged rain rate which can be fitted to satellite or rain gauge data to determine the four model parameters appearing in the precipitation spectrum: an overall strength parameter, a characteristic length separating the long and short wavelength regimes, a characteristic relaxation time for decay of the autocorrelation of the instantaneous local rain rate, and a certain 'fractal' power law exponent. For area-averaged instantaneous rain rate, this exponent governs the power law dependence of these statistics on the averaging length scale L predicted by the model in the limit of small L. In particular, the variance of rain rate averaged over an L × L area exhibits a power law singularity as L → 0. In the present work the model is used to investigate how the statistics of area-averaged rain rate over the tropical Western Pacific measured with ship borne radar during TOGA COARE (Tropical Ocean Global Atmosphere Coupled Ocean Atmospheric Response Experiment) and gridded on a 2 km grid depend on the size of the spatial averaging scale. Good agreement is found between the data and predictions from the model over a wide range of averaging length scales.

  13. Statistics for characterizing data on the periphery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theiler, James P; Hush, Donald R

    2010-01-01

    We introduce a class of statistics for characterizing the periphery of a distribution, and show that these statistics are particularly valuable for problems in target detection. Because so many detection algorithms are rooted in Gaussian statistics, we concentrate on ellipsoidal models of high-dimensional data distributions (that is to say: covariance matrices), but we recommend several alternatives to the sample covariance matrix that more efficiently model the periphery of a distribution, and can more effectively detect anomalous data samples.

  14. Two Paradoxes in Linear Regression Analysis

    PubMed Central

    FENG, Ge; PENG, Jing; TU, Dongke; ZHENG, Julia Z.; FENG, Changyong

    2016-01-01

    Summary Regression is one of the favorite tools in applied statistics. However, misuse and misinterpretation of results from regression analysis are common in biomedical research. In this paper we use statistical theory and simulation studies to clarify some paradoxes around this popular statistical method. In particular, we show that a widely used model selection procedure employed in many publications in top medical journals is wrong. Formal procedures based on solid statistical theory should be used in model selection. PMID:28638214

  15. Customizing national models for a medical center's population to rapidly identify patients at high risk of 30-day all-cause hospital readmission following a heart failure hospitalization.

    PubMed

    Cox, Zachary L; Lai, Pikki; Lewis, Connie M; Lindenfeld, JoAnn; Collins, Sean P; Lenihan, Daniel J

    2018-05-28

    Nationally-derived models predicting 30-day readmissions following heart failure (HF) hospitalizations yield insufficient discrimination for institutional use. Develop a customized readmission risk model from Medicare-employed and institutionally-customized risk factors and compare the performance against national models in a medical center. Medicare patients age ≥ 65 years hospitalized for HF (n = 1,454) were studied in a derivation cohort and in a separate validation cohort (n = 243). All 30-day hospital readmissions were documented. The primary outcome was risk discrimination (c-statistic) compared to national models. A customized model demonstrated improved discrimination (c-statistic 0.72; 95% CI 0.69 - 0.74) compared to national models (c-statistics of 0.60 and 0.61) with a c-statistic of 0.63 in the validation cohort. Compared to national models, a customized model demonstrated superior readmission risk profiling by distinguishing a high-risk (38.3%) from a low-risk (9.4%) quartile. A customized model improved readmission risk discrimination from HF hospitalizations compared to national models. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Statistical wind analysis for near-space applications

    NASA Astrophysics Data System (ADS)

    Roney, Jason A.

    2007-09-01

    Statistical wind models were developed based on the existing observational wind data for near-space altitudes between 60 000 and 100 000 ft (18–30 km) above ground level (AGL) at two locations, Akron, OH, USA, and White Sands, NM, USA. These two sites are envisioned as playing a crucial role in the first flights of high-altitude airships. The analysis shown in this paper has not been previously applied to this region of the stratosphere for such an application. Standard statistics were compiled for these data such as mean, median, maximum wind speed, and standard deviation, and the data were modeled with Weibull distributions. These statistics indicated, on a yearly average, there is a lull or a “knee” in the wind between 65 000 and 72 000 ft AGL (20–22 km). From the standard statistics, trends at both locations indicated substantial seasonal variation in the mean wind speed at these heights. The yearly and monthly statistical modeling indicated that Weibull distributions were a reasonable model for the data. Forecasts and hindcasts were done by using a Weibull model based on 2004 data and comparing the model with the 2003 and 2005 data. The 2004 distribution was also a reasonable model for these years. Lastly, the Weibull distribution and cumulative function were used to predict the 50%, 95%, and 99% winds, which are directly related to the expected power requirements of a near-space station-keeping airship. These values indicated that using only the standard deviation of the mean may underestimate the operational conditions.
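    A minimal sketch of the Weibull workflow described above, using synthetic wind speeds in place of the radiosonde record for Akron or White Sands; scipy's weibull_min is fit with the location fixed at zero (as is usual for wind) and then inverted for the 50%, 95% and 99% winds.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Stand-in for observed wind speeds (m/s) at a fixed near-space altitude.
winds = stats.weibull_min.rvs(c=1.8, scale=12.0, size=500, random_state=rng)

# Fit a two-parameter Weibull (location fixed at zero).
c, loc, scale = stats.weibull_min.fit(winds, floc=0)

# Percentile winds that drive station-keeping power requirements.
for q in (0.50, 0.95, 0.99):
    v = stats.weibull_min.ppf(q, c, loc=0, scale=scale)
    print(f"{int(q*100)}% wind: {v:5.1f} m/s")
print(f"shape k = {c:.2f}, scale = {scale:.2f} m/s")
```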

  17. A Stochastic Fractional Dynamics Model of Space-time Variability of Rain

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Travis, James E.

    2013-01-01

    Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, that allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and times scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second moment statistics of radar data. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well without any further adjustment.

  18. Statistical limitations in functional neuroimaging. I. Non-inferential methods and statistical models.

    PubMed Central

    Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P

    1999-01-01

    Functional neuroimaging (FNI) provides experimental access to the intact living brain making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on assumptions and limitations of the methods reviewed. There are several methods available to analyse FNI data indicating that none is optimal for all purposes. In order to make optimal use of the methods available it is important to know the limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview over some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149

  19. Ultra-low-dose computed tomographic angiography with model-based iterative reconstruction compared with standard-dose imaging after endovascular aneurysm repair: a prospective pilot study.

    PubMed

    Naidu, Sailen G; Kriegshauser, J Scott; Paden, Robert G; He, Miao; Wu, Qing; Hara, Amy K

    2014-12-01

    An ultra-low-dose radiation protocol reconstructed with model-based iterative reconstruction was compared with our standard-dose protocol. This prospective study evaluated 20 men undergoing surveillance-enhanced computed tomography after endovascular aneurysm repair. All patients underwent standard-dose and ultra-low-dose venous phase imaging; images were compared after reconstruction with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction. Objective measures of aortic contrast attenuation and image noise were averaged. Images were subjectively assessed (1 = worst, 5 = best) for diagnostic confidence, image noise, and vessel sharpness. Aneurysm sac diameter and endoleak detection were compared. Quantitative image noise was 26% less with ultra-low-dose model-based iterative reconstruction than with standard-dose adaptive statistical iterative reconstruction and 58% less than with ultra-low-dose adaptive statistical iterative reconstruction. Average subjective noise scores were not different between ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction (3.8 vs. 4.0, P = .25). Subjective scores for diagnostic confidence were better with standard-dose adaptive statistical iterative reconstruction than with ultra-low-dose model-based iterative reconstruction (4.4 vs. 4.0, P = .002). Vessel sharpness was decreased with ultra-low-dose model-based iterative reconstruction compared with standard-dose adaptive statistical iterative reconstruction (3.3 vs. 4.1, P < .0001). Ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction aneurysm sac diameters were not significantly different (4.9 vs. 4.9 cm); concordance for the presence of endoleak was 100% (P < .001). Compared with a standard-dose technique, an ultra-low-dose model-based iterative reconstruction protocol provides comparable image quality and diagnostic assessment at a 73% lower radiation dose.

  20. Testing statistical self-similarity in the topology of river networks

    USGS Publications Warehouse

    Troutman, Brent M.; Mantilla, Ricardo; Gupta, Vijay K.

    2010-01-01

    Recent work has demonstrated that the topological properties of real river networks deviate significantly from predictions of Shreve's random model. At the same time the property of mean self-similarity postulated by Tokunaga's model is well supported by data. Recently, a new class of network model called random self-similar networks (RSN) that combines self-similarity and randomness has been introduced to replicate important topological features observed in real river networks. We investigate if the hypothesis of statistical self-similarity in the RSN model is supported by data on a set of 30 basins located across the continental United States that encompass a wide range of hydroclimatic variability. We demonstrate that the generators of the RSN model obey a geometric distribution, and self-similarity holds in a statistical sense in 26 of these 30 basins. The parameters describing the distribution of interior and exterior generators are tested to be statistically different and the difference is shown to produce the well-known Hack's law. The inter-basin variability of RSN parameters is found to be statistically significant. We also test generator dependence on two climatic indices, mean annual precipitation and radiative index of dryness. Some indication of climatic influence on the generators is detected, but this influence is not statistically significant with the sample size available. Finally, two key applications of the RSN model to hydrology and geomorphology are briefly discussed.

  1. Gene-Based Association Analysis for Censored Traits Via Fixed Effect Functional Regressions.

    PubMed

    Fan, Ruzong; Wang, Yifan; Yan, Qi; Ding, Ying; Weeks, Daniel E; Lu, Zhaohui; Ren, Haobo; Cook, Richard J; Xiong, Momiao; Swaroop, Anand; Chew, Emily Y; Chen, Wei

    2016-02-01

    Genetic studies of survival outcomes have been proposed and conducted recently, but statistical methods for identifying genetic variants that affect disease progression are rarely developed. Motivated by our ongoing real studies, here we develop Cox proportional hazard models using functional regression (FR) to perform gene-based association analysis of survival traits while adjusting for covariates. The proposed Cox models are fixed effect models where the genetic effects of multiple genetic variants are assumed to be fixed. We introduce likelihood ratio test (LRT) statistics to test for associations between the survival traits and multiple genetic variants in a genetic region. Extensive simulation studies demonstrate that the proposed Cox FR LRT statistics have well-controlled type I error rates. To evaluate power, we compare the Cox FR LRT with the previously developed burden test (BT) in a Cox model and sequence kernel association test (SKAT), which is based on mixed effect Cox models. The Cox FR LRT statistics have higher power than or similar power as Cox SKAT LRT except when 50%/50% causal variants had negative/positive effects and all causal variants are rare. In addition, the Cox FR LRT statistics have higher power than Cox BT LRT. The models and related test statistics can be useful in the whole genome and whole exome association studies. An age-related macular degeneration dataset was analyzed as an example. © 2016 WILEY PERIODICALS, INC.

  2. Gene-based Association Analysis for Censored Traits Via Fixed Effect Functional Regressions

    PubMed Central

    Fan, Ruzong; Wang, Yifan; Yan, Qi; Ding, Ying; Weeks, Daniel E.; Lu, Zhaohui; Ren, Haobo; Cook, Richard J; Xiong, Momiao; Swaroop, Anand; Chew, Emily Y.; Chen, Wei

    2015-01-01

    Summary Genetic studies of survival outcomes have been proposed and conducted recently, but statistical methods for identifying genetic variants that affect disease progression are rarely developed. Motivated by our ongoing real studies, we develop here Cox proportional hazard models using functional regression (FR) to perform gene-based association analysis of survival traits while adjusting for covariates. The proposed Cox models are fixed effect models where the genetic effects of multiple genetic variants are assumed to be fixed. We introduce likelihood ratio test (LRT) statistics to test for associations between the survival traits and multiple genetic variants in a genetic region. Extensive simulation studies demonstrate that the proposed Cox FR LRT statistics have well-controlled type I error rates. To evaluate power, we compare the Cox FR LRT with the previously developed burden test (BT) in a Cox model and sequence kernel association test (SKAT) which is based on mixed effect Cox models. The Cox FR LRT statistics have higher power than or similar power as Cox SKAT LRT except when 50%/50% causal variants had negative/positive effects and all causal variants are rare. In addition, the Cox FR LRT statistics have higher power than Cox BT LRT. The models and related test statistics can be useful in the whole genome and whole exome association studies. An age-related macular degeneration dataset was analyzed as an example. PMID:26782979

  3. Statistical framework for evaluation of climate model simulations by use of climate proxy data from the last millennium - Part 1: Theory

    NASA Astrophysics Data System (ADS)

    Sundberg, R.; Moberg, A.; Hind, A.

    2012-08-01

    A statistical framework for comparing the output of ensemble simulations from global climate models with networks of climate proxy and instrumental records has been developed, focusing on near-surface temperatures for the last millennium. This framework includes the formulation of a joint statistical model for proxy data, instrumental data and simulation data, which is used to optimize a quadratic distance measure for ranking climate model simulations. An essential underlying assumption is that the simulations and the proxy/instrumental series have a shared component of variability that is due to temporal changes in external forcing, such as volcanic aerosol load, solar irradiance or greenhouse gas concentrations. Two statistical tests have been formulated. Firstly, a preliminary test establishes whether a significant temporal correlation exists between instrumental/proxy and simulation data. Secondly, the distance measure is expressed in the form of a test statistic of whether a forced simulation is closer to the instrumental/proxy series than unforced simulations. The proposed framework allows any number of proxy locations to be used jointly, with different seasons, record lengths and statistical precision. The goal is to objectively rank several competing climate model simulations (e.g. with alternative model parameterizations or alternative forcing histories) by means of their goodness of fit to the unobservable true past climate variations, as estimated from noisy proxy data and instrumental observations.

  4. Probability distributions of molecular observables computed from Markov models. II. Uncertainties in observables and their time-evolution

    NASA Astrophysics Data System (ADS)

    Chodera, John D.; Noé, Frank

    2010-09-01

    Discrete-state Markov (or master equation) models provide a useful simplified representation for characterizing the long-time statistical evolution of biomolecules in a manner that allows direct comparison with experiments as well as the elucidation of mechanistic pathways for an inherently stochastic process. A vital part of meaningful comparison with experiment is the characterization of the statistical uncertainty in the predicted experimental measurement, which may take the form of an equilibrium measurement of some spectroscopic signal, the time-evolution of this signal following a perturbation, or the observation of some statistic (such as the correlation function) of the equilibrium dynamics of a single molecule. Without meaningful error bars (which arise from both approximation and statistical error), there is no way to determine whether the deviations between model and experiment are statistically meaningful. Previous work has demonstrated that a Bayesian method that enforces microscopic reversibility can be used to characterize the statistical component of correlated uncertainties in state-to-state transition probabilities (and functions thereof) for a model inferred from molecular simulation data. Here, we extend this approach to include the uncertainty in observables that are functions of molecular conformation (such as surrogate spectroscopic signals) characterizing each state, permitting the full statistical uncertainty in computed spectroscopic experiments to be assessed. We test the approach in a simple model system to demonstrate that the computed uncertainties provide a useful indicator of statistical variation, and then apply it to the computation of the fluorescence autocorrelation function measured for a dye-labeled peptide previously studied by both experiment and simulation.
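    A simplified sketch of propagating statistical uncertainty from observed transition counts to an equilibrium observable: each row of the transition matrix is sampled from its Dirichlet posterior and the stationary expectation of a per-state observable is recomputed. Unlike the Bayesian method the abstract describes, this simple scheme does not enforce microscopic reversibility; the counts and observable values are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Observed transition counts between three conformational states (hypothetical).
counts = np.array([[90,  8,  2],
                   [10, 70, 20],
                   [ 3, 15, 82]])

# Per-state observable, e.g., a surrogate spectroscopic signal level.
obs = np.array([0.1, 0.5, 0.9])

def equilibrium(T):
    """Stationary distribution of a row-stochastic matrix."""
    w, v = np.linalg.eig(T.T)
    p = np.real(v[:, np.argmax(np.real(w))])
    return p / p.sum()

# Sample transition matrices row-wise from Dirichlet posteriors (uniform
# prior) and propagate into the equilibrium expectation of the observable.
samples = []
for _ in range(2000):
    T = np.vstack([rng.dirichlet(row + 1) for row in counts])
    samples.append(equilibrium(T) @ obs)
samples = np.array(samples)
print(f"<obs> = {samples.mean():.3f} +/- {samples.std():.3f}")
```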

  5. Monte Carlo based statistical power analysis for mediation models: methods and software.

    PubMed

    Zhang, Zhiyong

    2014-12-01

    The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
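    The bmem package itself is in R; as a language-neutral sketch of the proposed approach, the following estimates power by Monte Carlo simulation with a percentile-bootstrap confidence interval for the product-of-coefficients mediation effect a*b. Effect sizes, sample size and replication counts are illustrative (and kept small so the sketch runs quickly).

```python
import numpy as np

rng = np.random.default_rng(8)

def simulate(n, a, b, c):
    """Simple mediation model: X -> M -> Y with direct effect c."""
    x = rng.standard_normal(n)
    m = a * x + rng.standard_normal(n)
    y = b * m + c * x + rng.standard_normal(n)
    return x, m, y

def ab_estimate(x, m, y):
    """Product-of-coefficients estimate of the mediation effect a*b."""
    a_hat = np.polyfit(x, m, 1)[0]
    X = np.column_stack([m, x, np.ones_like(x)])   # regress y on (m, x)
    b_hat = np.linalg.lstsq(X, y, rcond=None)[0][0]
    return a_hat * b_hat

def power(n, a, b, c, n_sim=300, n_boot=300, alpha=0.05):
    hits = 0
    for _ in range(n_sim):
        x, m, y = simulate(n, a, b, c)
        boots = np.empty(n_boot)
        for j in range(n_boot):                    # percentile bootstrap
            idx = rng.integers(0, n, n)
            boots[j] = ab_estimate(x[idx], m[idx], y[idx])
        lo, hi = np.percentile(boots, [100*alpha/2, 100*(1-alpha/2)])
        hits += (lo > 0) or (hi < 0)               # CI excludes zero
    return hits / n_sim

print("power at n=100:", power(n=100, a=0.3, b=0.3, c=0.1))
```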

  6. Statistical validation of normal tissue complication probability models.

    PubMed

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
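
    As a rough illustration of the permutation-testing idea, the following Python sketch fits an L1-penalized (LASSO-like) logistic model, scores it by cross-validated AUC, and compares that to the AUC distribution obtained after permuting the outcome labels. The data, penalty strength and fold counts are invented placeholders, not the paper's xerostomia data or exact protocol (which also applies repeated double cross-validation).

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)

    # Hypothetical stand-in data: 120 patients, 20 dose/volume features,
    # binary complication outcome loosely driven by the first two features.
    X = rng.standard_normal((120, 20))
    y = (X[:, 0] + 0.8 * X[:, 1] + rng.standard_normal(120) > 0).astype(int)

    # L1-penalized logistic regression, assessed by cross-validated AUC.
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    proba = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
    auc_obs = roc_auc_score(y, proba)

    # Permutation test: refit on shuffled outcomes to build the null distribution.
    n_perm, null_aucs = 200, []
    for _ in range(n_perm):
        y_perm = rng.permutation(y)
        p = cross_val_predict(model, X, y_perm, cv=5, method="predict_proba")[:, 1]
        null_aucs.append(roc_auc_score(y_perm, p))

    p_value = (1 + np.sum(np.array(null_aucs) >= auc_obs)) / (1 + n_perm)
    print(f"observed AUC = {auc_obs:.3f}, permutation p = {p_value:.3f}")
    ```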

  7. Effect of Internet-Based Cognitive Apprenticeship Model (i-CAM) on Statistics Learning among Postgraduate Students.

    PubMed

    Saadati, Farzaneh; Ahmad Tarmizi, Rohani; Mohd Ayub, Ahmad Fauzi; Abu Bakar, Kamariah

    2015-01-01

    Students' ability to use statistics, which is mathematical in nature, is a concern of educators, so embedding the pedagogical characteristics of learning within an e-learning system adds value by supporting the conventional method of learning mathematics. Many researchers emphasize the effectiveness of cognitive apprenticeship in learning and problem solving in the workplace. In a cognitive apprenticeship learning model, skills are learned within a community of practitioners through observation of modelling and then practice plus coaching. This study utilized an internet-based Cognitive Apprenticeship Model (i-CAM) in three phases and evaluated its effectiveness for improving statistics problem-solving performance among postgraduate students. The results showed that, compared to the conventional mathematics learning model, i-CAM significantly promoted students' problem-solving performance at the end of each phase. In addition, the combined differences in students' test scores were statistically significant after controlling for pre-test scores. The findings confirm the considerable value of i-CAM in improving statistics learning for non-specialized postgraduate students.

  8. Statistically accurate low-order models for uncertainty quantification in turbulent dynamical systems.

    PubMed

    Sapsis, Themistoklis P; Majda, Andrew J

    2013-08-20

    A framework for low-order predictive statistical modeling and uncertainty quantification in turbulent dynamical systems is developed here. These reduced-order, modified quasilinear Gaussian (ROMQG) algorithms apply to turbulent dynamical systems in which there is significant linear instability or linear nonnormal dynamics in the unperturbed system and energy-conserving nonlinear interactions that transfer energy from the unstable modes to the stable modes where dissipation occurs, resulting in a statistical steady state; such turbulent dynamical systems are ubiquitous in geophysical and engineering turbulence. The ROMQG method involves constructing a low-order, nonlinear, dynamical system for the mean and covariance statistics in the reduced subspace that has the unperturbed statistics as a stable fixed point and optimally incorporates the indirect effect of non-Gaussian third-order statistics for the unperturbed system in a systematic calibration stage. This calibration procedure is achieved through information involving only the mean and covariance statistics for the unperturbed equilibrium. The performance of the ROMQG algorithm is assessed on two stringent test cases: the 40-mode Lorenz 96 model mimicking midlatitude atmospheric turbulence and two-layer baroclinic models for high-latitude ocean turbulence with over 125,000 degrees of freedom. In the Lorenz 96 model, the ROMQG algorithm with just a single mode captures the transient response to random or deterministic forcing. For the baroclinic ocean turbulence models, the inexpensive ROMQG algorithm with 252 modes, less than 0.2% of the total, captures the nonlinear response of the energy, the heat flux, and even the one-dimensional energy and heat flux spectra.

  9. Nonparametric estimation and testing of fixed effects panel data models

    PubMed Central

    Henderson, Daniel J.; Carroll, Raymond J.; Li, Qi

    2009-01-01

    In this paper we consider the problem of estimating nonparametric panel data models with fixed effects. We introduce an iterative nonparametric kernel estimator. We also extend the estimation method to the case of a semiparametric partially linear fixed effects model. To determine whether a parametric, semiparametric or nonparametric model is appropriate, we propose test statistics to test between the three alternatives in practice. We further propose a test statistic for testing the null hypothesis of random effects against fixed effects in a nonparametric panel data regression model. Simulations are used to examine the finite sample performance of the proposed estimators and the test statistics. PMID:19444335

  10. A cloud and radiation model-based algorithm for rainfall retrieval from SSM/I multispectral microwave measurements

    NASA Technical Reports Server (NTRS)

    Xiang, Xuwu; Smith, Eric A.; Tripoli, Gregory J.

    1992-01-01

    A hybrid statistical-physical retrieval scheme is explored which combines a statistical approach with an approach based on the development of cloud-radiation models designed to simulate precipitating atmospheres. The algorithm employs the detailed microphysical information from a cloud model as input to a radiative transfer model which generates a cloud-radiation model database. Statistical procedures are then invoked to objectively generate an initial guess composite profile data set from the database. The retrieval algorithm has been tested for a tropical typhoon case using Special Sensor Microwave/Imager (SSM/I) data and has shown satisfactory results.

  11. Vortex dynamics and Lagrangian statistics in a model for active turbulence.

    PubMed

    James, Martin; Wilczek, Michael

    2018-02-14

    Cellular suspensions such as dense bacterial flows exhibit a turbulence-like phase under certain conditions. We study this phenomenon of "active turbulence" statistically using numerical tools. Following Wensink et al. (Proc. Natl. Acad. Sci. U.S.A. 109, 14308 (2012)), we model active turbulence by means of a generalized Navier-Stokes equation. Two-point velocity statistics of active turbulence, in both the Eulerian and the Lagrangian frames, are explored. We characterize the scale-dependent features of two-point statistics in this system and extend this statistical study with measurements of vortex dynamics. Our observations suggest that the large-scale statistics of active turbulence are close to Gaussian with sub-Gaussian tails.

  12. Identifiability of PBPK Models with Applications to Dimethylarsinic Acid Exposure

    EPA Science Inventory

    Any statistical model should be identifiable in order for estimates and tests using it to be meaningful. We consider statistical analysis of physiologically-based pharmacokinetic (PBPK) models in which parameters cannot be estimated precisely from available data, and discuss diff...

  13. Improved analyses using function datasets and statistical modeling

    Treesearch

    John S. Hogland; Nathaniel M. Anderson

    2014-01-01

    Raster modeling is an integral component of spatial analysis. However, conventional raster modeling techniques can require a substantial amount of processing time and storage space and have limited statistical functionality and machine learning algorithms. To address this issue, we developed a new modeling framework using C# and ArcObjects and integrated that framework...

  14. The Development of the Children's Services Statistical Neighbour Benchmarking Model. Final Report

    ERIC Educational Resources Information Center

    Benton, Tom; Chamberlain, Tamsin; Wilson, Rebekah; Teeman, David

    2007-01-01

    In April 2006, the Department for Education and Skills (DfES) commissioned the National Foundation for Educational Research (NFER) to conduct an independent external review in order to develop a single "statistical neighbour" model. This single model aimed to combine the key elements of the different models currently available and be…

  15. Investigating Students' Acceptance of a Statistics Learning Platform Using Technology Acceptance Model

    ERIC Educational Resources Information Center

    Song, Yanjie; Kong, Siu-Cheung

    2017-01-01

    The study aims at investigating university students' acceptance of a statistics learning platform to support the learning of statistics in a blended learning context. Three kinds of digital resources, which are simulations, online videos, and online quizzes, were provided on the platform. Premised on the technology acceptance model, we adopted a…

  16. Computational Modeling of Statistical Learning: Effects of Transitional Probability versus Frequency and Links to Word Learning

    ERIC Educational Resources Information Center

    Mirman, Daniel; Estes, Katharine Graf; Magnuson, James S.

    2010-01-01

    Statistical learning mechanisms play an important role in theories of language acquisition and processing. Recurrent neural network models have provided important insights into how these mechanisms might operate. We examined whether such networks capture two key findings in human statistical learning. In Simulation 1, a simple recurrent network…

  17. Statistical power of intervention analyses: simulation and empirical application to treated lumber prices

    Treesearch

    Jeffrey P. Prestemon

    2009-01-01

    Timber product markets are subject to large shocks deriving from natural disturbances and policy shifts. Statistical modeling of shocks is often done to assess their economic importance. In this article, I simulate the statistical power of univariate and bivariate methods of shock detection using time series intervention models. Simulations show that bivariate methods...

  18. A Mediation Model to Explain the Role of Mathematics Skills and Probabilistic Reasoning on Statistics Achievement

    ERIC Educational Resources Information Center

    Primi, Caterina; Donati, Maria Anna; Chiesi, Francesca

    2016-01-01

    Among the wide range of factors related to the acquisition of statistical knowledge, competence in basic mathematics, including basic probability, has received much attention. In this study, a mediation model was estimated to derive the total, direct, and indirect effects of mathematical competence on statistics achievement taking into account…

  19. Factors Influencing the Behavioural Intention to Use Statistical Software: The Perspective of the Slovenian Students of Social Sciences

    ERIC Educational Resources Information Center

    Brezavšcek, Alenka; Šparl, Petra; Žnidaršic, Anja

    2017-01-01

    The aim of the paper is to investigate the main factors influencing the adoption and continuous utilization of statistical software among university social sciences students in Slovenia. Based on the Technology Acceptance Model (TAM), a conceptual model was derived where five external variables were taken into account: statistical software…

  20. Predicting lettuce canopy photosynthesis with statistical and neural network models

    NASA Technical Reports Server (NTRS)

    Frick, J.; Precetti, C.; Mitchell, C. A.

    1998-01-01

    An artificial neural network (NN) and a statistical regression model were developed to predict canopy photosynthetic rates (Pn) for 'Waldman's Green' leaf lettuce (Lactuca sativa L.). All data used to develop and test the models were collected for crop stands grown hydroponically and under controlled-environment conditions. In the NN and regression models, canopy Pn was predicted as a function of three independent variables: shoot-zone CO2 concentration (600 to 1500 micromoles mol-1), photosynthetic photon flux (PPF) (600 to 1100 micromoles m-2 s-1), and canopy age (10 to 20 days after planting). The models were used to determine the combinations of CO2 and PPF setpoints required each day to maintain maximum canopy Pn. The statistical model (a third-order polynomial) predicted Pn more accurately than the simple NN (a three-layer, fully connected net). Over an 11-day validation period, the average percent difference between predicted and actual Pn was 12.3% and 24.6% for the statistical and NN models, respectively. Both models lost considerable accuracy when used to make relatively long-range Pn predictions (> or = 6 days into the future).
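
    A minimal Python sketch of the statistical-model side: fit a third-order polynomial regression of Pn on the three inputs and grid-search the CO2/PPF setpoints that maximize predicted Pn for a given canopy age. The synthetic response below is an assumption made for illustration; only the variable ranges come from the abstract.

    ```python
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(4)

    # Hypothetical data over the reported ranges: CO2 (600-1500 umol/mol),
    # PPF (600-1100 umol m-2 s-1), canopy age (10-20 days after planting).
    n = 300
    co2 = rng.uniform(600, 1500, n)
    ppf = rng.uniform(600, 1100, n)
    age = rng.uniform(10, 20, n)
    X = np.column_stack([co2, ppf, age])
    # Toy response with saturating behavior plus noise (not the real crop data).
    pn = (0.004 * co2 + 0.01 * ppf - 2e-6 * co2**2 - 4e-6 * ppf**2
          + 0.3 * age + rng.normal(0, 0.5, n))

    # Third-order polynomial regression, the model class named in the abstract.
    model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
    model.fit(X, pn)

    # Grid search over setpoints for a given canopy age to maximize predicted Pn.
    g_co2, g_ppf = np.meshgrid(np.linspace(600, 1500, 40),
                               np.linspace(600, 1100, 40))
    grid = np.column_stack([g_co2.ravel(), g_ppf.ravel(),
                            np.full(g_co2.size, 15.0)])
    best = grid[np.argmax(model.predict(grid))]
    print(f"day-15 setpoints maximizing predicted Pn: "
          f"CO2 = {best[0]:.0f}, PPF = {best[1]:.0f}")
    ```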

  1. Statistical label fusion with hierarchical performance models

    PubMed Central

    Asman, Andrew J.; Dagley, Alexander S.; Landman, Bennett A.

    2014-01-01

    Label fusion is a critical step in many image segmentation frameworks (e.g., multi-atlas segmentation) as it provides a mechanism for generalizing a collection of labeled examples into a single estimate of the underlying segmentation. In the multi-label case, typical label fusion algorithms treat all labels equally – fully neglecting the known, yet complex, anatomical relationships exhibited in the data. To address this problem, we propose a generalized statistical fusion framework using hierarchical models of rater performance. Building on the seminal work in statistical fusion, we reformulate the traditional rater performance model from a multi-tiered hierarchical perspective. This new approach provides a natural framework for leveraging known anatomical relationships and accurately modeling the types of errors that raters (or atlases) make within a hierarchically consistent formulation. Herein, we describe several contributions. First, we derive a theoretical advancement to the statistical fusion framework that enables the simultaneous estimation of multiple (hierarchical) performance models within the statistical fusion context. Second, we demonstrate that the proposed hierarchical formulation is highly amenable to the state-of-the-art advancements that have been made to the statistical fusion framework. Lastly, in an empirical whole-brain segmentation task we demonstrate substantial qualitative and significant quantitative improvement in overall segmentation accuracy. PMID:24817809

  2. Toward statistical modeling of saccadic eye-movement and visual saliency.

    PubMed

    Sun, Xiaoshuai; Yao, Hongxun; Ji, Rongrong; Liu, Xian-Ming

    2014-11-01

    In this paper, we present a unified statistical framework for modeling both saccadic eye movements and visual saliency. By analyzing the statistical properties of human eye fixations on natural images, we found that human attention is sparsely distributed and usually deployed to locations with abundant structural information. These observations inspired us to model saccadic behavior and visual saliency based on super-Gaussian component (SGC) analysis. Our model sequentially obtains SGCs using projection pursuit and generates eye movements by selecting the location with the maximum SGC response. Beyond simulating human saccadic behavior, we also demonstrate the effectiveness and robustness of our approach over state-of-the-art methods through extensive experiments on synthetic patterns and human eye fixation benchmarks. Several key issues in saliency modeling research, such as individual differences and the effects of scale and blur, are explored in this paper. Based on extensive qualitative and quantitative experimental results, we show the promising potential of statistical approaches for human behavior research.

  3. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment are discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
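
    The model's structure suggests a simple fitting recipe; here is a hedged Python sketch that fits a Weibull-type curve y(t) = ymax (1 - exp(-(t/λ)^n)) to a hypothetical saccharification time course and reads off λ. The data and exact parameterization are illustrative assumptions, not the paper's 96 datasets.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical saccharification time course: hours vs fractional glucose yield.
    t = np.array([2, 4, 8, 12, 24, 48, 72, 96], dtype=float)
    y = np.array([0.08, 0.15, 0.27, 0.36, 0.55, 0.72, 0.80, 0.84])

    def weibull_model(t, ymax, lam, n):
        """Weibull-type saccharification curve; lam is the characteristic time."""
        return ymax * (1.0 - np.exp(-(t / lam) ** n))

    (ymax, lam, n), _ = curve_fit(weibull_model, t, y, p0=[0.9, 24.0, 1.0])
    print(f"ymax = {ymax:.2f}, lambda = {lam:.1f} h, n = {n:.2f}")
    # A smaller lambda indicates faster overall hydrolysis, so lambda can be
    # used to compare different substrate/enzyme/condition combinations.
    ```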

  4. Statistical Models for the Analysis and Design of Digital Polymerase Chain Reaction (dPCR) Experiments.

    PubMed

    Dorazio, Robert M; Hunter, Margaret E

    2015-11-03

    Statistical methods for the analysis and design of experiments using digital PCR (dPCR) have received only limited attention and have been misused in many instances. To address this issue and to provide a more general approach to the analysis of dPCR data, we describe a class of statistical models for the analysis and design of experiments that require quantification of nucleic acids. These models are mathematically equivalent to generalized linear models of binomial responses that include a complementary, log-log link function and an offset that is dependent on the dPCR partition volume. These models are both versatile and easy to fit using conventional statistical software. Covariates can be used to specify different sources of variation in nucleic acid concentration, and a model's parameters can be used to quantify the effects of these covariates. For purposes of illustration, we analyzed dPCR data from different types of experiments, including serial dilution, evaluation of copy number variation, and quantification of gene expression. We also showed how these models can be used to help design dPCR experiments, as in selection of sample sizes needed to achieve desired levels of precision in estimates of nucleic acid concentration or to detect differences in concentration among treatments with prescribed levels of statistical power.
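
    Because the model class is a standard GLM, it can be sketched directly; the following Python example (using statsmodels) fits binomial dPCR counts with a complementary log-log link and an offset equal to the log partition volume, so that the intercept recovers the log concentration. All numbers are invented for illustration.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)

    # Hypothetical serial-dilution dPCR data: partition volume v (uL),
    # m partitions per reaction, k positive partitions.
    v, m = 0.00085, 20000
    dilution = np.repeat([1.0, 0.5, 0.25, 0.125], 3)
    lam = 200.0 * dilution              # true copies per microliter
    p = 1.0 - np.exp(-lam * v)          # P(partition is positive)
    k = rng.binomial(m, p)

    # Binomial GLM with complementary log-log link and offset log(v):
    # cloglog(p) = log(lambda) + log(v).
    endog = np.column_stack([k, m - k])
    exog = sm.add_constant(np.log(dilution))   # intercept + log dilution factor
    offset = np.full(len(k), np.log(v))
    fit = sm.GLM(endog, exog,
                 family=sm.families.Binomial(link=sm.families.links.CLogLog()),
                 offset=offset).fit()          # older versions spell it 'cloglog'

    print(fit.params)              # intercept ~ log(200); slope ~ 1 for ideal dilution
    print(np.exp(fit.params[0]))   # estimated stock concentration, copies/uL
    ```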

  5. Statistical Downscaling and Bias Correction of Climate Model Outputs for Climate Change Impact Assessment in the U.S. Northeast

    NASA Technical Reports Server (NTRS)

    Ahmed, Kazi Farzan; Wang, Guiling; Silander, John; Wilson, Adam M.; Allen, Jenica M.; Horton, Radley; Anyah, Richard

    2013-01-01

    Statistical downscaling can be used to efficiently downscale a large number of General Circulation Model (GCM) outputs to a fine temporal and spatial scale. To facilitate regional impact assessments, this study statistically downscales (to 1/8deg spatial resolution) and corrects the bias of daily maximum and minimum temperature and daily precipitation data from six GCMs and four Regional Climate Models (RCMs) for the northeast United States (US) using the Statistical Downscaling and Bias Correction (SDBC) approach. Based on these downscaled data from multiple models, five extreme indices were analyzed for the future climate to quantify future changes of climate extremes. For a subset of models and indices, results based on raw and bias corrected model outputs for the present-day climate were compared with observations, which demonstrated that bias correction is important not only for GCM outputs, but also for RCM outputs. For future climate, bias correction led to a higher level of agreements among the models in predicting the magnitude and capturing the spatial pattern of the extreme climate indices. We found that the incorporation of dynamical downscaling as an intermediate step does not lead to considerable differences in the results of statistical downscaling for the study domain.
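
    As a rough sketch of the bias-correction step, the following Python example applies empirical quantile mapping: model values are replaced by the observed values at the matching quantiles of the historical model distribution. The SDBC approach in the paper operates on daily gridded data with more care (seasonality, downscaling to 1/8 degree resolution); this shows only the core idea, on synthetic data.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Hypothetical daily temperatures: observations and a biased GCM run over
    # a common historical period, plus a future GCM projection.
    obs = rng.normal(12.0, 5.0, 5000)
    gcm_hist = rng.normal(9.5, 6.5, 5000)     # cold and over-dispersed bias
    gcm_future = rng.normal(11.0, 6.5, 5000)  # same bias, warmer climate

    def quantile_map(x, model_hist, observed, n_q=100):
        """Empirical quantile mapping: replace each model value by the observed
        value at the same quantile of the historical model distribution."""
        q = np.linspace(0.005, 0.995, n_q)
        src = np.quantile(model_hist, q)
        dst = np.quantile(observed, q)
        return np.interp(x, src, dst)

    corrected = quantile_map(gcm_future, gcm_hist, obs)
    print(f"raw future mean  {gcm_future.mean():5.2f}, std {gcm_future.std():.2f}")
    print(f"corrected mean   {corrected.mean():5.2f}, std {corrected.std():.2f}")
    ```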

  6. Implications of the methodological choices for hydrologic portrayals of climate change over the contiguous United States: Statistically downscaled forcing data and hydrologic models

    USGS Publications Warehouse

    Mizukami, Naoki; Clark, Martyn P.; Gutmann, Ethan D.; Mendoza, Pablo A.; Newman, Andrew J.; Nijssen, Bart; Livneh, Ben; Hay, Lauren E.; Arnold, Jeffrey R.; Brekke, Levi D.

    2016-01-01

    Continental-domain assessments of climate change impacts on water resources typically rely on statistically downscaled climate model outputs to force hydrologic models at a finer spatial resolution. This study examines the effects of four statistical downscaling methods [bias-corrected constructed analog (BCCA), bias-corrected spatial disaggregation applied at daily (BCSDd) and monthly scales (BCSDm), and asynchronous regression (AR)] on retrospective hydrologic simulations using three hydrologic models with their default parameters (the Community Land Model, version 4.0; the Variable Infiltration Capacity model, version 4.1.2; and the Precipitation–Runoff Modeling System, version 3.0.4) over the contiguous United States (CONUS). Biases of hydrologic simulations forced by statistically downscaled climate data relative to the simulation with observation-based gridded data are presented. Each statistical downscaling method produces different meteorological portrayals including precipitation amount, wet-day frequency, and the energy input (i.e., shortwave radiation), and their interplay affects estimations of precipitation partitioning between evapotranspiration and runoff, extreme runoff, and hydrologic states (i.e., snow and soil moisture). The analyses show that BCCA underestimates annual precipitation by as much as −250 mm, leading to unreasonable hydrologic portrayals over the CONUS for all models. Although the other three statistical downscaling methods produce a comparable precipitation bias ranging from −10 to 8 mm across the CONUS, BCSDd severely overestimates the wet-day fraction by up to 0.25, leading to different precipitation partitioning compared to the simulations with other downscaled data. Overall, the choice of downscaling method contributes to less spread in runoff estimates (by a factor of 1.5–3) than the choice of hydrologic model with use of the default parameters if BCCA is excluded.

  7. Poisson, Poisson-gamma and zero-inflated regression models of motor vehicle crashes: balancing statistical fit and theory.

    PubMed

    Lord, Dominique; Washington, Simon P; Ivan, John N

    2005-01-01

    There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the "excess" zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
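
    The model-comparison exercise is easy to sketch: simulate low-exposure Poisson counts (which produce many zeros with no dual-state process) and compare Poisson, negative binomial and zero-inflated Poisson fits by AIC. A hedged Python illustration with statsmodels follows; all parameters are invented, and the negative binomial dispersion is held fixed for brevity.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedPoisson

    rng = np.random.default_rng(7)

    # Hypothetical crash counts at 500 road segments: pure Poisson counts with
    # low exposure, which yields many zeros without any true dual-state process.
    n = 500
    aadt = rng.lognormal(6.0, 0.8, n)        # traffic exposure
    mu = np.exp(-7.0 + 0.8 * np.log(aadt))   # low expected counts
    y = rng.poisson(mu)
    X = sm.add_constant(np.log(aadt))

    models = {
        "Poisson": sm.GLM(y, X, family=sm.families.Poisson()).fit(),
        # NB family with fixed dispersion alpha=1, for brevity only:
        "NegBin":  sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit(),
        "ZIP":     ZeroInflatedPoisson(y, X).fit(disp=False),
    }
    for name, fit in models.items():
        print(f"{name:>8}: AIC = {fit.aic:8.1f}")
    print(f"observed share of zeros: {np.mean(y == 0):.2f}")
    ```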

  8. An empirical comparison of statistical tests for assessing the proportional hazards assumption of Cox's model.

    PubMed

    Ng'andu, N H

    1997-03-30

    In the analysis of survival data using the Cox proportional hazard (PH) model, it is important to verify that the explanatory variables analysed satisfy the proportional hazard assumption of the model. This paper presents results of a simulation study that compares five test statistics to check the proportional hazard assumption of Cox's model. The test statistics were evaluated under proportional hazards and the following types of departures from the proportional hazard assumption: increasing relative hazards; decreasing relative hazards; crossing hazards; diverging hazards, and non-monotonic hazards. The test statistics compared include those based on partitioning of failure time and those that do not require partitioning of failure time. The simulation results demonstrate that the time-dependent covariate test, the weighted residuals score test and the linear correlation test have equally good power for detection of non-proportionality in the varieties of non-proportional hazards studied. Using illustrative data from the literature, these test statistics performed similarly.
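
    One of the compared tests, the scaled Schoenfeld-residual (weighted residuals score) test, is available off the shelf; below is a hedged Python sketch using lifelines on synthetic crossing-hazards data (two groups drawn from Weibull distributions with different shape parameters). The data-generating choices are assumptions made for illustration.

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter
    from lifelines.statistics import proportional_hazard_test

    rng = np.random.default_rng(8)

    # Crossing hazards by construction: Weibull shape 0.8 vs 1.7 by group.
    n = 400
    x = rng.binomial(1, 0.5, n)
    t = rng.weibull(0.8 + 0.9 * x)
    df = pd.DataFrame({"T": t, "E": np.ones(n, dtype=int), "x": x})

    cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")

    # Scaled Schoenfeld-residual test of proportionality; 'rank' transforms
    # the time axis before computing the residual-time correlation.
    result = proportional_hazard_test(cph, df, time_transform="rank")
    result.print_summary()
    ```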

  9. Evaluating Structural Equation Models for Categorical Outcomes: A New Test Statistic and a Practical Challenge of Interpretation.

    PubMed

    Monroe, Scott; Cai, Li

    2015-01-01

    This research is concerned with two topics in assessing model fit for categorical data analysis. The first topic involves the application of a limited-information overall test, introduced in the item response theory literature, to structural equation modeling (SEM) of categorical outcome variables. Most popular SEM test statistics assess how well the model reproduces estimated polychoric correlations. In contrast, limited-information test statistics assess how well the underlying categorical data are reproduced. Here, the recently introduced C2 statistic of Cai and Monroe (2014) is applied. The second topic concerns how the root mean square error of approximation (RMSEA) fit index can be affected by the number of categories in the outcome variable. This relationship creates challenges for interpreting RMSEA. While the two topics initially appear unrelated, they may conveniently be studied in tandem since RMSEA is based on an overall test statistic, such as C2. The results are illustrated with an empirical application to data from a large-scale educational survey.
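
    The RMSEA-from-test-statistic relationship is a one-line formula, shown here in a small Python sketch using the common definition RMSEA = sqrt(max(T - df, 0) / (df (N - 1))); the numbers are invented to show how the same statistic yields different RMSEA values as the degrees of freedom change.

    ```python
    import math

    def rmsea(stat, df, n):
        """RMSEA from an overall fit statistic (e.g., C2)."""
        return math.sqrt(max(stat - df, 0.0) / (df * (n - 1)))

    # Same statistic, different degrees of freedom: RMSEA's scale shifts with
    # model size (and, in the paper's setting, with the number of categories).
    print(f"{rmsea(stat=250.0, df=200, n=1000):.3f}")  # ~0.016
    print(f"{rmsea(stat=250.0, df=100, n=1000):.3f}")  # ~0.039
    ```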

  10. Statistical power to detect violation of the proportional hazards assumption when using the Cox regression model.

    PubMed

    Austin, Peter C

    2018-01-01

    The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.
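
    The simulation design can be sketched compactly: generate data that violate proportionality, apply a PH test, and count rejections. The Python sketch below (using lifelines, reusing the crossing-hazards design from the previous sketch, with replication counts chosen for speed) is an illustrative stand-in for the paper's Monte Carlo setup, not a reproduction of it.

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter
    from lifelines.statistics import proportional_hazard_test

    rng = np.random.default_rng(9)

    def simulate_nonph(n):
        """Two-group data with crossing hazards (different Weibull shapes)."""
        x = rng.binomial(1, 0.5, n)
        return pd.DataFrame({"T": rng.weibull(0.8 + 0.9 * x), "E": 1, "x": x})

    def rejects(df, alpha=0.05):
        cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
        res = proportional_hazard_test(cph, df, time_transform="rank")
        p = float(np.asarray(res.p_value).ravel()[0])  # single covariate
        return p < alpha

    n_rep = 100
    power = np.mean([rejects(simulate_nonph(200)) for _ in range(n_rep)])
    print(f"estimated power with 200 observed events: {power:.2f}")
    ```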

  11. Statistical power to detect violation of the proportional hazards assumption when using the Cox regression model

    PubMed Central

    Austin, Peter C.

    2017-01-01

    The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest. PMID:29321694

  12. Selecting the right statistical model for analysis of insect count data by using information theoretic measures.

    PubMed

    Sileshi, G

    2006-10-01

    Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, the negative binomial distribution and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model after correction for overdispersion and the standard negative binomial distribution model provided better description of the probability distribution of seven out of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common data phenomena in insect counts. If not properly modelled, these properties can invalidate the normal distribution assumptions resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.

  13. Development of a funding, cost, and spending model for satellite projects

    NASA Technical Reports Server (NTRS)

    Johnson, Jesse P.

    1989-01-01

    The need for a predictive budget/funding model is obvious. The current models used by the Resource Analysis Office (RAO) are used to predict the total costs of satellite projects. An effort was conducted to extend the modeling capabilities from total budget analysis to analysis of total budget and budget outlays over time. A statistically based, data-driven methodology was used to derive and develop the model. The budget data for the last 18 GSFC-sponsored satellite projects were analyzed and used to build a funding model that describes the historical spending patterns. The raw data consisted of dollars spent in each specific year and their 1989 dollar equivalents. These data were converted to the standard format used by the RAO group and placed in a database. A simple statistical analysis was performed to calculate the gross statistics associated with project length and project cost and the conditional statistics on project length and project cost. The modeling approach is derived from the theory of embedded statistics, which states that properly analyzed data will produce the underlying generating function. The process of funding large-scale projects over extended periods of time is described by life cycle cost models (LCCM). The data were analyzed to find a model in the generic form of an LCCM. The model developed is based on a Weibull function whose parameters are found by both nonlinear optimization and nonlinear regression. In order to use this model, it is necessary to transform the problem from a dollar/time space to a percentage-of-total-budget/time space. This transformation is equivalent to moving to a probability space. By using the basic rules of probability, the validity of both the optimization and the regression steps is ensured. This statistically significant model is then integrated and inverted. The resulting output represents a project schedule which relates the amount of money spent to the percentage of project completion.
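
    A hedged Python sketch of the core transformation: fit a Weibull cumulative-spend curve in percentage-of-budget space, then invert it to obtain a schedule that maps a budget fraction to a point in time. The spending profile below is invented for illustration, not GSFC data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical cumulative spending profile: fraction of total budget spent
    # versus fraction of project schedule elapsed (percentage-of-budget space).
    t = np.linspace(0.1, 1.0, 10)
    spent = np.array([0.02, 0.08, 0.18, 0.32, 0.47, 0.62, 0.75, 0.85, 0.93, 0.98])

    def weibull_cdf(t, lam, n):
        """Cumulative fraction of budget spent by normalized time t."""
        return 1.0 - np.exp(-(t / lam) ** n)

    (lam, n), _ = curve_fit(weibull_cdf, t, spent, p0=[0.5, 2.0])

    # Inverting the fitted curve gives a schedule: the time by which a given
    # fraction p of the budget should have been spent.
    def time_at_fraction(p, lam, n):
        return lam * (-np.log(1.0 - p)) ** (1.0 / n)

    print(f"lambda = {lam:.2f}, n = {n:.2f}")
    print(f"half the budget spent by t = {time_at_fraction(0.5, lam, n):.2f}")
    ```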

  14. Statistical methods for the beta-binomial model in teratology.

    PubMed Central

    Yamamoto, E; Yanagimoto, T

    1994-01-01

    The beta-binomial model is widely used for analyzing teratological data involving littermates. Recent developments in statistical analyses of teratological data are briefly reviewed with emphasis on the model. For statistical inference of the parameters in the beta-binomial distribution, separation of the likelihood yields a useful form of likelihood inference. This leads to reduced biases of the estimators and improved accuracy of the empirical significance levels of tests. Separate inference of the parameters can be conducted in a unified way. PMID:8187716

  15. The lz(p)* Person-Fit Statistic in an Unfolding Model Context.

    PubMed

    Tendeiro, Jorge N

    2017-01-01

    Although person-fit analysis has a long-standing tradition within item response theory, it has been applied in combination with dominance response models almost exclusively. In this article, a popular log likelihood-based parametric person-fit statistic under the framework of the generalized graded unfolding model is used. Results from a simulation study indicate that the person-fit statistic performed relatively well in detecting midpoint response style patterns and not so well in detecting extreme response style patterns.
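
    The classical dichotomous lz statistic underlying this family is easy to compute; the sketch below implements it in Python for illustration. The paper's lz(p)* variant extends this idea to the generalized graded unfolding model with corrected trait estimates, which is not reproduced here.

    ```python
    import numpy as np

    def lz_statistic(u, p):
        """Standardized log-likelihood person-fit statistic lz for a dichotomous
        response pattern u given model probabilities p of endorsing each item."""
        u, p = np.asarray(u, float), np.asarray(p, float)
        logit = np.log(p / (1 - p))
        l0 = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))
        mean = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
        var = np.sum(p * (1 - p) * logit**2)
        return (l0 - mean) / np.sqrt(var)

    # Two hypothetical 10-item patterns under the same model probabilities:
    p = np.array([0.9, 0.85, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1])
    typical = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])   # consistent with p
    aberrant = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # reversed pattern
    print(f"lz typical  = {lz_statistic(typical, p):+.2f}")   # near zero
    print(f"lz aberrant = {lz_statistic(aberrant, p):+.2f}")  # strongly negative
    ```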

  16. Statistical methods and neural network approaches for classification of data from multiple sources

    NASA Technical Reports Server (NTRS)

    Benediktsson, Jon Atli; Swain, Philip H.

    1990-01-01

    Statistical methods for classification of data from multiple data sources are investigated and compared to neural network models. A problem with using conventional multivariate statistical approaches for classification of data of multiple types is in general that a multivariate distribution cannot be assumed for the classes in the data sources. Another common problem with statistical classification methods is that the data sources are not equally reliable. This means that the data sources need to be weighted according to their reliability but most statistical classification methods do not have a mechanism for this. This research focuses on statistical methods which can overcome these problems: a method of statistical multisource analysis and consensus theory. Reliability measures for weighting the data sources in these methods are suggested and investigated. Secondly, this research focuses on neural network models. The neural networks are distribution free since no prior knowledge of the statistical distribution of the data is needed. This is an obvious advantage over most statistical classification methods. The neural networks also automatically take care of the problem involving how much weight each data source should have. On the other hand, their training process is iterative and can take a very long time. Methods to speed up the training procedure are introduced and investigated. Experimental results of classification using both neural network models and statistical methods are given, and the approaches are compared based on these results.

  17. Unified risk analysis of fatigue failure in ductile alloy components during all three stages of fatigue crack evolution process.

    PubMed

    Patankar, Ravindra

    2003-10-01

    Statistical fatigue life of a ductile alloy specimen is traditionally divided into three stages, namely, crack nucleation, small crack growth, and large crack growth. Crack nucleation and small crack growth show wide variation and hence a big spread on the cycles versus crack length graph. Large crack growth shows relatively less variation. Therefore, different models are fitted to the different stages of the fatigue evolution process, thus treating different stages as different phenomena. With these independent models, it is impossible to predict one phenomenon based on the information available about the other phenomenon. Experimentally, it is easier to carry out crack length measurements of large cracks compared to nucleating cracks and small cracks. Thus, it is easier to collect statistical data for large crack growth compared to the painstaking effort it would take to collect statistical data for crack nucleation and small crack growth. This article presents a fracture mechanics-based stochastic model of fatigue crack growth in ductile alloys that are commonly encountered in mechanical structures and machine components. The model has been validated by Ray (1998) for crack propagation against various statistical fatigue data. Based on the model, this article proposes a technique to predict statistical information of fatigue crack nucleation and small crack growth properties that uses the statistical properties of large crack growth under constant amplitude stress excitation. The statistical properties of large crack growth under constant amplitude stress excitation can be obtained via experiments.

  18. Incorporating GIS and remote sensing for census population disaggregation

    NASA Astrophysics Data System (ADS)

Wu, Shuo-Sheng 'Derek'

    Census data are the primary source of demographic data for a variety of research and applications. For confidentiality and administrative purposes, census data are usually released to the public in aggregated areal units. In the United States, the smallest census unit is the census block. Due to data aggregation, users of census data may have problems in visualizing population distribution within census blocks and estimating population counts for areas not coinciding with census block boundaries. The main purpose of this study is to develop methodology for estimating sub-block areal populations and assessing the estimation errors. The City of Austin, Texas was used as a case study area. Based on tax parcel boundaries and parcel attributes derived from ancillary GIS and remote sensing data, detailed urban land use classes were first classified using a per-field approach. After that, statistical models by land use class were built to infer population density from other predictor variables, including four census demographic statistics (the Hispanic percentage, the married percentage, the unemployment rate, and per capita income) and three physical variables derived from remote sensing images and building footprints vector data (a landscape heterogeneity statistic, a building pattern statistic, and a building volume statistic). In addition to statistical models, deterministic models were proposed to directly infer populations from building volumes and three housing statistics: the average space per housing unit, the housing unit occupancy rate, and the average household size. After the population models were derived or proposed, how well they predict populations for another set of sample blocks was assessed. The results show that the deterministic models were more accurate than the statistical models. Further, by simulating the base unit for modeling from aggregated blocks, I assessed how well the deterministic models estimate sub-unit-level populations. I also assessed the aggregation effects and the rescaling effects on sub-unit estimates. Lastly, from another set of mixed-land-use sample blocks, a mixed-land-use model was derived and compared with a residential-land-use model. The results of per-field land use classification are satisfactory, with a Kappa statistic of 0.747. Model assessments by land use show that population estimates for multi-family land use areas have higher errors than those for single-family land use areas, and population estimates for mixed land use areas have higher errors than those for residential land use areas. The assessments of sub-unit estimates using a simulation approach indicate that smaller areas show higher estimation errors, that estimation errors do not relate to the base unit size, and that rescaling improves all levels of sub-unit estimates.
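
    The deterministic model described above reduces to simple arithmetic; a tiny Python sketch with invented parcel-level numbers follows.

    ```python
    # Deterministic population estimate for a residential parcel, following the
    # structure described above; all numbers are illustrative assumptions.
    building_volume_m3 = 9000.0   # from building footprints x heights
    space_per_unit_m3 = 450.0     # average space per housing unit
    occupancy_rate = 0.93         # housing unit occupancy rate
    household_size = 2.4          # average persons per occupied unit

    housing_units = building_volume_m3 / space_per_unit_m3
    population = housing_units * occupancy_rate * household_size
    print(f"{housing_units:.0f} units -> estimated population {population:.0f}")
    ```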

  19. New powerful statistics for alignment-free sequence comparison under a pattern transfer model.

    PubMed

    Liu, Xuemei; Wan, Lin; Li, Jing; Reinert, Gesine; Waterman, Michael S; Sun, Fengzhu

    2011-09-07

    Alignment-free sequence comparison is widely used for comparing gene regulatory regions and for identifying horizontally transferred genes. Recent studies on the power of a widely used alignment-free comparison statistic D2 and its variants D2* and D2s showed that their power approximates a limit smaller than 1 as the sequence length tends to infinity under a pattern transfer model. We develop new alignment-free statistics based on D2, D2* and D2s by comparing local sequence pairs and then summing over all the local sequence pairs of certain length. We show that the new statistics are much more powerful than the corresponding statistics and the power tends to 1 as the sequence length tends to infinity under the pattern transfer model. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. New Powerful Statistics for Alignment-free Sequence Comparison Under a Pattern Transfer Model

    PubMed Central

    Liu, Xuemei; Wan, Lin; Li, Jing; Reinert, Gesine; Waterman, Michael S.; Sun, Fengzhu

    2011-01-01

    Alignment-free sequence comparison is widely used for comparing gene regulatory regions and for identifying horizontally transferred genes. Recent studies on the power of a widely used alignment-free comparison statistic D2 and its variants D2∗ and D2s showed that their power approximates a limit smaller than 1 as the sequence length tends to infinity under a pattern transfer model. We develop new alignment-free statistics based on D2, D2∗ and D2s by comparing local sequence pairs and then summing over all the local sequence pairs of certain length. We show that the new statistics are much more powerful than the corresponding statistics and the power tends to 1 as the sequence length tends to infinity under the pattern transfer model. PMID:21723298
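
    For readers who want the baseline statistics in code form, here is a hedged Python sketch of D2 and the standardized D2s for two sequences, using observed letter frequencies for the null word probabilities. The paper's new statistics additionally sum these quantities over local sequence-pair windows, which is omitted here.

    ```python
    import numpy as np
    from itertools import product
    from collections import Counter

    def kmer_counts(seq, k):
        """Counts of all overlapping k-mers in a sequence, in fixed ACGT order."""
        c = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
        return np.array([c[''.join(w)] for w in product('ACGT', repeat=k)], float)

    def d2_statistics(seq_x, seq_y, k=3):
        """D2 and the standardized D2s of two sequences under an i.i.d. null."""
        X, Y = kmer_counts(seq_x, k), kmer_counts(seq_y, k)

        def expected(seq):
            # expected k-mer counts from observed single-letter frequencies
            n = len(seq)
            f = {a: seq.count(a) / n for a in 'ACGT'}
            pw = np.array([np.prod([f[a] for a in w])
                           for w in product('ACGT', repeat=k)])
            return (n - k + 1) * pw

        Xc, Yc = X - expected(seq_x), Y - expected(seq_y)
        d2 = float(X @ Y)
        d2s = float(np.sum(Xc * Yc / np.sqrt(Xc**2 + Yc**2 + 1e-12)))
        return d2, d2s

    rng = np.random.default_rng(10)
    s1 = ''.join(rng.choice(list('ACGT'), 500))
    s2 = ''.join(rng.choice(list('ACGT'), 500))
    print(d2_statistics(s1, s2))
    ```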

  1. ICD-11 and DSM-5 personality trait domains capture categorical personality disorders: Finding a common ground.

    PubMed

    Bach, Bo; Sellbom, Martin; Skjernov, Mathias; Simonsen, Erik

    2018-05-01

    The five personality disorder trait domains in the proposed International Classification of Diseases, 11th edition and the Diagnostic and Statistical Manual of Mental Disorders, 5th edition are comparable in terms of Negative Affectivity, Detachment, Antagonism/Dissociality and Disinhibition. However, the International Classification of Diseases, 11th edition model includes a separate domain of Anankastia, whereas the Diagnostic and Statistical Manual of Mental Disorders, 5th edition model includes an additional domain of Psychoticism. This study examined associations of International Classification of Diseases, 11th edition and Diagnostic and Statistical Manual of Mental Disorders, 5th edition trait domains, simultaneously, with categorical personality disorders. Psychiatric outpatients ( N = 226) were administered the Structured Clinical Interview for DSM-IV Axis II Personality Disorders Interview and the Personality Inventory for DSM-5. International Classification of Diseases, 11th edition and Diagnostic and Statistical Manual of Mental Disorders, 5th edition trait domain scores were obtained using pertinent scoring algorithms for the Personality Inventory for DSM-5. Associations between categorical personality disorders and trait domains were examined using correlation and multiple regression analyses. Both the International Classification of Diseases, 11th edition and the Diagnostic and Statistical Manual of Mental Disorders, 5th edition domain models showed relevant continuity with categorical personality disorders and captured a substantial amount of their information. As expected, the International Classification of Diseases, 11th edition model was superior in capturing obsessive-compulsive personality disorder, whereas the Diagnostic and Statistical Manual of Mental Disorders, 5th edition model was superior in capturing schizotypal personality disorder. These preliminary findings suggest that little information is 'lost' in a transition to trait domain models and potentially adds to narrowing the gap between Diagnostic and Statistical Manual of Mental Disorders, 5th edition and the proposed International Classification of Diseases, 11th edition model. Accordingly, the International Classification of Diseases, 11th edition and Diagnostic and Statistical Manual of Mental Disorders, 5th edition domain models may be used to delineate one another as well as features of familiar categorical personality disorder types. A preliminary category-to-domain 'cross walk' is provided in the article.

  2. Quantum Monte Carlo for atoms and molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnett, R.N.

    1989-11-01

    The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy eigenstates for 1-4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H2, LiH, Li2, and H2O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li2, and H2O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90-100% of the correlation energy is obtained) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.

  3. Observation of millimeter-wave oscillations from resonant tunneling diodes and some theoretical considerations of ultimate frequency limits

    NASA Technical Reports Server (NTRS)

    Sollner, T. C. L. G.; Brown, E. R.; Goodhue, W. D.; Le, H. Q.

    1987-01-01

    Recent observations of oscillation frequencies up to 56 GHz in resonant tunneling structures are discussed in relation to calculations by several authors of the ultimate frequency limits of these devices. It is found that calculations relying on the Wentzel-Kramers-Brillouin (WKB) approximation give limits well below the observed oscillation frequencies. Two other techniques for calculating the upper frequency limit were found to give more reasonable results. One method employs the solution of the time-dependent Schroedinger equation obtained by Kundrotas and Dargys (1986); the other uses the energy width of the transmission function for electrons through the double-barrier structure. This last technique is believed to be the most accurate since it is based on general results for the lifetime of any resonant state. It gives frequency limits on the order of 1 THz for two recently fabricated structures. It appears that the primary limitation of the oscillation frequency for double-barrier resonant-tunneling diodes is imposed by intrinsic device circuit parameters and by the transit time of the depletion layer rather than by time delays encountered in the double-barrier region.

  4. The Effect on the 8th Grade Students' Attitude towards Statistics of Project Based Learning

    ERIC Educational Resources Information Center

    Koparan, Timur; Güven, Bülent

    2014-01-01

    This study investigates the effect of the project based learning approach on 8th grade students' attitude towards statistics. With this aim, an attitude scale towards statistics was developed. Quasi-experimental research model was used in this study. Following this model in the control group the traditional method was applied to teach statistics…

  5. Secondary Statistical Modeling with the National Assessment of Adult Literacy: Implications for the Design of the Background Questionnaire. Working Paper Series.

    ERIC Educational Resources Information Center

    Kaplan, David

    This paper offers recommendations to the National Center for Education Statistics (NCES) on the development of the background questionnaire for the National Assessment of Adult Literacy (NAAL). The recommendations are from the viewpoint of a researcher interested in applying sophisticated statistical models to address important issues in adult…

  6. A Two-Tiered Model for Analyzing Library Web Site Usage Statistics, Part 1: Web Server Logs.

    ERIC Educational Resources Information Center

    Cohen, Laura B.

    2003-01-01

    Proposes a two-tiered model for analyzing web site usage statistics for academic libraries: one tier for library administrators that analyzes measures indicating library use, and a second tier for web site managers that analyzes measures aiding in server maintenance and site design. Discusses the technology of web site usage statistics, and…

  7. Performance of Bootstrapping Approaches To Model Test Statistics and Parameter Standard Error Estimation in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Nevitt, Jonathan; Hancock, Gregory R.

    2001-01-01

    Evaluated the bootstrap method under varying conditions of nonnormality, sample size, model specification, and number of bootstrap samples drawn from the resampling space. Results for the bootstrap suggest the resampling-based method may be conservative in its control over model rejections, thus having an impact on the statistical power associated…

  8. Modelling Complexity: Making Sense of Leadership Issues in 14-19 Education

    ERIC Educational Resources Information Center

    Briggs, Ann R. J.

    2008-01-01

    Modelling of statistical data is a well established analytical strategy. Statistical data can be modelled to represent, and thereby predict, the forces acting upon a structure or system. For the rapidly changing systems in the world of education, modelling enables the researcher to understand, to predict and to enable decisions to be based upon…

  9. Genomic similarity and kernel methods I: advancements by building on mathematical and statistical foundations.

    PubMed

    Schaid, Daniel J

    2010-01-01

    Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
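
    The kernel requirement is easy to make concrete: build a similarity matrix with a known kernel function and verify positive semidefiniteness numerically. A minimal Python sketch with invented genotype data follows.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Hypothetical genomic data: 30 subjects x 100 markers (0/1/2 genotype counts).
    G = rng.integers(0, 3, size=(30, 100)).astype(float)

    def rbf_kernel(A, gamma=0.01):
        """Gaussian (RBF) similarity kernel: larger values mean more similar pairs."""
        sq = np.sum(A**2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2 * A @ A.T
        return np.exp(-gamma * d2)

    K = rbf_kernel(G)

    # A valid kernel must yield a positive semidefinite matrix over all pairs;
    # checking the smallest eigenvalue makes that requirement concrete.
    eigmin = np.linalg.eigvalsh(K).min()
    print(f"smallest eigenvalue = {eigmin:.2e} (PSD if >= 0, up to rounding)")
    ```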

  10. Moving in Parallel Toward a Modern Modeling Epistemology: Bayes Factors and Frequentist Modeling Methods.

    PubMed

    Rodgers, Joseph Lee

    2016-01-01

    The Bayesian-frequentist debate typically portrays these statistical perspectives as opposing views. However, both Bayesian and frequentist statisticians have expanded their epistemological basis away from a singular focus on the null hypothesis, to a broader perspective involving the development and comparison of competing statistical/mathematical models. For frequentists, statistical developments such as structural equation modeling and multilevel modeling have facilitated this transition. For Bayesians, the Bayes factor has facilitated this transition. The Bayes factor is treated in articles within this issue of Multivariate Behavioral Research. The current presentation provides brief commentary on those articles and more extended discussion of the transition toward a modern modeling epistemology. In certain respects, Bayesians and frequentists share common goals.

  11. Estimating regional plant biodiversity with GIS modelling

    Treesearch

    Louis R. Iverson; Anantha M. Prasad; Anantha M. Prasad

    1998-01-01

    We analyzed a statewide species database together with a county-level geographic information system to build a model based on well-surveyed areas to estimate species richness in less surveyed counties. The model involved GIS (Arc/Info) and statistics (S-PLUS), including spatial statistics (S+SpatialStats).

  12. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

    Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and to differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead-times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.
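
    The paper's two precise definitions of tau are not reproduced in the abstract; one plausible variance-ratio reading, offered here only as a hedged sketch, is:

    ```latex
    % Hedged sketch only: e_model and e_ic denote the model-error and
    % initial-condition-error contributions to the forecast error.
    \tau \;=\;
      \frac{\langle \| e_{\mathrm{model}} \|^{2} \rangle}
           {\langle \| e_{\mathrm{model}} \|^{2} \rangle
            + \langle \| e_{\mathrm{ic}} \|^{2} \rangle}
    ```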

  13. Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment.

    PubMed

    Berkes, Pietro; Orbán, Gergo; Lengyel, Máté; Fiser, József

    2011-01-07

    The brain maintains internal models of its environment to interpret sensory inputs and to prepare actions. Although behavioral studies have demonstrated that these internal models are optimally adapted to the statistics of the environment, the neural underpinning of this adaptation is unknown. Using a Bayesian model of sensory cortical processing, we related stimulus-evoked and spontaneous neural activities to inferences and prior expectations in an internal model and predicted that they should match if the model is statistically optimal. To test this prediction, we analyzed visual cortical activity of awake ferrets during development. Similarity between spontaneous and evoked activities increased with age and was specific to responses evoked by natural scenes. This demonstrates the progressive adaptation of internal models to the statistics of natural stimuli at the neural level.

  14. Probabilistic Mesomechanical Fatigue Model

    NASA Technical Reports Server (NTRS)

    Tryon, Robert G.

    1997-01-01

    A probabilistic mesomechanical fatigue life model is proposed to link microstructural material heterogeneities to the statistical scatter in the macrostructural response. The macrostructure is modeled as an ensemble of microelements. Cracks nucleate within the microelements and grow from the microelements to final fracture. Variations of the microelement properties are defined using statistical parameters. A micromechanical slip band decohesion model is used to determine the crack nucleation life and size. A crack tip opening displacement model is used to determine the small crack growth life and size. The Paris law is used to determine the long crack growth life. The models are combined in a Monte Carlo simulation to determine the statistical distribution of total fatigue life for the macrostructure. The modeled response is compared to trends in experimental observations from the literature.
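
    As a hedged illustration of the long-crack (Paris law) stage only — the paper also models nucleation and small-crack growth — a Monte Carlo sketch propagating microstructural scatter into a life distribution; all material constants below are hypothetical and assume stress in MPa and crack length in metres:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_draws = 10_000

    # Hypothetical microstructural scatter: initial crack size a0 (m) and
    # Paris coefficient C vary from microelement to microelement.
    a0 = rng.lognormal(mean=np.log(20e-6), sigma=0.3, size=n_draws)
    C = rng.lognormal(mean=np.log(1e-11), sigma=0.2, size=n_draws)
    m = 3.0              # Paris exponent (kept deterministic here)
    dsigma = 200.0       # stress range in MPa, geometry factor Y = 1
    a_crit = 5e-3        # final crack size (m)

    # Closed-form integration of da/dN = C * dK**m with dK = dsigma*sqrt(pi*a),
    # from a0 to a_crit (valid for m != 2):
    pref = C * (dsigma * np.sqrt(np.pi)) ** m
    N = (a0 ** (1 - m / 2) - a_crit ** (1 - m / 2)) / ((m / 2 - 1) * pref)

    print(f"median life: {np.median(N):.3g} cycles; "
          f"10th-90th pct: {np.percentile(N, 10):.3g}-{np.percentile(N, 90):.3g}")
    ```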

  15. Modelling the effect of structural QSAR parameters on skin penetration using genetic programming

    NASA Astrophysics Data System (ADS)

    Chung, K. K.; Do, D. Q.

    2010-09-01

    In order to model relationships between chemical structures and biological effects in quantitative structure-activity relationship (QSAR) data, an alternative artificial-intelligence technique, genetic programming (GP), was investigated and compared to traditional statistical methods. GP, whose primary advantage is that it generates mathematical equations, was employed to model QSAR data and to identify the most important molecular descriptors in the data. The models produced by GP agreed with the statistical results, and the most predictive GP models were significantly improved compared to the statistical models, as assessed using ANOVA. Recently, artificial intelligence techniques have been applied widely to analyse QSAR data. With the capability of generating mathematical equations, GP can be considered an effective and efficient method for modelling QSAR data.
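
    As a toy illustration of the idea (not the authors' implementation), a mutation-only symbolic-regression sketch in the spirit of GP, evolving expression trees over hypothetical molecular descriptors; all names, data, and constants are invented:

    ```python
    import operator
    import random

    random.seed(0)
    OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

    def rand_expr(depth=0):
        # Leaves are descriptor indices (ints) or constants (floats).
        if depth >= 3 or random.random() < 0.4:
            return random.choice([0, 1, random.uniform(-2.0, 2.0)])
        return (random.choice(list(OPS)), rand_expr(depth + 1), rand_expr(depth + 1))

    def evaluate(expr, x):
        if isinstance(expr, tuple):
            op, lhs, rhs = expr
            return OPS[op](evaluate(lhs, x), evaluate(rhs, x))
        return x[expr] if isinstance(expr, int) else expr

    def mutate(expr, p=0.25):
        # Replace a random subtree with a freshly grown one.
        if random.random() < p or not isinstance(expr, tuple):
            return rand_expr(depth=1)
        op, lhs, rhs = expr
        return (op, mutate(lhs, p), mutate(rhs, p))

    # Toy QSAR data: two hypothetical descriptors and a known 'activity'.
    X = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(40)]
    y = [1.5 * xi[0] - xi[1] + 0.5 for xi in X]

    def sse(expr):
        total = 0.0
        for xi, yi in zip(X, y):
            d = evaluate(expr, xi) - yi
            if d != d:                 # NaN guard for degenerate trees
                return float('inf')
            total += d * d
        return total

    pop = [rand_expr() for _ in range(200)]
    for _ in range(30):
        pop.sort(key=sse)
        pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(180)]

    best = min(pop, key=sse)
    print('best SSE:', round(sse(best), 3), 'expression:', best)
    ```

    A full GP would add crossover and bloat control; the mutation-only loop above is only meant to show how an equation, rather than a fitted coefficient vector, is the unit of search.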

  16. Estimating urban ground-level PM10 using MODIS 3km AOD product and meteorological parameters from WRF model

    NASA Astrophysics Data System (ADS)

    Ghotbi, Saba; Sotoudeheian, Saeed; Arhami, Mohammad

    2016-09-01

    Satellite remote-sensing AOD products from MODIS, along with appropriate meteorological parameters, were used to develop statistical models and estimate ground-level PM10. Most previous studies obtained meteorological data from synoptic weather stations, with rather sparse spatial distribution, and used them along with the 10 km AOD product to develop statistical models applicable to PM variations at regional scale (resolution of ≥10 km). In the current study, meteorological parameters were simulated at 3 km resolution using the WRF model and used along with the rather new 3 km AOD product (launched in 2014). The resulting PM statistical models were assessed for a polluted and highly variable urban area, Tehran, Iran. Despite the critical particulate pollution problem, very few PM studies have been conducted in this area. Direct PM-AOD associations were rather poor, owing to factors such as variations in particle optical properties, in addition to the bright-background problem for satellite retrievals, since the study area lies in the semi-arid Middle East. The statistical approach of linear mixed effects (LME) was used, and three types of statistical models were examined: a single-variable LME model (using AOD as the independent variable) and multivariable LME models using meteorological data from the two sources, the WRF model and the synoptic stations. Meteorological simulations were performed using a multiscale approach with physics options appropriate for the studied region, and the results agreed rather well with recordings of the synoptic stations. The single-variable LME model was able to explain about 61%-73% of daily PM10 variations, reflecting a rather acceptable performance. Model performance improved when multivariable LME models incorporated meteorological data as auxiliary variables, particularly the fine-resolution WRF outputs (R2 = 0.73-0.81). In addition, PM estimates were mapped at rather fine resolution for the studied city, and the resulting concentration maps were consistent with PM recordings at the existing stations.
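
    A minimal sketch of a single-variable LME fit of the kind described, using statsmodels on synthetic data with a hypothetical day-level random intercept (the usual structure in AOD-PM studies):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    days = np.repeat(np.arange(60), 5)            # 60 days x 5 grid cells
    aod = rng.gamma(2.0, 0.2, size=days.size)
    day_effect = rng.normal(0, 8, size=60)[days]  # day-to-day random intercept
    pm10 = 40 + 90 * aod + day_effect + rng.normal(0, 5, size=days.size)
    df = pd.DataFrame({"pm10": pm10, "aod": aod, "day": days})

    # Fixed AOD slope plus a random intercept per day, as in single-variable
    # LME AOD-PM models; WRF covariates would enter as extra fixed effects.
    model = smf.mixedlm("pm10 ~ aod", df, groups=df["day"]).fit()
    print(model.summary())
    ```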

  17. Strengthen forensic entomology in court--the need for data exploration and the validation of a generalised additive mixed model.

    PubMed

    Baqué, Michèle; Amendt, Jens

    2013-01-01

    Developmental data of juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets do not take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). According to the Daubert standard and the need for improvements in forensic science, new statistical tools such as smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model which describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provides regression parameters that reflect the data adequately. We focus explicitly both on the exploration of the data, to assure their quality and to show the importance of checking them carefully before conducting statistical tests, and on the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets using, for the first time, generalised additive mixed models.

  18. Searching for hidden unexpected features in the SnIa data

    NASA Astrophysics Data System (ADS)

    Shafieloo, A.; Perivolaropoulos, L.

    2010-06-01

    It is known that the χ2 statistic and likelihood analysis may not be sensitive to all features of the data. Although the χ2 statistic measures the overall goodness of fit of a model confronted with a data set, some specific features of the data can remain undetected. For instance, it has been pointed out that there is an unexpected brightness of the SnIa data at z > 1 in the Union compilation. We quantify this statement by constructing a new statistic, called the Binned Normalized Difference (BND) statistic, which is applicable directly to the Type Ia Supernova (SnIa) distance moduli. This statistic is designed to pick up systematic brightness trends of SnIa data points with respect to a best-fit cosmological model at high redshifts. According to this statistic, the consistency between the Gold06, Union08 and Constitution09 data and the spatially flat ΛCDM model is 2.2%, 5.3% and 12.6%, respectively, when the real data are compared with many Monte Carlo realizations of simulated datasets. The corresponding realization probability in the context of a (w0,w1) = (-1.4,2) model is more than 30% for all mentioned datasets, indicating a much better consistency for this model with respect to the BND statistic. The unexpected high-z brightness of SnIa can be interpreted as a trend towards more deceleration at high z than expected in the context of ΛCDM, as a statistical fluctuation, or as a systematic effect, perhaps due to mild SnIa evolution at high z.
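
    The precise BND construction is defined in the paper; the following is only a guessed reconstruction of its spirit, binning distance-modulus residuals in redshift and normalizing by the error on each bin mean (all data below are synthetic stand-ins):

    ```python
    import numpy as np

    def bnd(z, mu_obs, mu_err, mu_model, nbins=4):
        """Binned normalized difference of distance-modulus residuals
        (an illustrative reconstruction, not the paper's exact definition)."""
        edges = np.quantile(z, np.linspace(0.0, 1.0, nbins + 1))
        idx = np.clip(np.digitize(z, edges[1:-1]), 0, nbins - 1)
        out = []
        for b in range(nbins):
            r = (mu_obs - mu_model)[idx == b]        # residuals in bin b
            s = mu_err[idx == b]
            out.append(r.mean() / np.sqrt(np.mean(s**2) / r.size))
        return np.array(out)

    rng = np.random.default_rng(3)
    z = rng.uniform(0.05, 1.5, 300)
    mu_model = 5.0 * np.log10((1.0 + z) * z) + 43.0  # stand-in for mu(z)
    mu_err = np.full_like(z, 0.15)
    mu_obs = mu_model + rng.normal(0.0, mu_err)
    print(bnd(z, mu_obs, mu_err, mu_model))          # ~N(0,1) per bin here
    ```

    The observed bin values would then be compared against many Monte Carlo realizations drawn from the best-fit model to quote a consistency probability, as the paper does.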

  19. Comparing the Fit of Item Response Theory and Factor Analysis Models

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto; Cai, Li; Hernandez, Adolfo

    2011-01-01

    Linear factor analysis (FA) models can be reliably tested using test statistics based on residual covariances. We show that the same statistics can be used to reliably test the fit of item response theory (IRT) models for ordinal data (under some conditions). Hence, the fit of an FA model and of an IRT model to the same data set can now be…

  20. A stochastic fractional dynamics model of space-time variability of rain

    NASA Astrophysics Data System (ADS)

    Kundu, Prasun K.; Travis, James E.

    2013-09-01

    Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and on the Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to fit the second moment statistics of radar data at the smaller spatiotemporal scales. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well at these scales without any further adjustment.

  1. The relationship between the C-statistic of a risk-adjustment model and the accuracy of hospital report cards: a Monte Carlo Study.

    PubMed

    Austin, Peter C; Reeves, Mathew J

    2013-03-01

    Hospital report cards, in which outcomes following the provision of medical or surgical care are compared across health care providers, are being published with increasing frequency. Essential to the production of these reports is risk adjustment, which allows investigators to account for differences in the distribution of patient illness severity across different hospitals. Logistic regression models are frequently used for risk adjustment in hospital report cards. Many applied researchers use the c-statistic (equivalent to the area under the receiver operating characteristic curve) of the logistic regression model as a measure of the credibility and accuracy of hospital report cards. The objective of this study was to determine the relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards. Monte Carlo simulations were used to examine this issue. We examined the influence of 3 factors on the accuracy of hospital report cards: the c-statistic of the logistic regression model used for risk adjustment, the number of hospitals, and the number of patients treated at each hospital. The parameters used to generate the simulated datasets came from analyses of patients hospitalized with a diagnosis of acute myocardial infarction in Ontario, Canada. The c-statistic of the risk-adjustment model had, at most, a very modest impact on the accuracy of hospital report cards, whereas the number of patients treated at each hospital had a much greater impact. The c-statistic of a risk-adjustment model should not be used to assess the accuracy of a hospital report card.
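
    For reference, a minimal sketch of computing the c-statistic of a logistic risk-adjustment model on simulated patient data (sklearn's ROC AUC is numerically the c-statistic); the coefficients and data are hypothetical:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(4)
    X = rng.normal(size=(5000, 3))                  # patient risk factors
    logit = -2.0 + X @ np.array([0.8, 0.5, 0.3])    # assumed true risk model
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # observed outcome

    risk_model = LogisticRegression().fit(X, y)
    c_stat = roc_auc_score(y, risk_model.predict_proba(X)[:, 1])
    print(f"c-statistic: {c_stat:.3f}")
    ```

    The paper's point is precisely that this single number, however easy to compute, says little about whether hospital rankings built on the model are accurate.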

  2. The relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards: A Monte Carlo study

    PubMed Central

    Austin, Peter C.; Reeves, Mathew J.

    2015-01-01

    Background: Hospital report cards, in which outcomes following the provision of medical or surgical care are compared across health care providers, are being published with increasing frequency. Essential to the production of these reports is risk-adjustment, which allows investigators to account for differences in the distribution of patient illness severity across different hospitals. Logistic regression models are frequently used for risk-adjustment in hospital report cards. Many applied researchers use the c-statistic (equivalent to the area under the receiver operating characteristic curve) of the logistic regression model as a measure of the credibility and accuracy of hospital report cards. Objectives: To determine the relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards. Research Design: Monte Carlo simulations were used to examine this issue. We examined the influence of three factors on the accuracy of hospital report cards: the c-statistic of the logistic regression model used for risk-adjustment, the number of hospitals, and the number of patients treated at each hospital. The parameters used to generate the simulated datasets came from analyses of patients hospitalized with a diagnosis of acute myocardial infarction in Ontario, Canada. Results: The c-statistic of the risk-adjustment model had, at most, a very modest impact on the accuracy of hospital report cards, whereas the number of patients treated at each hospital had a much greater impact. Conclusions: The c-statistic of a risk-adjustment model should not be used to assess the accuracy of a hospital report card. PMID:23295579

  3. A simple rain attenuation model for earth-space radio links operating at 10-35 GHz

    NASA Technical Reports Server (NTRS)

    Stutzman, W. L.; Yon, K. M.

    1986-01-01

    The simple attenuation model has been improved from an earlier version and now includes the effect of wave polarization. The model is for the prediction of rain attenuation statistics on earth-space communication links operating in the 10-35 GHz band. Simple calculations produce attenuation values as a function of average rain rate. These, together with rain rate statistics (either measured or predicted), can be used to predict annual rain attenuation statistics. In this paper, model predictions are compared to measured data from a database of 62 experiments performed in the U.S., Europe, and Japan. Comparisons are also made to predictions from other models.
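
    A hedged sketch of the generic power-law form on which such models build; the coefficients below are illustrative values of the order of ITU-R rain-attenuation coefficients near 12 GHz, not the paper's model constants:

    ```python
    def rain_attenuation_db(rain_rate_mm_h, path_km, k=0.0188, alpha=1.217):
        """Generic power-law rain attenuation: specific attenuation
        gamma = k * R**alpha (dB/km), integrated over an effective path.
        k and alpha are illustrative, frequency-dependent placeholders."""
        gamma = k * rain_rate_mm_h ** alpha       # dB/km
        return gamma * path_km

    # Attenuation for a rain rate of 42 mm/h over a 5 km effective path:
    print(f"{rain_attenuation_db(42.0, 5.0):.1f} dB")
    ```

    Feeding the measured or predicted rain-rate exceedance statistics through such a relation is what turns rain statistics into annual attenuation statistics.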

  4. New approach in the quantum statistical parton distribution

    NASA Astrophysics Data System (ADS)

    Sohaily, Sozha; Vaziri (Khamedi), Mohammad

    2017-12-01

    An attempt to find simple parton distribution functions (PDFs) based on a quantum statistical approach is presented. The PDFs described by the statistical model have very interesting physical properties which help in understanding the structure of partons. The longitudinal part of the distribution functions is obtained by applying the maximum entropy principle. An interesting and simple approach to determining the statistical variables exactly, without fitting or fixing parameters, is surveyed. Analytic expressions for the x-dependent PDFs are obtained in the whole x region [0, 1], and the computed distributions are consistent with the experimental observations. This agreement with experimental data gives robust confirmation of the simple statistical model presented.
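
    As a loudly hypothetical toy (the paper's functional form and parameter values are not reproduced here), a Fermi-Dirac-shaped x-distribution of the general kind quantum-statistical parton models employ:

    ```python
    import numpy as np

    def fd_parton(x, X0=0.45, xbar=0.1, A=1.0, b=0.4):
        """Toy Fermi-Dirac-shaped quark distribution x*q(x): a 'thermodynamic
        potential' X0 and 'temperature' xbar shape the x dependence.
        All shapes and numbers here are illustrative only."""
        return A * x**b / (np.exp((x - X0) / xbar) + 1.0)

    x = np.linspace(1e-3, 1.0, 5)
    print(fd_parton(x))     # falls off steeply for x above X0, as expected
    ```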

  5. How Statisticians Speak Risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Redus, K.S.

    2007-07-01

    The foundation of statistics deals with (a) how to measure and collect data and (b) how to identify models using estimates of statistical parameters derived from the data. Risk is a term used by the statistical community and by those who employ statistics to express the results of a statistically based study. Statistical risk is represented as a probability that, for example, a statistical model is sufficient to describe a data set; but risk is also interpreted as a measure of the worth of one alternative when compared to another. The common thread of any risk-based problem is the combination of (a) the chance that an event will occur with (b) the value of the event. This paper presents an introduction to, and some examples of, statistical risk-based decision making from a quantitative, visual, and linguistic perspective. This should help in understanding which areas of radioactive waste management can be suitably expressed using statistical risk and vice versa.

  6. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    PubMed

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses, such as linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most updated information and newly added models.

  7. Computationally efficient statistical differential equation modeling using homogenization

    USGS Publications Warehouse

    Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.

    2013-01-01

    Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.
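
    A sketch of the change-of-support idea on a 1-D caricature, assuming (as in related homogenization work for ecological diffusion) that the coarse-scale coefficient is a harmonic average of the fine-scale one; see the paper for the formal treatment:

    ```python
    import numpy as np

    def homogenize(mu_fine, factor):
        """Coarsen a fine-scale diffusion coefficient by harmonic averaging.
        For ecological diffusion, u_t = (mu(x) * u)_xx, homogenization
        suggests replacing mu over each coarse cell by its harmonic mean,
        making the PDE integration feasible on the coarse grid."""
        blocks = mu_fine.reshape(-1, factor)
        return 1.0 / np.mean(1.0 / blocks, axis=1)

    # Patchy habitat: lognormal fine-scale motility on 400 cells.
    mu_fine = np.exp(np.random.default_rng(5).normal(0, 1, 400))
    mu_coarse = homogenize(mu_fine, factor=20)   # 400 cells -> 20 cells
    print(mu_coarse.round(2))
    ```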

  8. Comparisons of non-Gaussian statistical models in DNA methylation analysis.

    PubMed

    Ma, Zhanyu; Teschendorff, Andrew E; Yu, Hong; Taghia, Jalil; Guo, Jun

    2014-06-16

    As a key regulatory mechanism of gene expression, DNA methylation patterns are widely altered in many complex genetic diseases, including cancer. DNA methylation is naturally quantified by bounded support data; therefore, it is non-Gaussian distributed. In order to capture such properties, we introduce some non-Gaussian statistical models to perform dimension reduction on DNA methylation data. Afterwards, non-Gaussian statistical model-based unsupervised clustering strategies are applied to cluster the data. Comparisons and analysis of different dimension reduction strategies and unsupervised clustering methods are presented. Experimental results show that the non-Gaussian statistical model-based methods are superior to the conventional Gaussian distribution-based method. They are meaningful tools for DNA methylation analysis. Moreover, among several non-Gaussian methods, the one that captures the bounded nature of DNA methylation data reveals the best clustering performance.
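
    A minimal sketch of the bounded-support point: fitting a beta distribution, which respects the (0, 1) support of methylation beta-values, versus a Gaussian on synthetic bimodal data (the beta law stands in here for the paper's non-Gaussian models):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    beta_values = rng.beta(a=0.5, b=0.5, size=2000)   # bimodal, bounded in (0, 1)

    # A Gaussian fit ignores the bounded support; a beta fit respects it.
    a, b, loc, scale = stats.beta.fit(beta_values, floc=0, fscale=1)
    mu, sigma = stats.norm.fit(beta_values)
    print(f"beta: a={a:.2f}, b={b:.2f};  normal: mu={mu:.2f}, sigma={sigma:.2f}")
    ```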

  9. Comparisons of Non-Gaussian Statistical Models in DNA Methylation Analysis

    PubMed Central

    Ma, Zhanyu; Teschendorff, Andrew E.; Yu, Hong; Taghia, Jalil; Guo, Jun

    2014-01-01

    As a key regulatory mechanism of gene expression, DNA methylation patterns are widely altered in many complex genetic diseases, including cancer. DNA methylation is naturally quantified by bounded support data; therefore, it is non-Gaussian distributed. In order to capture such properties, we introduce some non-Gaussian statistical models to perform dimension reduction on DNA methylation data. Afterwards, non-Gaussian statistical model-based unsupervised clustering strategies are applied to cluster the data. Comparisons and analysis of different dimension reduction strategies and unsupervised clustering methods are presented. Experimental results show that the non-Gaussian statistical model-based methods are superior to the conventional Gaussian distribution-based method. They are meaningful tools for DNA methylation analysis. Moreover, among several non-Gaussian methods, the one that captures the bounded nature of DNA methylation data reveals the best clustering performance. PMID:24937687

  10. Structured statistical models of inductive reasoning.

    PubMed

    Kemp, Charles; Tenenbaum, Joshua B

    2009-01-01

    Everyday inductive inferences are often guided by rich background knowledge. Formal models of induction should aim to incorporate this knowledge and should explain how different kinds of knowledge lead to the distinctive patterns of reasoning found in different inductive contexts. This article presents a Bayesian framework that attempts to meet both goals and describes [corrected] 4 applications of the framework: a taxonomic model, a spatial model, a threshold model, and a causal model. Each model makes probabilistic inferences about the extensions of novel properties, but the priors for the 4 models are defined over different kinds of structures that capture different relationships between the categories in a domain. The framework therefore shows how statistical inference can operate over structured background knowledge, and the authors argue that this interaction between structure and statistics is critical for explaining the power and flexibility of human reasoning.

  11. Ultrasound image filtering using the multiplicative model

    NASA Astrophysics Data System (ADS)

    Navarrete, Hugo; Frery, Alejandro C.; Sanchez, Fermin; Anto, Joan

    2002-04-01

    Ultrasound images, as a special case of coherent images, are normally corrupted with multiplicative noise, i.e., speckle noise. Speckle noise reduction is a difficult task due to its multiplicative nature, but good statistical models of speckle formation are useful for designing adaptive speckle-reduction filters. In this article a new statistical model, emerging from the Multiplicative Model framework, is presented and compared to previous models (Rayleigh, Rice and K laws). It is shown that the proposed model gives the best performance when modeling the statistics of ultrasound images. Finally, the parameters of the model can be used to quantify the extent of speckle formation; this quantification is applied to adaptive speckle-reduction filter design. The effectiveness of the filter is demonstrated on typical in-vivo log-compressed B-scan images obtained with a clinical ultrasound system.
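
    A minimal numpy sketch of the multiplicative speckle model itself (not the paper's proposed distribution), with the coefficient of variation as a crude measure of speckle extent; the scene is hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    h, w = 64, 64
    reflectivity = np.ones((h, w))
    reflectivity[16:48, 16:48] = 4.0      # a bright square "lesion"

    # Multiplicative model: observed intensity = reflectivity * speckle,
    # with Rayleigh-distributed speckle normalized to unit mean.
    speckle = rng.rayleigh(scale=1.0, size=(h, w))
    speckle /= speckle.mean()
    observed = reflectivity * speckle

    # The coefficient of variation (std/mean) is a standard handle on the
    # extent of speckle formation used to drive adaptive filters.
    cv = observed.std() / observed.mean()
    print(f"coefficient of variation: {cv:.2f}")
    ```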

  12. A Survey of Statistical Models for Reverse Engineering Gene Regulatory Networks

    PubMed Central

    Huang, Yufei; Tienda-Luna, Isabel M.; Wang, Yufeng

    2009-01-01

    Statistical models for reverse engineering gene regulatory networks are surveyed in this article. To provide readers with a system-level view of the modeling issues in this research, a graphical modeling framework is proposed. This framework serves as the scaffolding on which the review of different models can be systematically assembled. Based on the framework, we review many existing models for many aspects of gene regulation; the pros and cons of each model are discussed. In addition, network inference algorithms are also surveyed under the graphical modeling framework by the categories of point solutions and probabilistic solutions and the connections and differences among the algorithms are provided. This survey has the potential to elucidate the development and future of reverse engineering GRNs and bring statistical signal processing closer to the core of this research. PMID:20046885

  13. A question of separation: disentangling tracer bias and gravitational non-linearity with counts-in-cells statistics

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Feix, M.; Codis, S.; Pichon, C.; Bernardeau, F.; L'Huillier, B.; Kim, J.; Hong, S. E.; Laigle, C.; Park, C.; Shin, J.; Pogosyan, D.

    2018-02-01

    Starting from a very accurate model for density-in-cells statistics of dark matter based on large deviation theory, a bias model for the tracer density in spheres is formulated. It adopts a mean bias relation based on a quadratic bias model to relate the log-densities of dark matter to those of mass-weighted dark haloes in real and redshift space. The validity of the parametrized bias model is established using a parametrization-independent extraction of the bias function. This average bias model is then combined with the dark matter PDF, neglecting any scatter around it: it nevertheless yields an excellent model for densities-in-cells statistics of mass tracers that is parametrized in terms of the underlying dark matter variance and three bias parameters. The procedure is validated on measurements of both the one- and two-point statistics of subhalo densities in the state-of-the-art Horizon Run 4 simulation showing excellent agreement for measured dark matter variance and bias parameters. Finally, it is demonstrated that this formalism allows for a joint estimation of the non-linear dark matter variance and the bias parameters using solely the statistics of subhaloes. Having verified that galaxy counts in hydrodynamical simulations sampled on a scale of 10 Mpc h-1 closely resemble those of subhaloes, this work provides important steps towards making theoretical predictions for density-in-cells statistics applicable to upcoming galaxy surveys like Euclid or WFIRST.
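
    A hedged sketch of what a quadratic log-bias relation of the kind described can look like; the symbols are generic, not the paper's notation:

    ```latex
    % rho_h and rho_m are tracer (halo) and matter densities in spheres;
    % b_0 absorbs the normalization, b_1 and b_2 are the linear and
    % quadratic bias parameters (illustrative form only).
    \log \rho_h \;=\; b_0 + b_1 \log \rho_m + b_2 \left(\log \rho_m\right)^{2}
    ```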

  14. Modeling Cross-Situational Word-Referent Learning: Prior Questions

    ERIC Educational Resources Information Center

    Yu, Chen; Smith, Linda B.

    2012-01-01

    Both adults and young children possess powerful statistical computation capabilities--they can infer the referent of a word from highly ambiguous contexts involving many words and many referents by aggregating cross-situational statistical information across contexts. This ability has been explained by models of hypothesis testing and by models of…

  15. Statistical models for the analysis and design of digital polymerase chain (dPCR) experiments

    USGS Publications Warehouse

    Dorazio, Robert; Hunter, Margaret

    2015-01-01

    Statistical methods for the analysis and design of experiments using digital PCR (dPCR) have received only limited attention and have been misused in many instances. To address this issue and to provide a more general approach to the analysis of dPCR data, we describe a class of statistical models for the analysis and design of experiments that require quantification of nucleic acids. These models are mathematically equivalent to generalized linear models of binomial responses that include a complementary log-log link function and an offset that depends on the dPCR partition volume. These models are both versatile and easy to fit using conventional statistical software. Covariates can be used to specify different sources of variation in nucleic acid concentration, and a model's parameters can be used to quantify the effects of these covariates. For purposes of illustration, we analyzed dPCR data from different types of experiments, including serial dilution, evaluation of copy number variation, and quantification of gene expression. We also showed how these models can be used to help design dPCR experiments, as in selection of sample sizes needed to achieve desired levels of precision in estimates of nucleic acid concentration or to detect differences in concentration among treatments with prescribed levels of statistical power.
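
    The cloglog connection is concrete: under Poisson-distributed template molecules, the probability that a partition is positive is p = 1 - exp(-λv), so cloglog(p) = log λ + log v, i.e. a binomial GLM with offset log(v). A self-contained estimator sketch with hypothetical counts:

    ```python
    import numpy as np

    def dpcr_concentration(n_positive, n_total, volume_nl):
        """Estimate copies per nanolitre from digital PCR counts.
        Inverts p = 1 - exp(-lambda * v); the standard error is a
        delta-method sketch, not the paper's full GLM machinery."""
        p_hat = n_positive / n_total
        lam = -np.log(1.0 - p_hat) / volume_nl       # copies per nl
        se_p = np.sqrt(p_hat * (1.0 - p_hat) / n_total)
        se_lam = se_p / ((1.0 - p_hat) * volume_nl)
        return lam, se_lam

    lam, se = dpcr_concentration(n_positive=6000, n_total=20000, volume_nl=0.85)
    print(f"{lam:.3f} +/- {se:.3f} copies/nl")
    ```

    Fitting the same quantity as a binomial GLM with a cloglog link and offset log(v) is what lets covariates (dilution, treatment, assay) enter the concentration model directly.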

  16. Dynamic modelling of n-of-1 data: powerful and flexible data analytics applied to individualised studies.

    PubMed

    Vieira, Rute; McDonald, Suzanne; Araújo-Soares, Vera; Sniehotta, Falko F; Henderson, Robin

    2017-09-01

    N-of-1 studies are based on repeated observations within an individual or unit over time and are acknowledged as an important research method for generating scientific evidence about the health or behaviour of an individual. Statistical analyses of n-of-1 data require accurate modelling of the outcome while accounting for its distribution, time-related trend and error structures (e.g., autocorrelation), as well as reporting readily usable contextualised effect sizes for decision-making. A number of statistical approaches have been documented, but no consensus exists on which method is most appropriate for which type of n-of-1 design. We discuss the statistical considerations for analysing n-of-1 studies and briefly review some currently used methodologies. We describe dynamic regression modelling as a flexible and powerful approach, adaptable to different types of outcomes and capable of dealing with the different challenges inherent in n-of-1 statistical modelling. Dynamic modelling borrows ideas from longitudinal and event-history methodologies, which explicitly incorporate the role of time and the influence of the past on the future. We also present an illustrative example of the use of dynamic regression in monitoring physical activity during the retirement transition. Dynamic modelling has the potential to expand researchers' access to robust and user-friendly statistical methods for individualised studies.
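
    A minimal sketch of the dynamic-regression idea: regress today's outcome on yesterday's outcome plus a covariate, so the past explicitly influences the future. The single-case data and effect sizes below are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    T = 120                                 # daily observations on one person
    x = rng.binomial(1, 0.5, T)             # e.g. intervention on/off
    y = np.zeros(T)
    for t in range(1, T):                   # outcome carries over day to day
        y[t] = 2.0 + 0.6 * y[t - 1] + 1.5 * x[t] + rng.normal(0, 1)

    # Dynamic regression: y_t ~ intercept + y_{t-1} + x_t, fitted by OLS.
    Z = np.column_stack([np.ones(T - 1), y[:-1], x[1:]])
    beta, *_ = np.linalg.lstsq(Z, y[1:], rcond=None)
    print("intercept, carry-over, treatment effect:", beta.round(2))
    ```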

  17. Comparison of Artificial Neural Networks and ARIMA statistical models in simulations of target wind time series

    NASA Astrophysics Data System (ADS)

    Kolokythas, Kostantinos; Vasileios, Salamalikis; Athanassios, Argiriou; Kazantzidis, Andreas

    2015-04-01

    Wind is the result of complex interactions among numerous mechanisms taking place at small or large scales, so better knowledge of its behaviour is essential in a variety of applications, especially in the field of power production from wind turbines. In the literature there is a considerable number of models, physical or statistical, dealing with the simulation and prediction of wind speed. Among others, Artificial Neural Networks (ANNs) are widely used for wind forecasting and, in the great majority of cases, outperform conventional statistical models. In this study, a number of ANNs with different architectures were created, applied to a dataset of wind time series, and compared to Auto Regressive Integrated Moving Average (ARIMA) statistical models. The data consist of mean hourly wind speeds from a wind farm in a hilly region of Greece and cover a period of one year (2013). The main goal is to evaluate the models' ability to simulate successfully the wind speed at a significant point (target). Goodness-of-fit statistics are computed for the comparison of the different methods. In general, the ANNs showed the best performance in estimating wind speed, prevailing over the ARIMA models.
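
    A minimal statsmodels sketch of the ARIMA side of such a comparison, fitted to a synthetic stand-in for mean hourly wind speed (the order and data are illustrative):

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(9)
    # Synthetic AR(1)-like hourly wind speed (m/s) around a 6 m/s mean.
    e = rng.normal(0, 0.8, 500)
    wind = np.empty(500)
    wind[0] = 6.0
    for t in range(1, 500):
        wind[t] = 6.0 + 0.7 * (wind[t - 1] - 6.0) + e[t]

    fit = ARIMA(wind, order=(2, 0, 1)).fit()   # AR(2), no differencing, MA(1)
    print(fit.summary().tables[1])
    print(fit.forecast(steps=24)[:4].round(2)) # next hours at the target
    ```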

  18. Bureau of Labor Statistics Employment Projections: Detailed Analysis of Selected Occupations and Industries. Report to the Honorable Berkley Bedell, United States House of Representatives.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC.

    To compile its projections of future employment levels, the Bureau of Labor Statistics (BLS) combines the following five interlinked models in a six-step process: a labor force model, an econometric model of the U.S. economy, an industry activity model, an industry labor demand model, and an occupational labor demand model. The BLS was asked to…

  19. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.

  20. Improving UWB-Based Localization in IoT Scenarios with Statistical Models of Distance Error.

    PubMed

    Monica, Stefania; Ferrari, Gianluigi

    2018-05-17

    Interest in the Internet of Things (IoT) is rapidly increasing, as the number of connected devices is exponentially growing. One of the application scenarios envisaged for IoT technologies involves indoor localization and context awareness. In this paper, we focus on a localization approach that relies on a particular type of communication technology, namely Ultra Wide Band (UWB). UWB technology is an attractive choice for indoor localization, owing to its high accuracy. Since localization algorithms typically rely on estimated inter-node distances, the goal of this paper is to evaluate the improvement brought by a simple (linear) statistical model of the distance error. On the basis of an extensive experimental measurement campaign, we propose a general analytical framework, based on a Least Square (LS) method, to derive a novel statistical model for the range estimation error between a pair of UWB nodes. The proposed statistical model is then applied to improve the performance of a few illustrative localization algorithms in various realistic scenarios. The obtained experimental results show that the use of the proposed statistical model improves the accuracy of the considered localization algorithms, reducing the localization error by up to 66%.
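
    A sketch of the LS idea on synthetic ranges: fit a linear model for the distance error during calibration, then subtract it from new measurements. The error model and numbers below are hypothetical, not the paper's campaign data:

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    true_d = rng.uniform(1.0, 20.0, 200)                       # metres
    measured = true_d + (0.05 * true_d + 0.10) + rng.normal(0, 0.05, 200)

    # LS fit of a linear range-error model e = a*d + b, then correction.
    a, b = np.polyfit(measured, measured - true_d, deg=1)
    corrected = measured - (a * measured + b)

    print(f"a={a:.3f}, b={b:.3f}")
    print(f"RMS error before: {np.sqrt(np.mean((measured - true_d)**2)):.3f} m, "
          f"after: {np.sqrt(np.mean((corrected - true_d)**2)):.3f} m")
    ```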
