NASA Astrophysics Data System (ADS)
Chernikova, Dina; Axell, Kåre; Avdic, Senada; Pázsit, Imre; Nordlund, Anders; Allard, Stefan
2015-05-01
Two versions of the neutron-gamma variance-to-mean (Feynman-alpha method or Feynman-Y function) formula, for either gamma detection only or total neutron-gamma detection, respectively, are derived and compared in this paper. The new formulas are of particular importance for detectors of gamma photons or detectors sensitive to both neutron and gamma radiation. If applied to a plastic or liquid scintillation detector, the total neutron-gamma detection Feynman-Y expression corresponds to a situation where no discrimination is made between neutrons and gamma photons. The gamma variance-to-mean formulas are useful when a gamma-only detector is used or when working with a combined neutron-gamma detector at high count rates. The theoretical derivation is based on the Chapman-Kolmogorov equation with the inclusion of general reactions and corresponding intensities for neutrons and gammas, but with prompt reactions only. A one-energy-group approximation is considered. The two theories are compared using reaction intensities obtained from MCNPX simulations with a simplified geometry of two scintillation detectors and a 252Cf source. In addition, the neutron, gamma, and total neutron-gamma variance-to-mean ratios are evaluated experimentally for a weak 252Cf neutron-gamma source, a 137Cs random gamma source, and a 22Na correlated gamma source. Because the focus is on the applicability of neutron-gamma variance-to-mean theories to both reactor and safeguards applications, the present study is limited to general analytical expressions for the Feynman-alpha formulas.
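The core estimator behind all Feynman-Y work is simple to state: sample the pulse train with non-overlapping gates of width T, histogram the counts, and form Y = variance/mean - 1. The following minimal Python sketch illustrates this on a synthetic pulse train; all rates, times, and the pair-correlation model are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pulse train: Poisson "singles" plus correlated pairs arriving
# close together in time (a crude stand-in for multiplets from a 252Cf source).
T_total = 100.0                                        # s
singles = rng.uniform(0.0, T_total, 50_000)
parents = rng.uniform(0.0, T_total, 10_000)
pairs = parents + rng.exponential(1e-4, parents.size)  # short correlation time
events = np.sort(np.concatenate([singles, parents, pairs]))

def feynman_y(times, gate, t_total):
    """Excess variance-to-mean ratio Y = VMR - 1 over non-overlapping gates."""
    n_gates = int(t_total // gate)
    counts, _ = np.histogram(times, bins=n_gates, range=(0.0, n_gates * gate))
    return counts.var(ddof=1) / counts.mean() - 1.0

for gate in (1e-4, 1e-3, 1e-2, 1e-1):
    print(f"gate = {gate:.0e} s  ->  Y = {feynman_y(events, gate, T_total):+.3f}")
# A purely Poisson train gives Y ~ 0; the correlated pairs push Y above zero
# once the gate is wide enough to contain both members of a pair.
```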
DOE Office of Scientific and Technical Information (OSTI.GOV)
Croft, Stephen; Santi, Peter A.; Henzlova, Daniela
The Feynman-Y statistic is a type of autocorrelation analysis. It is defined as the excess variance-to-mean ratio, Y = VMR - 1, of the number count distribution formed by sampling a pulse train using a series of non-overlapping gates. It is a measure of the degree of correlation present on the pulse train, with Y = 0 for Poisson data. In the context of neutron coincidence counting we show that the same information can be obtained from the accidentals histogram acquired using the multiplicity shift-register method, which is currently the common autocorrelation technique applied in nuclear safeguards. In the case of multiplicity shift-register analysis, however, overlapping gates, either triggered by the incoming pulse stream or by a periodic clock, are used. The overlap introduces additional covariance but does not alter the expectation values. In this paper we discuss, for a particular data set, the relative merit of the Feynman and shift-register methods in terms of both precision and dead time correction. Traditionally the Feynman approach is applied with a gate width that is long compared to the dieaway time, mainly so that the gate utilization factor can be taken as unity rather than being treated as a system parameter to be determined at characterization/calibration. But because the random trigger interval gate utilization factor is slow to saturate, this procedure requires a gate width many times the effective 1/e dieaway time, which limits the number of gates that can be fitted into a given assay duration. We show empirically that much shorter gates, similar in width to those used in traditional shift-register analysis, can be used. Because the correlated information present on the pulse train is extracted differently by the moments-based method of Feynman and the various shift-register approaches, dead time losses are manifested differently for the two approaches. The resulting estimates for the dead-time-corrected first and second order reduced factorial moments should nevertheless be independent of the method, and this allows the respective dead time formalisms to be checked. We discuss how to make dead time corrections in both the shift-register and Feynman approaches.
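For reference, the point-kinetics Y-versus-gate-width curve saturates only slowly, through the random-trigger gate utilization factor 1 - (1 - exp(-aT))/(aT). The sketch below fits that curve with the utilization factor treated explicitly, which is what permits short gates; the decay constant, asymptote, and noise level are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def feynman_curve(T, Y_inf, alpha):
    """Y(T) = Y_inf * [1 - (1 - exp(-alpha*T)) / (alpha*T)] (point kinetics)."""
    aT = alpha * T
    return Y_inf * (1.0 - (1.0 - np.exp(-aT)) / aT)

rng = np.random.default_rng(1)
alpha_true, Y_true = 2.0e4, 0.35             # 1/s, dimensionless (invented)
T = np.geomspace(1e-6, 1e-2, 25)             # gate widths, s
Y = feynman_curve(T, Y_true, alpha_true) * (1 + 0.02 * rng.standard_normal(T.size))

popt, pcov = curve_fit(feynman_curve, T, Y, p0=(0.3, 1e4))
print("fitted Y_inf = %.3f, alpha = %.3g 1/s" % tuple(popt))
# Treating the utilization factor as a fit parameter lets gates much shorter
# than many dieaway times be used, so more gates fit into a fixed assay time.
```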
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soltz, R. A.; Danagoulian, A.; Sheets, S.
Theoretical calculations indicate that the Feynman variance, Y2F, for the distribution of neutrons emitted from fissionable material exhibits a strong monotonic dependence on the multiplication, M, of a quantity of special nuclear material. In 2012 we performed a series of measurements at the Passport Inc. facility using a 9-MeV bremsstrahlung CW beam of photons incident on small quantities of uranium, with liquid scintillator detectors. For the set of objects studied we observed deviations from the expected monotonic dependence, and these deviations were later confirmed by MCNP simulations. In this report, we modify the theory to account for the contribution from the initial photofission and benchmark the new theory with a series of MCNP simulations on DU, LEU, and HEU objects spanning a wide range of masses and multiplication values.
NASA Astrophysics Data System (ADS)
Luna Acosta, German Aurelio
The masses of observed hadrons are fitted according to the kinematic predictions of Conformal Relativity. The hypothesis gives a remarkably good fit. The isospin SU(2) gauge invariant Lagrangian L(,(pi)NN)(x,(lamda)) is used in the calculation of d(sigma)/d(OMEGA) to 2nd-order Feynman graphs for simplified models of (pi)N(--->)(pi)N. The resulting infinite mass sums over the nucleon (Conformal) families are done via the Generalized-Sommerfeld-Watson Transform Theorem. Even though the models are too simple to be realistic, they indicate that if (DELTA)-internal lines were to be included, 2nd-order Feynman graphs may reproduce the experimental data qualitatively. The energy -dependence of the propagator and couplings in Conformal QFT is different from that of ordinary QFT. Suggestions for further work are made in the areas of ultra-violet divergences and OPEC calculations.
Neutron noise measurements at the Delphi subcritical assembly
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szieberth, M.; Klujber, G.; Kloosterman, J. L.
2012-07-01
The paper presents the results and evaluations of a comprehensive set of neutron noise measurements on the Delphi subcritical assembly of the Delft University of Technology. The measurements investigated the effect of different source distributions (inherent spontaneous fission and 252Cf) and of the detector positions (both radial and vertical). The measured data have been evaluated with the variance-to-mean ratio (VTMR, Feynman-α), autocorrelation (ACF, Rossi-α), and cross-correlation (CCF) methods. The values obtained for the prompt decay constant show a strong bias, which depends both on the detector position and on the source distribution. This is due to the presence of higher modes in the system. It has been observed that the fitted α value is higher when the detector is close to the boundary of the core or to the 252Cf point source. The higher alpha-modes have also been observed by fitting functions describing two alpha-modes. The successful set of measurements also provides a good basis for further theoretical investigations, including Monte Carlo simulation of the noise measurements and calculation of the alpha-modes in the Delphi subcritical assembly. (authors)
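A sketch of the two-alpha-mode fitting step mentioned above: when higher spatial modes contribute, a single-exponential fit to the measured decay is biased, while a two-exponential model can recover both decay constants. All values below are synthetic and merely illustrate the fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_mode(t, A, a, C):
    return A * np.exp(-a * t) + C

def two_mode(t, A1, a1, A2, a2, C):
    return A1 * np.exp(-a1 * t) + A2 * np.exp(-a2 * t) + C

rng = np.random.default_rng(2)
t = np.linspace(0.0, 2e-3, 200)                       # s
acf = two_mode(t, 1.0, 3.0e3, 0.4, 2.0e4, 0.05)       # fundamental + higher mode
acf += 0.01 * rng.standard_normal(t.size)

p1, _ = curve_fit(one_mode, t, acf, p0=(1.0, 5e3, 0.0))
p2, _ = curve_fit(two_mode, t, acf, p0=(1.0, 5e3, 0.5, 3e4, 0.0))
print("single-mode alpha : %.3g 1/s (biased by the higher mode)" % p1[1])
print("two-mode alphas   : %.3g, %.3g 1/s" % (p2[1], p2[3]))
```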
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butko, Yana A., E-mail: yanabutko@yandex.ru, E-mail: kinderknecht@math.uni-sb.de; Grothaus, Martin, E-mail: grothaus@mathematik.uni-kl.de; Smolyanov, Oleg G., E-mail: Smolyanov@yandex.ru
2016-02-15
Evolution semigroups generated by pseudo-differential operators are considered. These operators are obtained by different procedures of quantization (parameterized by a number τ) from a certain class of functions (or symbols) defined on the phase space. This class contains Hamilton functions of particles with variable mass in magnetic and potential fields, and more general symbols given by the Lévy-Khintchine formula. The considered semigroups are represented as limits of n-fold iterated integrals as n tends to infinity. Such representations are called Feynman formulae. Some of these representations are constructed with the help of another pseudo-differential operator, obtained by the same procedure of quantization; such representations are called Hamiltonian Feynman formulae. Some representations are based on integral operators with elementary kernels; these are called Lagrangian Feynman formulae. Lagrangian Feynman formulae provide approximations of evolution semigroups suitable for direct computations and numerical modeling of the corresponding dynamics. Hamiltonian Feynman formulae make it possible to represent the considered semigroups by means of Feynman path integrals. In the article, a family of phase space Feynman pseudomeasures corresponding to different procedures of quantization is introduced. The considered evolution semigroups are represented as phase space Feynman path integrals with respect to these Feynman pseudomeasures, i.e., different quantizations correspond to Feynman path integrals with the same integrand but with respect to different pseudomeasures. This answers Berezin's problem of distinguishing procedures of quantization in the language of Feynman path integrals. Moreover, the obtained Lagrangian Feynman formulae also make it possible to calculate these phase space Feynman path integrals and to connect them with functional integrals with respect to probability measures.
Dominance Genetic Variance for Traits Under Directional Selection in Drosophila serrata
Sztepanacz, Jacqueline L.; Blows, Mark W.
2015-01-01
In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait–fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. PMID:25783700
A Note on the Stochastic Nature of Feynman Quantum Paths
NASA Astrophysics Data System (ADS)
Botelho, Luiz C. L.
2016-11-01
We propose a Fresnel stochastic white noise framework to analyze the stochastic nature of the Feynman paths entering on the Feynman Path Integral expression for the Feynman Propagator of a particle quantum mechanically moving under a time-independent potential.
A Note on Feynman Path Integral for Electromagnetic External Fields
NASA Astrophysics Data System (ADS)
Botelho, Luiz C. L.
2017-08-01
We propose a Fresnel stochastic white noise framework to analyze the nature of the Feynman paths entering on the Feynman Path Integral expression for the Feynman Propagator of a particle quantum mechanically moving under an external electromagnetic time-independent potential.
A Celebration of Richard Feynman
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feynman, Richard
In honor of the 2005 World Year of Physics, on the birthday of Nobel Prize-winning physicist Richard Feynman, BSA sponsored this celebration. Actor Norman Parker reads from Feynman's bestselling books, and Ralph Leighton and Tom Rutishauser, who played bongos with Feynman, reminisce on what it was like to drum with him.
Richard Feynman and computation
NASA Astrophysics Data System (ADS)
Hey, Tony
1999-04-01
The enormous contribution of Richard Feynman to modern physics is well known, both to teaching through his famous Feynman Lectures on Physics, and to research with his Feynman diagram approach to quantum field theory and his path integral formulation of quantum mechanics. Less well known perhaps is his long-standing interest in the physics of computation, and this is the subject of this paper. Feynman lectured on computation at Caltech for most of the last decade of his life, first with John Hopfield and Carver Mead, and then with Gerry Sussman. The story of how these lectures came to be written up as the Feynman Lectures on Computation is briefly recounted. Feynman also discussed the fundamentals of computation with other legendary figures of the computer science and physics community such as Ed Fredkin, Rolf Landauer, Carver Mead, Marvin Minsky and John Wheeler. He was also instrumental in stimulating developments in both nanotechnology and quantum computing. During the 1980s Feynman revisited long-standing interests both in parallel computing with Geoffrey Fox and Danny Hillis, and in reversible computation and quantum computing with Charles Bennett, Norman Margolus, Tom Toffoli and Wojciech Zurek. This paper records Feynman's links with the computational community and includes some reminiscences about his involvement with the fundamentals of computing.
NASA Astrophysics Data System (ADS)
Shekhar, Karthik; Ruberman, Claire F.; Ferguson, Andrew L.; Barton, John P.; Kardar, Mehran; Chakraborty, Arup K.
2013-12-01
Mutational escape from vaccine-induced immune responses has thwarted the development of a successful vaccine against AIDS, whose causative agent is HIV, a highly mutable virus. Knowing the virus' fitness as a function of its proteomic sequence can enable rational design of potent vaccines, as this information can focus vaccine-induced immune responses to target mutational vulnerabilities of the virus. Spin models have been proposed as a means to infer intrinsic fitness landscapes of HIV proteins from patient-derived viral protein sequences. These sequences are the product of nonequilibrium viral evolution driven by patient-specific immune responses and are subject to phylogenetic constraints. How can such sequence data allow inference of intrinsic fitness landscapes? We combined computer simulations and variational theory à la Feynman to show that, in most circumstances, spin models inferred from patient-derived viral sequences reflect the correct rank order of the fitness of mutant viral strains. Our findings are relevant for diverse viruses.
Feynman diagrams and rooted maps
NASA Astrophysics Data System (ADS)
Prunotto, Andrea; Alberico, Wanda Maria; Czerski, Piotr
2018-04-01
The rooted maps theory, a branch of the theory of homology, is shown to be a powerful tool for investigating the topological properties of Feynman diagrams, related to the single particle propagator in the quantum many-body systems. The numerical correspondence between the number of this class of Feynman diagrams as a function of perturbative order and the number of rooted maps as a function of the number of edges is studied. A graphical procedure to associate Feynman diagrams and rooted maps is then stated. Finally, starting from rooted maps principles, an original definition of the genus of a Feynman diagram, which totally differs from the usual one, is given.
The contribution of the mitochondrial genome to sex-specific fitness variance.
Smith, Shane R T; Connallon, Tim
2017-05-01
Maternal inheritance of mitochondrial DNA (mtDNA) facilitates the evolutionary accumulation of mutations with sex-biased fitness effects. Whereas maternal inheritance closely aligns mtDNA evolution with natural selection in females, it makes it indifferent to evolutionary changes that exclusively benefit males. The constrained response of mtDNA to selection in males can lead to asymmetries in the relative contributions of mitochondrial genes to female versus male fitness variation. Here, we examine the impact of genetic drift and the distribution of fitness effects (DFE) among mutations, including the correlation of mutant fitness effects between the sexes, on mitochondrial genetic variation for fitness. We show how drift, genetic correlations, and skewness of the DFE determine the relative contributions of mitochondrial genes to male versus female fitness variance. When mutant fitness effects are weakly correlated between the sexes, and the effective population size is large, mitochondrial genes should contribute much more to male than to female fitness variance. In contrast, high fitness correlations and small population sizes tend to equalize the contributions of mitochondrial genes to female versus male variance. We discuss implications of these results for the evolution of mitochondrial genome diversity and the genetic architecture of female and male fitness. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
Richard P. Feynman Center for Innovation
A Model for Bilingual Physics Teaching: "The Feynman Lectures "
NASA Astrophysics Data System (ADS)
Metzner, Heqing W.
2006-12-01
Feynman was not only a great physicist but also a remarkably effective educator. The Feynman Lectures on Physics, originally published in 1963, were designed to be guides for teachers and for gifted students. More than 40 years later, his distinctive teaching ideas have special application to bilingual physics teaching in China because: (1) each individual lecture provides a self-contained unit for bilingual teaching; (2) the lectures broaden students' understanding of physics; and (3) Feynman's original thought in English is experienced through the bilingual teaching of physics.
Validating the Proton Prediction System (PPS)
2006-12-01
… hazards for astronauts on the missions to the Moon and Mars … events limited the useful PPS test cases to 78 of the 101 solar flares. Although they can be serious radiation hazards (Reames, 1999), PPS does not predict the E > 10 MeV peaks often seen during the … The proton fluence model (Feynman et al., 2002) fits observed SEP event fluences of E > 10 MeV: J(E > 10 MeV) = 347 x (Fx)^0.941, (3) where Fx is the GOES 1-8 Å X-ray flare half-power fluence in J cm^-2.
Revisiting Feynman's ratchet with thermoelectric transport theory.
Apertet, Y; Ouerdane, H; Goupil, C; Lecoeur, Ph
2014-07-01
We show how the formalism used for thermoelectric transport may be adapted to Smoluchowski's seminal thought experiment, also known as Feynman's ratchet and pawl system. Our analysis rests on the notion of useful flux, which for a thermoelectric system is the electrical current and for Feynman's ratchet is the effective jump frequency. Our approach yields original insight into the derivation and analysis of the system's properties. In particular we define an entropy per tooth in analogy with the entropy per carrier or Seebeck coefficient, and we derive the analog to Kelvin's second relation for Feynman's ratchet. Owing to the formal similarity between the heat fluxes balance equations for a thermoelectric generator (TEG) and those for Feynman's ratchet, we introduce a distribution parameter γ that quantifies the amount of heat that flows through the cold and hot sides of both heat engines. While it is well established that γ = 1/2 for a TEG, it is equal to 1 for Feynman's ratchet. This implies that no heat may be rejected in the cold reservoir for the latter case. Further, the analysis of the efficiency at maximum power shows that the so-called Feynman efficiency corresponds to that of an exoreversible engine, with γ = 1. Then, turning to the nonlinear regime, we generalize the approach based on the convection picture and introduce two different types of resistance to distinguish the dynamical behavior of the considered system from its ability to dissipate energy. We finally put forth the strong similarity between the original Feynman ratchet and a mesoscopic thermoelectric generator with a single conducting channel.
Feynman's and Ohta's Models of a Josephson Junction
ERIC Educational Resources Information Center
De Luca, R.
2012-01-01
The Josephson equations are derived by means of the weakly coupled two-level quantum system model given by Feynman. Adopting a simplified version of Ohta's model, starting from Feynman's model, the strict voltage-frequency Josephson relation is derived. The contribution of Ohta's approach to the comprehension of the additional term given by the…
Feynman propagators on static spacetimes
NASA Astrophysics Data System (ADS)
Dereziński, Jan; Siemssen, Daniel
We consider the Klein-Gordon equation on a static spacetime and minimally coupled to a static electromagnetic potential. We show that it is essentially self-adjoint on C_c^∞. We discuss various distinguished inverses and bisolutions of the Klein-Gordon operator, focusing on the so-called Feynman propagator. We show that the Feynman propagator can be considered the boundary value of the resolvent of the Klein-Gordon operator, in the spirit of the limiting absorption principle known from the theory of Schrödinger operators. We also show that the Feynman propagator is the limit of the inverse of the Wick rotated Klein-Gordon operator.
Statistically Self-Consistent and Accurate Errors for SuperDARN Data
NASA Astrophysics Data System (ADS)
Reimer, A. S.; Hussey, G. C.; McWilliams, K. A.
2018-01-01
The Super Dual Auroral Radar Network (SuperDARN)-fitted data products (e.g., spectral width and velocity) are produced using weighted least squares fitting. We present a new First-Principles Fitting Methodology (FPFM) that utilizes the first-principles approach of Reimer et al. (2016) to estimate the variance of the real and imaginary components of the mean autocorrelation function (ACF) lags. Fitting SuperDARN ACFs with the FPFM requires no ad hoc or empirical criteria. Currently, the weighting used to fit the ACF lags is derived from ad hoc estimates of the ACF lag variance. Additionally, an overcautious lag filtering criterion is used that sometimes discards data that contain useful information. In low signal-to-noise (SNR) and/or low signal-to-clutter regimes, the ad hoc variance and empirical criterion lead to underestimated errors for the fitted parameters because the relative contributions of signal, noise, and clutter to the ACF variance are not taken into consideration. The FPFM variance expressions include contributions of signal, noise, and clutter. The clutter is estimated using the maximal power-based self-clutter estimator derived by Reimer and Hussey (2015). The FPFM was successfully implemented and tested using synthetic ACFs generated with the radar data simulator of Ribeiro, Ponomarenko, et al. (2013). The fitted parameters and the fitted-parameter errors produced by the FPFM are compared with the current SuperDARN fitting software, FITACF. Using self-consistent statistical analysis, the FPFM produces reliable quantitative measures of the errors of the fitted parameters. For an SNR in excess of 3 dB and velocity error below 100 m/s, the FPFM produces 52% more data points than FITACF.
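As a rough illustration of the weighting idea (not the actual FPFM expressions), the sketch below fits an exponential ACF decay by weighted least squares, with each lag weighted by the inverse of an explicit variance budget containing signal, noise, and clutter terms; the made-up budget stands in for the first-principles variances of Reimer et al.

```python
import numpy as np

rng = np.random.default_rng(3)
lags = np.arange(1, 18) * 2.4e-3           # s
tau_c, W_true = 12e-3, 1.0                 # decay time, lag-0 power (invented)
sig = W_true * np.exp(-lags / tau_c)

# Per-lag variance budget: signal self-noise + noise floor + "clutter" term.
sigma2 = (0.05 * sig) ** 2 + 0.02**2 + 0.03**2
acf = sig + np.sqrt(sigma2) * rng.standard_normal(lags.size)

# Linearize ln R = ln W - lag/tau_c and solve the 2-parameter WLS problem.
keep = acf > 0
y = np.log(acf[keep])
A = np.column_stack([np.ones(keep.sum()), -lags[keep]])
w = acf[keep] ** 2 / sigma2[keep]          # var(ln R) ~ sigma^2 / R^2
AtWA = A.T @ (w[:, None] * A)
lnW, inv_tau = np.linalg.solve(AtWA, A.T @ (w * y))
cov = np.linalg.inv(AtWA)                  # parameter covariance -> fitted errors
err_ms = 1e3 * np.sqrt(cov[1, 1]) / inv_tau**2
print("tau_c = %.1f ms +/- %.1f ms" % (1e3 / inv_tau, err_ms))
```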
Is coverage a factor in non-Gaussianity of IMF parameters?
NASA Technical Reports Server (NTRS)
Ahluwalia, H. S.; Fikani, M. M.
1995-01-01
Recently, Feynman and Ruzmaikin (1994) showed that IMF parameters for the 1973 to 1990 period are not log-normally distributed, as previously suggested by Burlaga and King (1979) for data obtained over a shorter period (1963-75). They studied the first four moments, namely the mean, variance, skewness, and kurtosis. For a Gaussian distribution, moments higher than the variance should vanish. In particular, Feynman and Ruzmaikin obtained very high values of kurtosis during some periods of their analysis. We note that the coverage for IMF parameters is very uneven for the period they analyzed, ranging from less than 40% to greater than 80%. So the question arises whether the amount of coverage is a factor in their analysis. We decided to test this for the Bz component of the IMF, since it is an effective geoactive parameter for short-term disturbances. Like them, we used 1-hour averaged data available on the Omnitape. We studied scatter plots of the annual mean values of Bz (nT) and of its kurtosis versus the percent coverage for the year, obtaining correlation coefficients of 0.48 and 0.42, respectively, for the 1973-90 period. The probability of a chance occurrence of these correlation coefficients for 18 pairs of points is less than 8%. As a rough measure of skewness, we determined the percent asymmetry between the areas of the histograms representing the distributions of the positive and negative values of Bz and studied its correlation with the coverage for the year. This analysis yields a correlation coefficient of 0.41. When we extended the analysis to the whole period for which IMF data are available (1963-93), the corresponding correlation coefficients are 0.59, 0.14, and 0.42. Our findings will be presented and discussed.
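The coverage test described here is easy to reproduce in outline: compute the yearly higher moments from whatever hours are available and correlate them with the coverage fraction. A toy sketch with synthetic heavy-tailed "hourly Bz" (the real analysis used the Omnitape data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
kurt, cover = [], []
for _ in range(18):                            # 18 "years", as in 1973-90
    frac = rng.uniform(0.4, 0.9)               # fraction of hours with data
    bz = rng.standard_t(df=5, size=int(frac * 8760))   # heavy-tailed Bz (nT)
    kurt.append(stats.kurtosis(bz))            # excess kurtosis
    cover.append(frac)

r, p = stats.pearsonr(cover, kurt)
print(f"corr(kurtosis, coverage) = {r:+.2f}  (p = {p:.2f}, 18 points)")
# With only ~18 yearly points, |r| around 0.4-0.5 is needed before a chance
# correlation becomes unlikely at the few-percent level quoted above.
```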
Spin wave Feynman diagram vertex computation package
NASA Astrophysics Data System (ADS)
Price, Alexander; Javernick, Philip; Datta, Trinanjan
Spin wave theory is a well-established theoretical technique that can correctly predict the physical behavior of ordered magnetic states. However, computing the effects of an interacting spin wave theory incorporating magnons involves a laborious by-hand derivation of Feynman diagram vertices. The process is tedious and time consuming. Hence, to improve productivity and to have another means of checking the analytical calculations, we have devised a Feynman diagram vertex computation package. In this talk, we will describe our research group's effort to implement a Mathematica-based symbolic Feynman diagram vertex computation package that computes spin wave vertices. Utilizing the non-commutative algebra package NCAlgebra as an add-on to Mathematica, symbolic expressions for the Feynman diagram vertices of a Heisenberg quantum antiferromagnet are obtained. Our existing code reproduces the well-known expressions for a nearest-neighbor square lattice Heisenberg model. We also discuss the case of a triangular lattice Heisenberg model, where noncollinear terms contribute to the vertex interactions.
NASA Astrophysics Data System (ADS)
Volkov, Sergey
2017-11-01
This paper presents a new method of numerical computation of the mass-independent QED contributions to the electron anomalous magnetic moment which arise from Feynman graphs without closed electron loops. The method is based on a forestlike subtraction formula that removes all ultraviolet and infrared divergences in each Feynman graph before integration in Feynman-parametric space. The integration is performed by an importance-sampling Monte Carlo algorithm with a probability density function constructed for each Feynman graph individually. The method is fully automated at any order of the perturbation series. Results of applying the method to 2-loop, 3-loop, and 4-loop Feynman graphs, and to some individual 5-loop graphs, are presented, together with a comparison of the Monte Carlo convergence speed of this method against other approaches.
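The importance-sampling idea can be shown on a one-dimensional toy integrand with the kind of integrable end-point enhancement that Feynman-parametric integrands develop near the boundary of the simplex. This generic sketch is not the author's construction; the integrand and sampling density are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

f = lambda x: -np.log(x) / np.sqrt(x)     # integrable end-point singularity
# Exact value: int_0^1 -ln(x)/sqrt(x) dx = 4.

N = 200_000
x_u = rng.uniform(0.0, 1.0, N)            # plain uniform sampling
est_u = f(x_u)                            # (formally infinite variance here,
                                          #  so its error bar is unreliable)

u = rng.uniform(0.0, 1.0, N)              # importance sampling with density
x_i = u * u                               # p(x) = 1/(2 sqrt(x)):  x = u^2
est_i = f(x_i) * 2.0 * np.sqrt(x_i)       # weight f(x)/p(x) = -2 ln(x)

for name, e in (("uniform", est_u), ("importance", est_i)):
    print(f"{name:10s}: {e.mean():.4f} +/- {e.std(ddof=1) / np.sqrt(N):.4f}")
```

Matching the sampling density to the integrand's peak is what tames the variance; the same principle, applied graph by graph, underlies the density construction described in the abstract.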
Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.
DeCarlo, Lawrence T
2003-02-01
The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research areas.
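For readers without SPSS, the same unequal-variance normal model can be fitted directly by maximum likelihood in a few lines. The sketch below, with invented rating counts, estimates the sensitivity d, the signal distribution's standard deviation, and the ordered criteria; it is a generic stand-in, not the PLUM procedure itself.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Invented rating counts, 6 categories (1 = "sure noise" ... 6 = "sure signal").
noise  = np.array([174, 172, 104, 92, 41, 8], dtype=float)
signal = np.array([46, 57, 66, 101, 154, 173], dtype=float)

def unpack(params):
    d, log_sigma, c1 = params[0], params[1], params[2]
    c = np.concatenate([[c1], c1 + np.cumsum(np.exp(params[3:]))])  # ordered criteria
    return d, np.exp(log_sigma), c

def nll(params):
    d, sigma, c = unpack(params)
    edges = np.concatenate([[-np.inf], c, [np.inf]])
    p_noise = np.diff(norm.cdf(edges))                       # noise ~ N(0, 1)
    p_signal = np.diff(norm.cdf(edges, loc=d, scale=sigma))  # signal ~ N(d, sigma^2)
    return -(noise @ np.log(p_noise) + signal @ np.log(p_signal))

x0 = np.concatenate([[1.0, 0.0, -1.0], np.zeros(4)])         # 5 criteria in total
res = minimize(nll, x0, method="Nelder-Mead", options={"maxiter": 20000})
d, sigma, c = unpack(res.x)
print(f"d = {d:.3f}, signal SD = {sigma:.3f}  (SD != 1 signals unequal variance)")
```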
NASA Astrophysics Data System (ADS)
Preskill, John
2008-12-01
In the popular imagination, the iconic American theoretical physicist is Richard Feynman, in all his safe-cracking, bongo-thumping, woman-chasing glory. I suspect that many physicists, if asked to name a living colleague who best captures the spirit of Feynman, would give the same answer as me: Leonard Susskind. As far as I know, Susskind does not crack safes, thump bongos, or chase women, yet he shares Feynman's brash cockiness (which in Susskind's case is leavened by occasional redeeming flashes of self-deprecation) and Feynman's gift for spinning fascinating anecdotes. If you are having a group of physicists over for dinner and want to be sure to have a good time, invite Susskind.
NASA Technical Reports Server (NTRS)
Steiner, E.
1973-01-01
The use of the electrostatic Hellmann-Feynman theorem for the calculation of the leading term in the 1/R expansion of the force of interaction between two well-separated hydrogen atoms is discussed. Previous work has suggested that whereas this term is determined wholly by the first-order wavefunction when calculated by perturbation theory, the use of the Hellmann-Feynman theorem apparently requires the wavefunction through second order. It is shown how the two results may be reconciled and that the Hellmann-Feynman theorem may be reformulated in such a way that only the first-order wavefunction is required.
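The theorem itself, dE/dλ = ⟨ψ|∂H/∂λ|ψ⟩, is easy to verify numerically. The sketch below checks it for a discretized 1D harmonic oscillator H = p²/2 + λx²/2 (ħ = m = 1); this is a generic illustration of the theorem, not the hydrogen-interaction problem treated in the paper.

```python
import numpy as np

def oscillator(lmbda, n=1200, L=12.0):
    """Ground state of H = p^2/2 + lmbda*x^2/2 on a finite-difference grid."""
    x = np.linspace(-L, L, n)
    dx = x[1] - x[0]
    # Kinetic term via the standard three-point finite-difference Laplacian.
    main = np.full(n, 1.0 / dx**2)
    off = np.full(n - 1, -0.5 / dx**2)
    H = np.diag(main + 0.5 * lmbda * x**2) + np.diag(off, 1) + np.diag(off, -1)
    E, V = np.linalg.eigh(H)
    psi = V[:, 0] / np.sqrt(dx)            # normalized ground state
    return x, dx, E[0], psi

lmbda, h = 1.0, 1e-5
x, dx, E0, psi = oscillator(lmbda)
dE_num = (oscillator(lmbda + h)[2] - oscillator(lmbda - h)[2]) / (2 * h)
dH_exp = np.sum(psi**2 * 0.5 * x**2) * dx  # <dH/dlambda> = <x^2>/2
print(f"dE/dlambda (finite difference) = {dE_num:.6f}")
print(f"<psi| dH/dlambda |psi>         = {dH_exp:.6f}")  # both ~ 0.25 at lambda = 1
```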
Counting the number of Feynman graphs in QCD
NASA Astrophysics Data System (ADS)
Kaneko, T.
2018-05-01
Information about the number of Feynman graphs for a given physical process in a given field theory is especially useful for confirming the result of a Feynman graph generator used in an automatic system of perturbative calculations. A method of counting the number of Feynman graphs, weighted by their symmetry factors, was established based on zero-dimensional field theory and has been used in scalar theories and QED. In this article the method is generalized to more complicated models by direct calculation of the generating functions on a computer algebra system. The method is applied to QCD with and without counter terms, where many higher orders are calculated automatically.
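The zero-dimensional device is easy to demonstrate in a computer algebra system. Here is a hedged sympy sketch for the simpler φ⁴ toy model (not the QCD generating functions of the paper): in zero dimensions the path integral collapses to ordinary Gaussian moments, ⟨φ^(2k)⟩ = (2k-1)!!, and the gⁿ coefficients of Z and log Z count all, respectively connected, vacuum diagrams weighted by their symmetry factors.

```python
from sympy import factorial, factorial2, log, series, symbols

g = symbols('g')
N = 5

# Z(g) = sum_n g^n <(phi^4/4!)^n> / n!  with <phi^(4n)> = (4n - 1)!!
Z = sum(g**n * factorial2(4*n - 1) / (factorial(n) * 24**n) for n in range(N))

W = series(log(Z), g, 0, N).removeO()   # the logarithm picks out connected graphs

for n in range(1, N):
    print(f"g^{n}: total symmetry-factor weight of connected vacuum graphs =",
          W.coeff(g, n))
# g^1 gives 1/8 (the figure-eight); g^2 gives 1/12 = 1/16 + 1/48; etc.
```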
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volkov, S. A., E-mail: volkoff-sergey@mail.ru
2016-06-15
A new subtractive procedure for canceling ultraviolet and infrared divergences in Feynman integrals, described here, is developed for calculating QED corrections to the electron anomalous magnetic moment. The procedure, formulated as a forest expression with linear operators applied to the Feynman amplitudes of UV-divergent subgraphs, makes it possible to represent the contribution of each Feynman graph containing only electron and photon propagators as a convergent integral over Feynman parameters. The application of the developed method to the numerical calculation of two- and three-loop contributions is described.
Thinking in Pictures: John Wheeler, Richard Feynman and the Diagrammatic Approach to Problem Solving
NASA Astrophysics Data System (ADS)
Halpern, Paul
While classical mechanics readily lends itself to sketches, many fields of modern physics, particularly quantum mechanics, quantum field theory, and general relativity, are notoriously hard to envision. Nevertheless, John Wheeler and Richard Feynman, who obtained his PhD under Wheeler, each insisted that diagrams were the most effective way to tackle modern physics questions as well. Beginning with Wheeler and Feynman's work together at Princeton, I'll show how the two influenced each other and encouraged each other's diagrammatic methods. I'll explore the influence on Feynman of not just Wheeler, but also of his first wife Arline, an aspiring artist. I'll describe how Feynman diagrams, introduced in the late 1940s, while first seen as `heretical' in the face of Bohr's complementarity, became standard, essential methods. I'll detail Wheeler's encouragement of his colleague Martin Kruskal's use of special diagrams to elucidate the properties of black holes. Finally, I'll show how each physicist supported art later in life: Wheeler helping to arrange the Putnam Collection of 20th century sculpture at Princeton and Feynman, in a kind of `second career,' becoming an artist himself.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Jae Gil, E-mail: jgchoi@dankook.ac.kr; Chang, Seung Jun, E-mail: sejchang@dankook.ac.kr
In this paper we derive a Cameron-Storvick theorem for the analytic Feynman integral of functionals on the product abstract Wiener space B^2. We then apply our result to obtain an evaluation formula for the analytic Feynman integral of unbounded functionals on B^2. We also present meaningful examples involving functionals which arise naturally in quantum mechanics.
Let’s have a coffee with the Standard Model of particle physics!
NASA Astrophysics Data System (ADS)
Woithe, Julia; Wiener, Gerfried J.; Van der Veken, Frederik F.
2017-05-01
The Standard Model of particle physics is one of the most successful theories in physics and describes the fundamental interactions between elementary particles. It is encoded in a compact description, the so-called ‘Lagrangian’, which even fits on t-shirts and coffee mugs. This mathematical formulation, however, is complex and only rarely makes it into the physics classroom. Therefore, to support high school teachers in their challenging endeavour of introducing particle physics in the classroom, we provide a qualitative explanation of the terms of the Lagrangian and discuss their interpretation based on associated Feynman diagrams.
On the Path Integral in Non-Commutative (nc) Qft
NASA Astrophysics Data System (ADS)
Dehne, Christoph
2008-09-01
As is generally known, different quantization schemes applied to field theory on NC spacetime lead to Feynman rules with different physical properties, if time does not commute with space. In particular, the Feynman rules that are derived from the path integral corresponding to the T*-product (the so-called naïve Feynman rules) violate the causal time ordering property. Within the Hamiltonian approach to quantum field theory, we show that we can (formally) modify the time ordering encoded in the above path integral. The resulting Feynman rules are identical to those obtained in the canonical approach via the Gell-Mann-Low formula (with T-ordering). They preserve thus unitarity and causal time ordering.
Correlates of physical fitness and activity in Taiwanese children.
Chen, J-L; Unnithan, V; Kennedy, C; Yeh, C-H
2008-03-01
This cross-sectional study examined factors related to children's physical fitness and activity levels in Taiwan. A total of 331 Taiwanese children, aged 7 and 8, and their mothers participated in the study. Children performed physical fitness tests, recorded their physical activities during two weekdays and completed self-esteem questionnaires. Research assistants measured the children's body mass and stature. Mothers completed demographic, parenting style and physical activity questionnaires. Attending urban school, lower body mass index (BMI), older age and better muscular endurance contributed to the variance in better aerobic capacity, and attending rural school and better aerobic capacity contributed to the variance in better muscular endurance in boys. Attending urban school, lower BMI and better athletic competence contributed to the variance in better aerobic capacity, and younger age, rural school and higher household income contributed to the variance in better flexibility in girls. Despite the limitations of the study, with many countries and regions, including Taiwan, now emphasizing the importance of improving physical fitness and activity in children, an intervention that is gender-, geographically, and developmentally appropriate can improve the likelihood of successful physical fitness and activity programmes.
NASA Astrophysics Data System (ADS)
Fan, Hong-yi; Xu, Xue-xiang
2009-06-01
By virtue of the generalized Hellmann-Feynman theorem [H. Y. Fan and B. Z. Chen, Phys. Lett. A 203, 95 (1995)], we derive the mean energy of some interacting bosonic systems for some Hamiltonian models without proceeding with diagonalizing the Hamiltonians. Our work extends the field of applications of the Hellmann-Feynman theorem and may enrich the theory of quantum statistics.
The ε-form of the differential equations for Feynman integrals in the elliptic case
NASA Astrophysics Data System (ADS)
Adams, Luise; Weinzierl, Stefan
2018-06-01
Feynman integrals are easily solved if their system of differential equations is in ε-form. In this letter we show by the explicit example of the kite integral family that an ε-form can even be achieved, if the Feynman integrals do not evaluate to multiple polylogarithms. The ε-form is obtained by a (non-algebraic) change of basis for the master integrals.
Simplifying Differential Equations for Multiscale Feynman Integrals beyond Multiple Polylogarithms.
Adams, Luise; Chaubey, Ekta; Weinzierl, Stefan
2017-04-07
In this Letter we exploit factorization properties of Picard-Fuchs operators to decouple differential equations for multiscale Feynman integrals. The algorithm reduces the differential equations to blocks of the size of the order of the irreducible factors of the Picard-Fuchs operator. As a side product, our method can be used to easily convert the differential equations for Feynman integrals which evaluate to multiple polylogarithms to an ϵ form.
NASA Technical Reports Server (NTRS)
Alston, D. W.
1981-01-01
The objective of this research was to design a statistical model to perform an error analysis of curve fits of wind tunnel test data using analysis-of-variance and regression techniques. Four related subproblems were defined, and by solving each of these a solution to the general research problem was obtained. The capabilities of the resulting statistical model are considered. The least-squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased in order to remove the quadratic effect in the residuals. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.
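A sketch of the order-selection logic just described, on synthetic "wind tunnel" data with invented coefficients: fit polynomials of increasing order by least squares and use an F test on the drop in residual sum of squares to decide when the remaining trend (e.g., the quadratic effect) has been captured.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
alpha = np.linspace(-4, 12, 40)                     # e.g. angle of attack, deg
cl = 0.08 * alpha + 0.004 * alpha**2 + 0.03 * rng.standard_normal(alpha.size)

def rss(order):
    """Residual sum of squares of a least-squares polynomial fit."""
    resid = cl - np.polyval(np.polyfit(alpha, cl, order), alpha)
    return float(resid @ resid)

n = alpha.size
for k in range(1, 4):
    rss0, rss1 = rss(k), rss(k + 1)
    F = (rss0 - rss1) / (rss1 / (n - (k + 2)))      # one extra parameter
    p = stats.f.sf(F, 1, n - (k + 2))
    print(f"order {k} -> {k+1}: F = {F:8.2f}, p = {p:.3g}")
# The quadratic term is strongly significant here; higher orders are not.
```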
Merecz, Dorota; Andysz, Aleksandra
2014-01-01
The aim of the presented research was to explore the links between complementary and supplementary dimensions of Person-Organization fit (P-O fit), organizational identification (OI) and negative (WHI(-)) versus positive (WHI(+)) work-home interactions. It was assumed that both complementary and supplementary P-O fit and OI were positively related to WHI(+) and negatively to WHI(-). The study was conducted on a large sample of Polish blue- and white-collar workers. The subjects were interviewed by means of questionnaires measuring the supplementary and complementary dimensions of P-O fit, OI, and WHI. General work ability and demographic variables were also controlled, and ANOVA, pairwise comparisons, and regression analyses were performed. P-O fit and OI differentiated the subjects in terms of WHI. For women, supplementary fit was a significant predictor of WHI(-) and explained 12% of its variance; for men, complementary fit, together with the number of working days per week and the level of education, explained 22% of the variance. Supplementary fit and OI explained 16% of WHI(+) variance in women; OI, tenure at the main place of employment and the level of education explained 8% of WHI(+) variance in men. The results show that the effects of P-O fit and OI are not limited to the work environment: they permeate the boundary between work and home and influence private life, with a good level of P-O fit and strong OI facilitating positive spillover between work and home. Gender differences in the significance and predictive value of P-O fit and OI for WHI were also found. An innovative aspect of the work is the inclusion of P-O fit and OI among the significant predictors of work-home interaction. The results give employers a rationale for including the improvement of P-O fit and employees' organizational identification in work-life balance programs.
Sztepanacz, Jacqueline L; Rundle, Howard D
2012-10-01
Directional selection is prevalent in nature, yet phenotypes tend to remain relatively constant, suggesting a limit to trait evolution. However, the genetic basis of this limit is unresolved. Given widespread pleiotropy, opposing selection on a trait may arise from the effects of the underlying alleles on other traits under selection, generating net stabilizing selection on trait genetic variance. These pleiotropic costs of trait exaggeration may arise through any number of other traits, making them hard to detect in phenotypic analyses. Stabilizing selection can be inferred, however, if genetic variance is greater among low- compared to high-fitness individuals. We extend a recently suggested approach to provide a direct test of a difference in genetic variance for a suite of cuticular hydrocarbons (CHCs) in Drosophila serrata. Despite strong directional sexual selection on these traits, genetic variance differed between high- and low-fitness individuals and was greater among the low-fitness males for seven of eight CHCs, significantly more than expected by chance. Univariate tests of a difference in genetic variance were nonsignificant but likely have low power. Our results suggest that further CHC exaggeration in D. serrata in response to sexual selection is limited by pleiotropic costs mediated through other traits. © 2012 The Author(s). Evolution © 2012 The Society for the Study of Evolution.
NASA Astrophysics Data System (ADS)
Castro, E.
2018-02-01
From the perturbative expansion of the exact Green function, an exact counting formula is derived to determine the number of different types of connected Feynman diagrams. This formula coincides with the Arquès-Walsh sequence formula in rooted map theory, supporting the topological connection between Feynman diagrams and rooted maps. A classificatory summing-terms approach is used, in connection with discrete mathematical theory.
Thorlund, Kristian; Thabane, Lehana; Mills, Edward J
2013-01-11
Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variance for all involved treatment comparisons is equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons, and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. In this paper we describe four novel approaches to modeling heterogeneity variance: two novel model structures, and two approaches for the use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice.
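One ingredient above, a moderately informative variance prior constructed from the pairwise data, starts from an empirical between-trial variance estimate. A minimal sketch of the standard DerSimonian-Laird estimator on invented trial data:

```python
import numpy as np

y = np.array([0.42, 0.10, 0.65, 0.31, -0.05, 0.58])   # trial effect estimates
v = np.array([0.09, 0.12, 0.25, 0.07, 0.18, 0.11])    # within-trial variances

w = 1.0 / v
mu_fixed = np.sum(w * y) / np.sum(w)                  # fixed-effect pooled mean
Q = np.sum(w * (y - mu_fixed) ** 2)                   # Cochran's Q
k = y.size
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
print(f"Q = {Q:.2f}, tau^2 (DerSimonian-Laird) = {tau2:.4f}")
# tau^2 and its uncertainty can then seed, e.g., a log-normal prior on the
# between-trial standard deviation in the Bayesian MTC model.
```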
Predicting the evolution of sex on complex fitness landscapes.
Misevic, Dusan; Kouyos, Roger D; Bonhoeffer, Sebastian
2009-09-01
Most population genetic theories on the evolution of sex or recombination are based on fairly restrictive assumptions about the nature of the underlying fitness landscapes. Here we use computer simulations to study the evolution of sex on fitness landscapes with different degrees of complexity and epistasis. We evaluate predictors of the evolution of sex, which are derived from the conditions established in the population genetic literature for the evolution of sex on simpler fitness landscapes. These predictors are based on quantities such as the variance of Hamming distance, mean fitness, additive genetic variance, and epistasis. We show that for complex fitness landscapes all the predictors generally perform poorly. Interestingly, while the simplest predictor, Delta Var(HD), also suffers from a lack of accuracy, it turns out to be the most robust across different types of fitness landscapes. Delta Var(HD) is based on the change in Hamming distance variance induced by recombination and thus does not require individual fitness measurements. The presence of loci that are not under selection can, however, severely diminish predictor accuracy. Our study thus highlights the difficulty of establishing reliable criteria for the evolution of sex on complex fitness landscapes and illustrates the challenge for both theoretical and experimental research on the origin and maintenance of sexual reproduction.
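A minimal sketch of the Delta Var(HD) predictor itself, on toy genotypes with free recombination: it needs only genotype data, comparing the variance of pairwise Hamming distances before and after one round of recombination. The population model below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
# 200 toy genomes, 50 biallelic loci with random allele frequencies.
freqs = rng.uniform(0.1, 0.9, 50)
pop = (rng.random((200, 50)) < freqs).astype(np.int8)

def hd_variance(genomes):
    """Variance of Hamming distances over all unordered pairs of genomes."""
    i, j = np.triu_indices(len(genomes), k=1)
    return (genomes[i] != genomes[j]).sum(axis=1).var()

def recombine(genomes):
    """One round of random pairing with free recombination (per-locus mixing)."""
    partners = genomes[rng.permutation(len(genomes))]
    mask = rng.random(genomes.shape) < 0.5
    return np.where(mask, genomes, partners)

delta = hd_variance(recombine(pop)) - hd_variance(pop)
print(f"Delta Var(HD) = {delta:+.2f}")
# An increase signals negative linkage disequilibrium among loci, the
# configuration in which recombination can be favored; with the random
# genotypes used here the change should hover near zero.
```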
A Staged Reading of the Play: Moving Bodies
NASA Astrophysics Data System (ADS)
Schwartz, Brian
Moving Bodies is about Nobel Prize-winning physicist Richard Feynman as he explores nature, science, sex, anti-Semitism, and the world around him. This epic, comic journey portrays Feynman as an iconoclastic young man, a physicist with the Manhattan Project, and an investigator confronting the mystery of the Challenger disaster. The atomic bomb is central to the play, but it is also very much about human loves and losses. We learn about Feynman's eccentricities: his bongo playing, his penchant for picking locks, and most notably his appreciation for women. Through playwright Arthur Giron's eyes, we see how Feynman became one of the most important scientists of our time. Giron is the co-playwright of the 2015 Broadway musical Amazing Grace. The staged reading is performed by the Southern Rep Theatre (http://www.southernrep.com/). The play director and actors, as well as a historian-scientist who knew Feynman, will be available for a talk-back discussion after the reading. Produced by Brian Schwartz, CUNY, and Gregory Mack, APS. Sponsored by: The Forum on the History of Physics, The Forum on Outreach and Engaging the Public and The Forum on Physics and Society.
Feynman rules for a whole Abelian model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chauca, J.; Doria, R.; Soares, W.
2012-09-24
Feynman rules for an abelian extension of gauge theories are discussed and explicitly derived. Vertices with three and four abelian gauge bosons are obtained. A discussion of a possible structure for the photon is presented.
Ivy, T M
2007-03-01
Genetic benefits can enhance the fitness of polyandrous females through the high intrinsic genetic quality of females' mates or through the interaction between female and male genes. I used a full diallel cross, a quantitative genetics design that involves all possible crosses among a set of genetically homogeneous lines, to determine the mechanism through which polyandrous female decorated crickets (Gryllodes sigillatus) obtain genetic benefits. I measured several traits related to fitness and partitioned the phenotypic variance into components representing the contribution of additive genetic variance ('good genes'), nonadditive genetic variance (genetic compatibility), as well as maternal and paternal effects. The results reveal a significant variance attributable to both nonadditive and additive sources in the measured traits, and their influence depended on which trait was considered. The lack of congruence in sources of phenotypic variance among these fitness-related traits suggests that the evolution and maintenance of polyandry are unlikely to have resulted from one selective influence, but rather are the result of the collective effects of a number of factors.
Particles, Feynman Diagrams and All That
ERIC Educational Resources Information Center
Daniel, Michael
2006-01-01
Quantum fields are introduced in order to give students an accurate qualitative understanding of the origin of Feynman diagrams as representations of particle interactions. Elementary diagrams are combined to produce diagrams representing the main features of the Standard Model.
Richard P. Feynman and the Feynman Diagrams
Documents available in full text on the Web: A Theorem and Its Application to Finite Tampers; Fermi-Thomas Theory (DOE Technical Report, April 28, 1947); Mathematical Formulation of the Quantum Theory …
Probing finite coarse-grained virtual Feynman histories with sequential weak values
NASA Astrophysics Data System (ADS)
Georgiev, Danko; Cohen, Eliahu
2018-05-01
Feynman's sum-over-histories formulation of quantum mechanics has been considered a useful calculational tool in which virtual Feynman histories entering into a coherent quantum superposition cannot be individually measured. Here we show that sequential weak values, inferred by consecutive weak measurements of projectors, allow direct experimental probing of individual virtual Feynman histories, thereby revealing the exact nature of quantum interference of coherently superposed histories. Because the total sum of sequential weak values of multitime projection operators for a complete set of orthogonal quantum histories is unity, complete sets of weak values could be interpreted in agreement with the standard quantum mechanical picture. We also elucidate the relationship between sequential weak values of quantum histories with different coarse graining in time and establish the incompatibility of weak values for nonorthogonal quantum histories in history Hilbert space. Bridging theory and experiment, the presented results may enhance our understanding of both weak values and quantum histories.
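For readers outside the weak-measurement literature, the central object can be stated compactly. The display below uses generic notation (|ψ⟩ the pre-selected state, ⟨φ| the post-selected state, Π projectors onto the measured alternatives), not symbols taken from the paper:

$$ (\Pi_{a_n},\ldots,\Pi_{a_1})_{w} = \frac{\langle\phi|\,\Pi_{a_n}\cdots\Pi_{a_1}\,|\psi\rangle}{\langle\phi|\psi\rangle}, \qquad \sum_{a_1,\ldots,a_n} (\Pi_{a_n},\ldots,\Pi_{a_1})_{w} = 1. $$

The unit sum follows because each complete orthogonal set of projectors resolves the identity, which is the normalization property the abstract invokes when interpreting complete sets of weak values.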
Feynman-Kac formula for stochastic hybrid systems.
Bressloff, Paul C
2017-01-01
We derive a Feynman-Kac formula for functionals of a stochastic hybrid system evolving according to a piecewise deterministic Markov process. We first derive a stochastic Liouville equation for the moment generator of the stochastic functional, given a particular realization of the underlying discrete Markov process; the latter generates transitions between different dynamical equations for the continuous process. We then analyze the stochastic Liouville equation using methods recently developed for diffusion processes in randomly switching environments. In particular, we obtain dynamical equations for the moment generating function, averaged with respect to realizations of the discrete Markov process. The resulting Feynman-Kac formula takes the form of a differential Chapman-Kolmogorov equation. We illustrate the theory by calculating the occupation time for a one-dimensional velocity jump process on the infinite or semi-infinite real line. Finally, we present an alternative derivation of the Feynman-Kac formula based on a recent path-integral formulation of stochastic hybrid systems.
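For orientation, the classical Feynman-Kac formula for a pure diffusion with generator L and potential V reads (a standard statement, included for comparison rather than quoted from the paper):

$$ u(x,t) = \mathbb{E}_{x}\!\left[\exp\!\left(-\int_{0}^{t} V(X_{s})\,ds\right) f(X_{t})\right], \qquad \partial_{t} u = Lu - Vu. $$

The result described above generalizes this picture to a piecewise deterministic Markov process, which is why the averaged equation takes the form of a differential Chapman-Kolmogorov equation rather than a purely diffusive one.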
Clocks in Feynman's computer and Kitaev's local Hamiltonian: Bias, gaps, idling, and pulse tuning
NASA Astrophysics Data System (ADS)
Caha, Libor; Landau, Zeph; Nagaj, Daniel
2018-06-01
We present a collection of results about the clock in Feynman's computer construction and Kitaev's local Hamiltonian problem. First, by analyzing the spectra of quantum walks on a line with varying end-point terms, we find a better lower bound on the gap of the Feynman Hamiltonian, which translates into a less strict promise gap requirement for the quantum-Merlin-Arthur-complete local Hamiltonian problem. We also translate this result into the language of adiabatic quantum computation. Second, introducing an idling clock construction with a large state space but fast Cesaro mixing, we provide a way for achieving an arbitrarily high success probability of computation with Feynman's computer with only a logarithmic increase in the number of clock qubits. Finally, we tune and thus improve the costs (locality and gap scaling) of implementing a (pulse) clock with a single excitation.
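The clock construction under analysis is, up to sign and normalization conventions, the standard Feynman-Kitaev propagation Hamiltonian for a circuit U_T ⋯ U_1 (a textbook form, shown here for orientation rather than quoted from the paper):

$$ H_{\mathrm{prop}} = \frac{1}{2}\sum_{t=1}^{T}\left( |t\rangle\langle t|\otimes\mathbb{1} + |t{-}1\rangle\langle t{-}1|\otimes\mathbb{1} - |t\rangle\langle t{-}1|\otimes U_{t} - |t{-}1\rangle\langle t|\otimes U_{t}^{\dagger} \right), $$

whose ground space is spanned by the history states Σ_t |t⟩ ⊗ U_t ⋯ U_1 |ψ⟩. The spectral gap of this operator is the quantity whose lower bound the paper improves, and the end-point terms correspond to the boundary terms of the associated quantum walk on a line.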
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perkins, R. J., E-mail: rperkins@pppl.gov; Bellan, P. M.
Action integrals are often used to average a system over fast oscillations and obtain reduced dynamics. It is not surprising, then, that action integrals play a central role in the Hellmann-Feynman theorem of classical mechanics, which furnishes the values of certain quantities averaged over one period of rapid oscillation. This paper revisits the classical Hellmann-Feynman theorem, rederiving it in connection to an analogous theorem involving the time-averaged evolution of canonical coordinates. We then apply a modified version of the Hellmann-Feynman theorem to obtain a new result: the magnetic flux enclosed by one period of gyro-motion of a charged particle in a non-uniform magnetic field. These results further demonstrate the utility of the action integral in obtaining orbit-averaged quantities and the usefulness of this formalism in characterizing charged particle motion.
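For orientation, the quantum theorem and its classical counterpart can be placed side by side (standard forms, with J the action held fixed and τ the orbit period; these are not expressions quoted from the paper):

$$ \frac{dE_{n}}{d\lambda} = \left\langle \psi_{n}\left|\frac{\partial \hat{H}}{\partial\lambda}\right|\psi_{n}\right\rangle, \qquad \left(\frac{\partial E}{\partial\lambda}\right)_{J} = \frac{1}{\tau}\int_{0}^{\tau}\frac{\partial H}{\partial\lambda}\,dt. $$

Differentiating the energy at fixed action thus yields the time average of ∂H/∂λ over one period of the fast oscillation, which is how orbit-averaged quantities such as the enclosed magnetic flux emerge from the formalism.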
Taylor, Richard J; Sanders, Dajo; Myers, Tony; Abt, Grant; Taylor, Celia A; Akubat, Ibrahim
2018-02-01
To identify the dose-response relationship between measures of training load (TL) and changes in aerobic fitness in academy rugby union players. Training data from 10 academy rugby union players were collected during a 6-wk in-season period. Participants completed a lactate-threshold test that was used to assess VO2max, velocity at VO2max, velocity at 2 mmol/L (lactate threshold), and velocity at 4 mmol/L (onset of blood lactate accumulation; vOBLA) as measures of aerobic fitness. Internal-TL measures calculated were Banister training impulse (bTRIMP), Edwards TRIMP, Lucia TRIMP, individualized TRIMP (iTRIMP), and session RPE (sRPE). External-TL measures calculated were total distance, PlayerLoad™, high-speed distance >15 km/h, very-high-speed distance >18 km/h, and individualized high-speed distance based on each player's vOBLA. A second-order (quadratic) regression analysis found that bTRIMP (R² = .78, P = .005) explained 78% of the variance and iTRIMP (R² = .55, P = .063) explained 55% of the variance in changes in VO2max. All other HR-based internal-TL measures and sRPE explained less than 40% of the variance in fitness changes. External-TL measures explained less than 42% of the variance in fitness changes. In rugby players, bTRIMP and iTRIMP display a curvilinear dose-response relationship with changes in maximal aerobic fitness.
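Of the internal-TL measures above, bTRIMP has the simplest closed form; the sketch below computes it for a steady-state session, assuming the commonly cited Banister weighting factors (0.64·e^{1.92x} for men, 0.86·e^{1.67x} for women). The function name and example values are hypothetical, not taken from the study:

```python
import math

def banister_trimp(duration_min, hr_ex, hr_rest, hr_max, male=True):
    """Banister training impulse (bTRIMP) for a steady-state session."""
    x = (hr_ex - hr_rest) / (hr_max - hr_rest)  # fractional heart-rate reserve
    y = 0.64 * math.exp(1.92 * x) if male else 0.86 * math.exp(1.67 * x)
    return duration_min * x * y

# Example: 60 min at 160 bpm, resting HR 50 bpm, maximal HR 195 bpm
print(round(banister_trimp(60, 160, 50, 195), 1))
```

For variable-intensity sessions the contribution is accumulated over time (e.g., minute by minute), and iTRIMP replaces the generic exponential weighting with one fitted to each player's own lactate profile.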
2013-01-01
Background Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variance for all involved treatment comparisons is equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. Methods In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. Results In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. Conclusions MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability, or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice. PMID:23311298
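For orientation, the contrast between the heterogeneity structures discussed above can be written schematically (generic MTC notation, not the paper's):

$$ \delta_{i} \sim N\!\left(d_{\,t_{i} b_{i}},\ \tau^{2}\right) \;\; \text{(common variance)} \qquad \text{vs.} \qquad \delta_{i} \sim N\!\left(d_{\,t_{i} b_{i}},\ \tau^{2}_{\,t_{i} b_{i}}\right) \;\; \text{(comparison-specific)}, $$

where δ_i is the trial-specific relative effect of treatment t_i versus baseline b_i. The approaches examined in the paper then constrain the comparison-specific variances: through consistency equations, through exchangeability (each τ² drawn from a common distribution), or through moderately informative priors constructed from the pairwise meta-analysis data.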
Technical and biological variance structure in mRNA-Seq data: life in the real world
2012-01-01
Background mRNA expression data from next-generation sequencing platforms are obtained in the form of counts per gene or exon. Counts have classically been assumed to follow a Poisson distribution, in which the variance is equal to the mean. The Negative Binomial distribution, which allows for over-dispersion, i.e., for the variance to be greater than the mean, is commonly used to model count data as well. Results In mRNA-Seq data from 25 subjects, we found technical variation to generally follow a Poisson distribution, as has been reported previously, while biological variability was over-dispersed relative to the Poisson model. The mean-variance relationship across all genes was quadratic, in keeping with a Negative Binomial (NB) distribution. Over-dispersed Poisson and NB distributional assumptions demonstrated marked improvements in goodness-of-fit (GOF) over the standard Poisson model assumptions, but with evidence of over-fitting in some genes. Modeling of experimental effects improved GOF for high-variance genes but increased the over-fitting problem. Conclusions These conclusions will guide development of analytical strategies for accurate modeling of variance structure in these data and for sample size determination, which in turn will aid in the identification of true biological signals that inform our understanding of biological systems. PMID:22769017
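The quadratic mean-variance relationship referred to above is Var = μ + αμ², with α the dispersion parameter. A small simulation (illustrative only, not the paper's data or code) makes the over-dispersion concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

mu, alpha = 100.0, 0.2            # mean and NB dispersion
n = 1.0 / alpha                   # NB "size" parameter
p = n / (n + mu)
counts = rng.negative_binomial(n, p, size=100_000)

print(counts.mean(), counts.var())   # ~100 and ~2100, far above Poisson's 100
print(mu + alpha * mu**2)            # theoretical variance: 2100
```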
A global solution to the Schrödinger equation: From Henstock to Feynman
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nathanson, Ekaterina S., E-mail: enathanson@ggc.edu; Jørgensen, Palle E. T., E-mail: palle-jorgensen@uiowa.edu
2015-09-15
One of the key elements of Feynman's formulation of non-relativistic quantum mechanics is the so-called Feynman path integral. It plays an important role in the theory, but it appears as a postulate based on intuition rather than as a well-defined object. All previous attempts to supply Feynman's theory with rigorous mathematical underpinnings, based on the physical requirements, have not been satisfactory. The difficulty comes from the need to define a measure on the infinite-dimensional space of paths and to create an integral that would possess all of the properties requested by Feynman. In the present paper, we consider a new approach to defining the Feynman path integral, based on the theory developed by Muldowney [A Modern Theory of Random Variation: With Applications in Stochastic Calculus, Financial Mathematics, and Feynman Integration (John Wiley & Sons, Inc., New Jersey, 2012)]. Muldowney uses the Henstock integration technique and deals with non-absolute integrability of the Fresnel integrals in order to obtain a representation of the Feynman path integral as a functional. This approach offers a mathematically rigorous definition supporting Feynman's intuitive derivations. But in his work, Muldowney gives only local-in-space-time solutions. A physical solution to the non-relativistic Schrödinger equation must be global, and it must be given in the form of a unitary one-parameter group in L²(ℝⁿ). The purpose of this paper is to show that a system of one-dimensional local Muldowney solutions may be extended to yield a global solution. Moreover, the global extension can be represented by a unitary one-parameter group acting in L²(ℝⁿ).
Moghaddar, N; van der Werf, J H J
2017-12-01
The objectives of this study were to estimate the additive and dominance variance components of several weight and ultrasound-scanned body composition traits in purebred and combined cross-bred sheep populations based on single nucleotide polymorphism (SNP) marker genotypes, and then to investigate the effect of fitting additive and dominance effects on the accuracy of genomic evaluation. Additive and dominance variance components were estimated in a mixed model equation based on "average information restricted maximum likelihood" using additive and dominance (co)variances between animals calculated from 48,599 SNP marker genotypes. Genomic prediction was based on genomic best linear unbiased prediction (GBLUP), and the accuracy of prediction was assessed based on a random 10-fold cross-validation. Across different weight and scanned body composition traits, dominance variance ranged from 0.0% to 7.3% of the phenotypic variance in the purebred population and from 7.1% to 19.2% in the combined cross-bred population. In the combined cross-bred population, the range of dominance variance decreased to 3.1% and 9.9% after accounting for heterosis effects. Accounting for dominance effects significantly improved the likelihood of the fitting model in the combined cross-bred population. This study showed substantial dominance genetic variance for weight and ultrasound-scanned body composition traits, particularly in the cross-bred population; however, the improvement in the accuracy of genomic breeding values was small and statistically not significant. Dominance variance estimates in the combined cross-bred population could be overestimated if heterosis is not fitted in the model. © 2017 Blackwell Verlag GmbH.
Sex-specific selection under environmental stress in seed beetles.
Martinossi-Allibert, I; Arnqvist, G; Berger, D
2017-01-01
Sexual selection can increase rates of adaptation by imposing strong selection in males, thereby allowing efficient purging of the mutation load on population fitness at a low demographic cost. Indeed, sexual selection tends to be male-biased throughout the animal kingdom, but little empirical work has explored the ecological sensitivity of this sex difference. In this study, we generated theoretical predictions of sex-specific strengths of selection, environmental sensitivities and genotype-by-environment interactions and tested them in seed beetles by manipulating either larval host plant or rearing temperature. Using fourteen isofemale lines, we measured sex-specific reductions in fitness components, genotype-by-environment interactions and the strength of selection (variance in fitness) in the juvenile and adult stage. As predicted, variance in fitness increased with stress, was consistently greater in males than females for adult reproductive success (implying strong sexual selection), but was similar in the sexes in terms of juvenile survival across all levels of stress. Although genetic variance in fitness increased in magnitude under severe stress, heritability decreased and particularly so in males. Moreover, genotype-by-environment interactions for fitness were common but specific to the type of stress, sex and life stage, suggesting that new environments may change the relative alignment and strength of selection in males and females. Our study thus exemplifies how environmental stress can influence the relative forces of natural and sexual selection, as well as concomitant changes in genetic variance in fitness, which are predicted to have consequences for rates of adaptation in sexual populations. © 2016 European Society For Evolutionary Biology.
Huygens-Feynman-Fresnel principle as the basis of applied optics.
Gitin, Andrey V
2013-11-01
The main relationships of wave optics are derived from a combination of the Huygens-Fresnel principle and the Feynman integral over all paths. The stationary-phase approximation of the wave relations gives the corresponding relations from the point of view of geometrical optics.
Saviane, Chiara; Silver, R Angus
2006-06-15
Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
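As a rough sketch of the quantity at issue, the classical large-sample result Var(s²) = [μ₄ − σ⁴(n−3)/(n−1)]/n can be estimated by plugging sample central moments into the population formula. The function below is illustrative only; the paper's point is precisely that, for small samples and the non-normal amplitude distributions typical of central synapses, unbiased estimators built from h-statistics should replace such naive plug-ins:

```python
import numpy as np

def var_of_sample_variance(x):
    """Plug-in estimate of Var(s^2) from the data themselves."""
    x = np.asarray(x, dtype=float)
    n = x.size
    m = x.mean()
    mu2 = ((x - m) ** 2).mean()          # second central moment
    mu4 = ((x - m) ** 4).mean()          # fourth central moment
    return (mu4 - (n - 3) / (n - 1) * mu2 ** 2) / n

# weights for the variance-mean fit are then w = 1 / var_of_sample_variance(...)
```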
The signed permutation group on Feynman graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purkart, Julian, E-mail: purkart@physik.hu-berlin.de
2016-08-15
The Feynman rules assign to every graph an integral which can be written as a function of a scaling parameter L. Assuming L for the process under consideration is very small, so that contributions to the renormalization group are small, we can expand the integral and only consider the lowest orders in the scaling. The aim of this article is to determine specific combinations of graphs in a scalar quantum field theory that lead to a remarkable simplification of the first non-trivial term in the perturbation series. It will be seen that the result is independent of the renormalization scheme and the scattering angles. To achieve that goal we will utilize the parametric representation of scalar Feynman integrals as well as the Hopf algebraic structure of the Feynman graphs under consideration. Moreover, we will present a formula which reduces the effort of determining the first-order term in the perturbation series for the specific combination of graphs to a minimum.
Navigating around the algebraic jungle of QCD: efficient evaluation of loop helicity amplitudes
NASA Astrophysics Data System (ADS)
Lam, C. S.
1993-05-01
A method is developed whereby spinor helicity techniques can be used to simplify the calculation of loop amplitudes. This is achieved by using the Feynman-parameter representation, where the offending off-shell loop momenta do not appear. Other shortcuts motivated by the Bern-Kosower one-loop string calculations can be incorporated into the formalism. This includes color reorganization into Chan-Paton factors and the use of background Feynman gauge. This method is applicable to any Feynman diagram with any number of loops as long as the external masses can be ignored. In order to minimize the very considerable algebra encountered in non-abelian gauge theories, graphical methods are developed for most of the calculations. This enables the large number of terms encountered to be organized implicitly in the Feynman diagram without the necessity of writing down any of them algebraically. A one-loop four-gluon amplitude in a particular helicity configuration is computed explicitly to illustrate the method.
Eaton, Jeffrey W.; Bao, Le
2017-01-01
Objectives The aim of the study was to propose and demonstrate an approach to allow additional nonsampling uncertainty about HIV prevalence measured at antenatal clinic sentinel surveillance (ANC-SS) in model-based inferences about trends in HIV incidence and prevalence. Design Mathematical model fitted to surveillance data with Bayesian inference. Methods We introduce a variance inflation parameter σ²_infl that accounts for the uncertainty of nonsampling errors in ANC-SS prevalence. It is additive to the sampling error variance. Three approaches are tested for estimating σ²_infl using ANC-SS and household survey data from 40 subnational regions in nine countries in sub-Saharan Africa, as defined in UNAIDS 2016 estimates. Methods were compared using in-sample fit and out-of-sample prediction of ANC-SS data, fit to household survey prevalence data, and the computational implications. Results Introducing the additional variance parameter σ²_infl increased the error variance around ANC-SS prevalence observations by a median of 2.7 times (interquartile range 1.9-3.8). Using only sampling error in ANC-SS prevalence (σ²_infl = 0), coverage of 95% prediction intervals was 69% in out-of-sample prediction tests. This increased to 90% after introducing the additional variance parameter σ²_infl. The revised probabilistic model improved model fit to household survey prevalence and increased epidemic uncertainty intervals most during the early epidemic period before 2005. Estimating σ²_infl did not increase the computational cost of model fitting. Conclusions We recommend estimating nonsampling error in ANC-SS as an additional parameter in Bayesian inference using the Estimation and Projection Package model. This approach may prove useful for incorporating other data sources such as routine prevalence from prevention of mother-to-child transmission testing into future epidemic estimates. PMID:28296801
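Schematically, the revised error model adds the inflation term to the design-based sampling variance of each clinic observation (generic notation, not the paper's):

$$ \mathrm{Var}(W_{st}) = s_{st}^{2} + \sigma_{\mathrm{infl}}^{2}, $$

where W_st is the (transformed) prevalence observed at site s in year t, s²_st is its sampling variance, and σ²_infl is shared across observations and estimated jointly with the epidemic parameters.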
Quantum walks in brain microtubules--a biomolecular basis for quantum cognition?
Hameroff, Stuart
2014-01-01
Cognitive decisions are best described by quantum mathematics. Do quantum information devices operate in the brain? What would they look like? Fuss and Navarro () describe quantum lattice registers in which quantum superpositioned pathways interact (compute/integrate) as 'quantum walks' akin to Feynman's path integral in a lattice (e.g. the 'Feynman quantum chessboard'). Simultaneous alternate pathways eventually reduce (collapse), selecting one particular pathway in a cognitive decision, or choice. This paper describes how quantum walks in a Feynman chessboard are conceptually identical to 'topological qubits' in brain neuronal microtubules, as described in the Penrose-Hameroff 'Orch OR' theory of consciousness. Copyright © 2013 Cognitive Science Society, Inc.
Application of the Feynman-tree theorem together with BCFW recursion relations
NASA Astrophysics Data System (ADS)
Maniatis, M.
2018-03-01
Recently, it has been shown that on-shell scattering amplitudes can be constructed by the Feynman-tree theorem combined with the BCFW recursion relations. Since the BCFW relations are restricted to tree diagrams, the preceding application of the Feynman-tree theorem is essential. In this way, amplitudes can be constructed by on-shell and gauge-invariant tree amplitudes. Here, we want to apply this method to the electron-photon vertex correction. We present all the single, double, and triple phase-space tensor integrals explicitly and show that the sum of amplitudes coincides with the result of the conventional calculation of a virtual loop correction.
Exact Maximum-Entropy Estimation with Feynman Diagrams
NASA Astrophysics Data System (ADS)
Netser Zernik, Amitai; Schlank, Tomer M.; Tessler, Ran J.
2018-02-01
A longstanding open problem in statistics is finding an explicit expression for the probability measure which maximizes entropy with respect to given constraints. In this paper a solution to this problem is found, using perturbative Feynman calculus. The explicit expression is given as a sum over weighted trees.
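For context, the constrained maximum-entropy problem has the classical exponential-family solution (a standard result; the paper's contribution is an explicit, tree-summed expression for the multipliers):

$$ p^{\star}(x) \propto \exp\!\left(\sum_{i}\lambda_{i} f_{i}(x)\right), \qquad \mathbb{E}_{p^{\star}}[f_{i}(x)] = c_{i}, $$

where the Lagrange multipliers λ_i enforce the constraints; it is these multipliers that the weighted-tree expansion evaluates.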
What Physical Fitness Component Is Most Closely Associated With Adolescents' Blood Pressure?
Nunes, Heloyse E G; Alves, Carlos A S; Gonçalves, Eliane C A; Silva, Diego A S
2017-12-01
This study aimed to determine which of four selected physical fitness variables would be most associated with blood pressure changes (systolic and diastolic) in a large sample of adolescents. This was a descriptive and cross-sectional, epidemiological study of 1,117 adolescents aged 14-19 years from southern Brazil. Systolic and diastolic blood pressure were measured by a digital pressure device, and the selected physical fitness variables were body composition (body mass index), flexibility (sit-and-reach test), muscle strength/resistance (manual dynamometer), and aerobic fitness (Modified Canadian Aerobic Fitness Test). Simple and multiple linear regression analyses revealed that aerobic fitness and muscle strength/resistance best explained variations in systolic blood pressure for boys (17.3% and 7.4% of variance) and girls (7.4% of variance). Aerobic fitness, body composition, and muscle strength/resistance are all important indicators of blood pressure control, but aerobic fitness was a stronger predictor of systolic blood pressure in boys and of diastolic blood pressure in both sexes.
On the Least-Squares Fitting of Correlated Data: A Priori vs a Posteriori Weighting
NASA Astrophysics Data System (ADS)
Tellinghuisen, Joel
1996-10-01
One of the methods in common use for analyzing large data sets is a two-step procedure, in which subsets of the full data are first least-squares fitted to a preliminary set of parameters, and the latter are subsequently merged to yield the final parameters. The second step of this procedure is properly a correlated least-squares fit and requires the variance-covariance matrices from the first step to construct the weight matrix for the merge. There is, however, an ambiguity concerning the manner in which the first-step variance-covariance matrices are assessed, which leads to different statistical properties for the quantities determined in the merge. The issue is one of a priori vs a posteriori assessment of weights, which is an application of what was originally called internal vs external consistency by Birge [Phys. Rev. 40, 207-227 (1932)] and Deming ("Statistical Adjustment of Data." Dover, New York, 1964). In the present work the simplest case of a merge fit, that of an average as obtained from a global fit vs a two-step fit of partitioned data, is used to illustrate that only in the case of a priori weighting do the results have the usually expected and desired statistical properties: normal distributions for residuals, t distributions for parameters assessed a posteriori, and χ² distributions for variances.
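A minimal numerical illustration of the point, assuming a priori weights and hypothetical variable names: when the data sigma is fixed in advance, the variance-weighted merge of subset means reproduces the global-fit average exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0                               # a priori known data sigma
data = rng.normal(10.0, sigma, size=120)
subsets = np.split(data, [40, 90])        # partition into three unequal subsets

# Step 1: per-subset fits (here just means) with a priori variances
means = np.array([s.mean() for s in subsets])
varis = np.array([sigma**2 / s.size for s in subsets])

# Step 2: the merge reduces to a weighted average in this diagonal case
w = 1.0 / varis
merged = (w * means).sum() / w.sum()

print(merged, data.mean())   # identical: a priori weighting recovers the global fit
```

With a posteriori weights (each subset's variance re-estimated from its own residuals), the merged value and its dispersion acquire the altered statistical properties the paper analyzes.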
Sexually antagonistic genetic variance for fitness in an ancestral and a novel environment.
Delcourt, Matthieu; Blows, Mark W; Rundle, Howard D
2009-06-07
The intersex genetic correlation for fitness, a standardized measure of the degree to which male and female fitness covary genetically, has consequences for important evolutionary processes, but few estimates are available and none have explored how it changes with environment. Using a half-sibling breeding design, we estimated the genetic (co)variance matrix (G) for male and female fitness, and the resulting intersex correlation, in Drosophila serrata. Our estimates were performed in two environments: the laboratory yeast food to which the population was well adapted and a novel corn food. The major axis of genetic variation for fitness in the two environments, accounting for 51.3 per cent of the total genetic variation, was significant and revealed a strong signal of sexual antagonism, loading negatively in both environments on males but positively on females. Consequently, estimates of the intersex correlation were negative in both environments (-0.34 and -0.73, respectively), indicating that the majority of genetic variance segregating in this population has contrasting effects on male and female fitness. The possible strengthening of the negative correlation in this novel environment may be a consequence of no history of selection for amelioration of sexual conflict. Additional studies from a diverse range of novel environments will be needed to determine the generality of this finding.
Teaching Basic Quantum Mechanics in Secondary School Using Concepts of Feynman Path Integrals Method
ERIC Educational Resources Information Center
Fanaro, Maria de los Angeles; Otero, Maria Rita; Arlego, Marcelo
2012-01-01
This paper discusses the teaching of basic quantum mechanics in high school. Rather than following the usual formalism, our approach is based on Feynman's path integral method. Our presentation makes use of simulation software and avoids sophisticated mathematical formalism. (Contains 3 figures.)
Nanotechnology: From Feynman to Funding
ERIC Educational Resources Information Center
Drexler, K. Eric
2004-01-01
The revolutionary Feynman vision of a powerful and general nanotechnology, based on nanomachines that build with atom-by-atom control, promises great opportunities and, if abused, great dangers. This vision made nanotechnology a buzzword and launched the global nanotechnology race. Along the way, however, the meaning of the word has shifted. A…
Bouvet, J-M; Makouanzi, G; Cros, D; Vigneron, Ph
2016-01-01
Hybrids are broadly used in plant breeding, and accurate estimation of variance components is crucial for optimizing genetic gain. Genome-wide information may be used to explore models designed to assess the extent of additive and non-additive variance and to test their prediction accuracy for genomic selection. Ten linear mixed models, involving pedigree- and marker-based relationship matrices among parents, were developed to estimate additive (A), dominance (D) and epistatic (AA, AD and DD) effects. Five complementary models, involving the gametic phase to estimate marker-based relationships among hybrid progenies, were developed to assess the same effects. The models were compared using tree height and 3303 single-nucleotide polymorphism markers from 1130 cloned individuals obtained via controlled crosses of 13 Eucalyptus urophylla females with 9 Eucalyptus grandis males. Akaike information criterion (AIC), variance ratios, asymptotic correlation matrices of estimates, goodness-of-fit, prediction accuracy and mean square error (MSE) were used for the comparisons. The variance components and variance ratios differed according to the model. Models with a parent marker-based relationship matrix performed better than those that were pedigree-based, that is, an absence of singularities, lower AIC, higher goodness-of-fit and accuracy and smaller MSE. However, AD and DD variances were estimated with high standard errors. Using the same criteria, progeny gametic phase-based models performed better in fitting the observations and predicting genetic values. However, DD variance could not be separated from the dominance variance, and null estimates were obtained for AA and AD effects. This study highlighted the advantages of progeny models using genome-wide information. PMID:26328760
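As a sketch of the marker-based ingredients such models require, the additive matrix below follows VanRaden's widely used construction and the dominance matrix one common parameterization (e.g., that of Vitezica and colleagues); the paper's exact scalings may differ, and the function is illustrative only.

```python
import numpy as np

def relationship_matrices(M, p):
    """Additive (G) and dominance (D) genomic relationship matrices.

    M : (n_individuals, n_snps) genotypes coded 0/1/2 copies of the allele
        whose frequency is p
    p : (n_snps,) allele frequencies
    """
    q = 1.0 - p
    Z = M - 2.0 * p                                   # centered additive codes
    G = Z @ Z.T / (2.0 * p * q).sum()

    # dominance codes: -2q^2, 2pq, -2p^2 for genotypes 2, 1, 0
    W = np.where(M == 2, -2.0 * q**2,
                 np.where(M == 1, 2.0 * p * q, -2.0 * p**2))
    D = W @ W.T / (4.0 * p**2 * q**2).sum()
    return G, D
```

G and D then enter the mixed model as covariance structures of the additive and dominance random effects, with the variance components estimated by REML as described above.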
Feynman-Kac equations for reaction and diffusion processes
NASA Astrophysics Data System (ADS)
Hou, Ru; Deng, Weihua
2018-04-01
This paper provides a theoretical framework for deriving the forward and backward Feynman-Kac equations for the distribution of functionals of the path of a particle undergoing both diffusion and reaction processes. Once given the diffusion type and reaction rate, a specific forward or backward Feynman-Kac equation can be obtained. The results in this paper include those for normal/anomalous diffusions and reactions with linear/nonlinear rates. Using the derived equations, we apply our findings to compute some physical (experimentally measurable) statistics, including the occupation time in half-space, the first passage time, and the occupation time in half-interval with an absorbing or reflecting boundary, for the physical system with anomalous diffusion and spontaneous evanescence.
From Loops to Trees By-passing Feynman's Theorem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Catani, Stefano; Gleisberg, Tanju; Krauss, Frank
2008-04-22
We derive a duality relation between one-loop integrals and phase-space integrals emerging from them through single cuts. The duality relation is realized by a modification of the customary +i0 prescription of the Feynman propagators. The new prescription regularizing the propagators, which we write in a Lorentz covariant form, compensates for the absence of multiple-cut contributions that appear in the Feynman Tree Theorem. The duality relation can be applied to generic one-loop quantities in any relativistic, local and unitary field theory. It is suitable for applications to the analytical calculation of one-loop scattering amplitudes, and to the numerical evaluation of cross-sections at next-to-leading order.
General consequences of the violated Feynman scaling
NASA Technical Reports Server (NTRS)
Kamberov, G.; Popova, L.
1985-01-01
The problem of scaling of the hadronic production cross sections represents an outstanding question in high energy physics, especially for the interpretation of cosmic ray data. A comprehensive analysis of the accelerator data leads to the conclusion that Feynman scaling is broken. It was proposed that the Lorentz-invariant inclusive cross sections for secondaries of a given type approach a constant with respect to a broken-scaling variable x_s. Thus, the differential cross sections measured at accelerator energies can be extrapolated to higher cosmic ray energies. This assumption leads to some important consequences. The distribution of secondary multiplicity that follows from the violated Feynman scaling, using a method similar to that of Koba et al., is discussed.
Engelmann Spruce Site Index Models: A Comparison of Model Functions and Parameterizations
Nigh, Gordon
2015-01-01
Engelmann spruce (Picea engelmannii Parry ex Engelm.) is a high-elevation species found in western Canada and the western USA. As this species becomes increasingly targeted for harvesting, better height growth information is required for good management of this species. This project was initiated to fill this need. The objective of the project was threefold: develop a site index model for Engelmann spruce; compare the fits and the modelling and application issues between three model formulations and four parameterizations; and more closely examine the grounded-Generalized Algebraic Difference Approach (g-GADA) model parameterization. The model fitting data consisted of 84 stem-analyzed Engelmann spruce site trees sampled across the Engelmann Spruce – Subalpine Fir biogeoclimatic zone. The fitted models were based on the Chapman-Richards function, a modified Hossfeld IV function, and the Schumacher function. The model parameterizations that were tested are indicator variables, mixed-effects, GADA, and g-GADA. Model evaluation was based on the finite-sample corrected version of Akaike's Information Criterion and the estimated variance. Model parameterization had more of an influence on the fit than did model formulation, with the indicator variable method providing the best fit, followed by mixed-effects modelling (9% increase in the variance for the Chapman-Richards and Schumacher formulations over the indicator variable parameterization), g-GADA (optimal approach) (335% increase in the variance), and the GADA/g-GADA (with the GADA parameterization) (346% increase in the variance). Factors related to the application of the model must be considered when selecting the model for use, as the best fitting methods have the most barriers to their application in terms of data and software requirements. PMID:25853472
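For orientation, the base Chapman-Richards height-age form is

$$ H(t) = a\left(1 - e^{-bt}\right)^{c}, $$

with H the height, t the age, a the asymptote, b the rate, and c the shape parameter (generic names, not the paper's notation). The modified Hossfeld IV and Schumacher formulations play the same role with different functional shapes, and the four parameterizations differ in how site-specific parameters are introduced into whichever base function is chosen.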
The Pleasure of Finding Things out
ERIC Educational Resources Information Center
Loxley, Peter
2005-01-01
"The pleasure of finding things out" is a collection of short works by the Nobel Prize winning scientist Richard Feynman. The book provides insights into his infectious enthusiasm for science and his love of sharing ideas about the subject with anyone who wanted to listen. Feynman has been widely acknowledged as one of the greatest physicists of…
The static hard-loop gluon propagator to all orders in anisotropy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nopoush, Mohammad; Guo, Yun; Strickland, Michael
We calculate the (semi-)static hard-loop self-energy and propagator using the Keldysh formalism in a momentum-space anisotropic quark-gluon plasma. The static retarded, advanced, and Feynman (symmetric) self-energies and propagators are calculated to all orders in the momentum-space anisotropy parameter ξ. For the retarded and advanced self-energies/propagators, we present a concise derivation and comparison with previously obtained results and extend the calculation of the self-energies to next-to-leading order in the gluon energy, ω. For the Feynman self-energy/propagator, we present new results which are accurate to all orders in ξ. We compare our exact results with prior expressions for the Feynman self-energy/propagator which were obtained using Taylor expansions around an isotropic state. Here, we show that, unlike the Taylor-expanded results, the all-orders expression for the Feynman propagator is free from infrared singularities. Finally, we discuss the application of our results to the calculation of the imaginary part of the heavy-quark potential in an anisotropic quark-gluon plasma.
Feynman-like rules for calculating n-point correlators of the primordial curvature perturbation
NASA Astrophysics Data System (ADS)
Valenzuela-Toledo, César A.; Rodríguez, Yeinzon; Beltrán Almeida, Juan P.
2011-10-01
A diagrammatic approach to calculate n-point correlators of the primordial curvature perturbation ζ was developed a few years ago following the spirit of the Feynman rules in quantum field theory. The methodology is very useful and time-saving, as it is for the case of the Feynman rules in the particle physics context, but, unfortunately, it is not very well known by the cosmology community. In the present work, we extend this approach to include not only scalar field perturbations as the generators of ζ, but also vector field perturbations. The purpose is twofold: first, we would like the diagrammatic approach (which we call the Feynman-like rules) to become widespread among the cosmology community; second, we intend to give an easy tool for formulating any correlator of ζ in those cases that involve vector field perturbations and that, therefore, may generate prolonged stages of anisotropic expansion and/or important levels of statistical anisotropy. Indeed, the usual way of formulating such correlators, using Wick's theorem, may become very cluttered and time-consuming.
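The starting point for such diagrammatic rules is the δN expansion of ζ in the field perturbations (a standard form; here the indices run over both scalar and, in this paper's extension, vector field components):

$$ \zeta = N_{a}\,\delta\phi^{a} + \tfrac{1}{2}N_{ab}\,\delta\phi^{a}\delta\phi^{b} + \cdots, $$

so that an n-point correlator of ζ becomes a sum of contractions of the δφ correlators. The Feynman-like rules are bookkeeping for exactly those contractions, which is why they bypass the explicit Wick-theorem combinatorics mentioned above.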
Feynman formulas for semigroups generated by an iterated Laplace operator
NASA Astrophysics Data System (ADS)
Buzinov, M. S.
2017-04-01
In the present paper, we find representations of a one-parameter semigroup generated by a finite sum of iterated Laplace operators and an additive perturbation (the potential). Such semigroups, and the evolution equations corresponding to them, find applications in the fields of physics, chemistry, biology, and pattern recognition. The representations mentioned above are obtained in the form of Feynman formulas, i.e., in the form of a limit of multiple integrals as the multiplicity tends to infinity. The term "Feynman formula" was proposed by Smolyanov. Smolyanov's approach uses Chernoff's theorems. The simple form of the representations thus obtained enables one to use them for numerical modeling of the dynamics of the evolution system, as a method for the approximation of solutions of equations. The problems considered in this note can also be treated using the approach suggested by Remizov (see also the monograph of Smolyanov and Shavgulidze on path integrals). The representations (of semigroups) obtained in this way are more complicated than those given by the Feynman formulas; however, it is possible to bypass some analytical difficulties.
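For orientation, the construction behind such formulas is Chernoff's theorem (stated schematically, under suitable conditions on the operator family): if F(t) is strongly continuous with F(0) = I and F'(0) = L, then

$$ e^{tL} f = \lim_{n\to\infty}\left[F(t/n)\right]^{n} f. $$

When F(t) is an integral operator, the n-fold composition on the right-hand side is an n-fold multiple integral, and the limit of such multiple integrals as n → ∞ is precisely a Feynman formula in Smolyanov's sense.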
Time-ordered product expansions for computational stochastic system biology.
Mjolsness, Eric
2013-06-01
The time-ordered product framework of quantum field theory can also be used to understand salient phenomena in stochastic biochemical networks. It is used here to derive Gillespie's stochastic simulation algorithm (SSA) for chemical reaction networks; consequently, the SSA can be interpreted in terms of Feynman diagrams. It is also used here to derive other, more general simulation and parameter-learning algorithms including simulation algorithms for networks of stochastic reaction-like processes operating on parameterized objects, and also hybrid stochastic reaction/differential equation models in which systems of ordinary differential equations evolve the parameters of objects that can also undergo stochastic reactions. Thus, the time-ordered product expansion can be used systematically to derive simulation and parameter-fitting algorithms for stochastic systems.
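Since the abstract's first application is the SSA itself, a minimal direct-method implementation may help fix ideas (a generic sketch, not the paper's code; names are made up):

```python
import numpy as np

def gillespie(x0, stoich, rates, propensity, t_end, rng=None):
    """Minimal Gillespie direct-method SSA.

    x0         : initial copy numbers, shape (n_species,)
    stoich     : state-change vectors, shape (n_reactions, n_species)
    rates      : rate constants passed through to `propensity`
    propensity : function (x, rates) -> per-reaction propensities
    """
    rng = rng or np.random.default_rng()
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = propensity(x, rates)
        a0 = a.sum()
        if a0 <= 0:                              # no reaction can fire
            break
        t += rng.exponential(1.0 / a0)           # exponential waiting time
        j = rng.choice(len(a), p=a / a0)         # which reaction fires
        x += stoich[j]
        times.append(t); states.append(x.copy())
    return np.array(times), np.array(states)

# Example: isomerization A <-> B with rates k1, k2
stoich = np.array([[-1, 1], [1, -1]])
prop = lambda x, k: np.array([k[0] * x[0], k[1] * x[1]])
ts, xs = gillespie([100, 0], stoich, [1.0, 0.5], t_end=5.0)
```

In the time-ordered product picture, each iteration of this loop contributes one vertex to a Feynman-diagram-like history, which is what licenses the diagrammatic interpretation mentioned above.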
NASA Astrophysics Data System (ADS)
Sasamal, Trailokya Nath; Singh, Ashutosh Kumar; Ghanekar, Umesh
2018-04-01
Nanotechnologies, notably Quantum-dot Cellular Automata (QCA), offer an attractive perspective for future computing technologies. In this paper, QCA is investigated as an implementation method for designing area- and power-efficient reversible logic gates. The proposed designs achieve superior performance by incorporating a compact 2-input XOR gate. The proposed designs for the Feynman, Toffoli, and Fredkin gates demonstrate 28.12%, 24.4%, and 7% reductions in cell count and utilize 46%, 24.4%, and 7.6% less area, respectively, over the previous best designs. The cell counts (area coverage) of the proposed Peres gate and Double Feynman gate are 44.32% (21.5%) and 12% (25%) less, respectively, than the most compact previous designs. Further, the delay of the Fredkin and Toffoli gates is 0.75 clock cycles, equal to the delay of the previous best designs, while the Feynman and Double Feynman gates achieve a delay of 0.5 clock cycles, matching the lowest previously reported delay. Energy analysis confirms that the average energy dissipation of the developed Feynman, Toffoli, and Fredkin gates is 30.80%, 18.08%, and 4.3% less, respectively (at the 1.0 E_k energy level), compared to the best reported designs. This emphasizes the beneficial role of the proposed reversible gates in designing complex and power-efficient QCA circuits. The QCADesigner tool is used to validate the layouts of the proposed designs, and the QCAPro tool is used to evaluate the energy dissipation.
Merecz, Dorota; Andysz, Aleksandra
2012-06-01
The Person-Environment fit (P-E fit) paradigm seems to be especially useful in explaining phenomena related to work attitudes and occupational health. The study explores the relationship between a specific facet of P-E fit, Person-Organization fit (P-O fit), and health. Research was conducted on a random sample of 600 employees. The Person-Organization Fit Questionnaire was used to assess the level of Person-Organization fit; mental health status was measured by the General Health Questionnaire (GHQ-28); and items from the Work Ability Index allowed for evaluation of somatic health. Data were analyzed using nonparametric statistical tests. The predictive value of P-O fit for various aspects of health was checked by means of linear regression models. A comparison between the groups distinguished on the basis of their somatic and mental health indicators showed significant differences in the level of overall P-O fit (χ² = 23.178; p < 0.001) and its subdimensions: complementary fit (χ² = 29.272; p < 0.001), supplementary fit (χ² = 23.059; p < 0.001), and identification with the organization (χ² = 8.688; p = 0.034). From the perspective of mental health, supplementary P-O fit seems to be important for men's well-being and explains almost 9% of the variance in GHQ-28 scores, while in women, complementary fit (5% of explained variance in women's GHQ scores) and identification with the organization (1% of explained variance in GHQ scores) are significant predictors of mental well-being. Interestingly, better supplementary and complementary fit are related to better mental health, but stronger identification with the organization in women produces an adverse effect on their mental health. The results show that obtaining an optimal level of P-O fit can be beneficial not only for the organization (e.g., lower turnover, better work effectiveness and commitment), but also for the employees themselves. An optimal level of P-O fit can be considered a factor maintaining workers' health. However, prospective research is needed to confirm the results obtained in this exploratory study.
Allen, Scott L; McGuigan, Katrina; Connallon, Tim; Blows, Mark W; Chenoweth, Stephen F
2017-10-01
A proposed benefit to sexual selection is that it promotes purging of deleterious mutations from populations. For this benefit to be realized, sexual selection, which is usually stronger on males, must purge mutations deleterious to both sexes. Here, we experimentally test the hypothesis that sexual selection on males purges deleterious mutations that affect both male and female fitness. We measured male and female fitness in two panels of spontaneous mutation-accumulation lines of the fly Drosophila serrata, each established from a common ancestor. One panel of mutation accumulation lines limited both natural and sexual selection (LS lines), whereas the other panel limited natural selection but allowed sexual selection to operate (SS lines). Although mutation accumulation caused a significant reduction in male and female fitness in both the LS and SS lines, sexual selection had no detectable effect on the extent of the fitness reduction. Similarly, despite evidence of mutational variance for fitness in males and females of both treatments, sexual selection had no significant impact on the amount of mutational genetic variance for fitness. However, sexual selection did reshape the between-sex correlation for fitness: significantly strengthening it in the SS lines. After 25 generations, the between-sex correlation for fitness was positive but considerably less than one in the LS lines, suggesting that, although most mutations had sexually concordant fitness effects, sex-limited and/or sex-biased mutations contributed substantially to the mutational variance. In the SS lines this correlation was strong and could not be distinguished from unity. Individual-based simulations that mimic the experimental setup reveal two conditions that may drive our results: (1) a modest-to-large fraction of mutations have sex-limited (or highly sex-biased) fitness effects, and (2) the average fitness effect of sex-limited mutations is larger than the average fitness effect of mutations that affect both sexes similarly. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
On the Presentation of Wave Phenomena of Electrons with the Young-Feynman Experiment
ERIC Educational Resources Information Center
Matteucci, Giorgio
2011-01-01
The Young-Feynman two-hole interferometer is widely used to present electron wave-particle duality and, in particular, the buildup of interference fringes with single electrons. The teaching approach consists of two steps: (i) electrons come through only one hole but diffraction effects are disregarded and (ii) electrons come through both holes…
Teaching Electron--Positron--Photon Interactions with Hands-on Feynman Diagrams
ERIC Educational Resources Information Center
Kontokostas, George; Kalkanis, George
2013-01-01
Feynman diagrams are introduced in many physics textbooks, such as those by Alonso and Finn and Serway, and their use in physics education has been discussed by various authors. They have an appealing simplicity and can give insight into events in the microworld. Yet students often do not understand their significance and often cannot combine the…
ERIC Educational Resources Information Center
Pascolini, A.; Pietroni, M.
2002-01-01
We report on an educational project in particle physics based on Feynman diagrams. By dropping the mathematical aspect of the method and keeping just the iconic one, it is possible to convey many different concepts from the world of elementary particles, such as antimatter, conservation laws, particle creation and destruction, real and virtual…
Solving differential equations for Feynman integrals by expansions near singular points
NASA Astrophysics Data System (ADS)
Lee, Roman N.; Smirnov, Alexander V.; Smirnov, Vladimir A.
2018-03-01
We describe a strategy to solve differential equations for Feynman integrals by power series expansions near singular points and to obtain high-precision results for the corresponding master integrals. We consider Feynman integrals with two scales, i.e. non-trivially depending on one variable. The corresponding algorithm is oriented toward situations where a canonical form of the differential equations is impossible. We provide a computer code constructed with the help of our algorithm for a simple example of four-loop generalized sunset integrals with three equal non-zero masses and two zero masses. Our code gives values of the master integrals at any given point on the real axis with a required accuracy and a given order of expansion in the regularization parameter ɛ.
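Schematically, the strategy applies to a first-order system in the kinematic variable x, solved near each singular point x₀ by generalized power series (the form below is indicative; the actual exponents and logarithm powers are dictated by the local eigenstructure of the system):

$$ \partial_{x}J(x,\epsilon) = M(x,\epsilon)\,J(x,\epsilon), \qquad J(x,\epsilon) = \sum_{j}\epsilon^{j}\sum_{k,l} c_{jkl}\,(x-x_{0})^{k/2}\log^{l}(x-x_{0}), $$

with the series matched on overlapping domains between neighbouring singular points, so that boundary values are transported along the real axis to any desired point and accuracy.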
Connallon, Tim; Clark, Andrew G.
2012-01-01
Antagonistically selected alleles -- those with opposing fitness effects between sexes, environments, or fitness components -- represent an important component of additive genetic variance in fitness-related traits, with stably balanced polymorphisms often hypothesized to contribute to observed quantitative genetic variation. Balancing selection hypotheses imply that intermediate-frequency alleles disproportionately contribute to genetic variance of life history traits and fitness. Such alleles may also associate with population genetic footprints of recent selection, including reduced genetic diversity and inflated linkage disequilibrium at linked, neutral sites. Here, we compare the evolutionary dynamics of different balancing selection models, and characterize the evolutionary timescale and hitchhiking effects of partial selective sweeps generated under antagonistic versus non-antagonistic (e.g., overdominant and frequency-dependent selection) processes. We show that that the evolutionary timescales of partial sweeps tend to be much longer, and hitchhiking effects are drastically weaker, under scenarios of antagonistic selection. These results predict an interesting mismatch between molecular population genetic and quantitative genetic patterns of variation. Balanced, antagonistically selected alleles are expected to contribute more to additive genetic variance for fitness than alleles maintained by classic, non-antagonistic mechanisms. Nevertheless, classical mechanisms of balancing selection are much more likely to generate strong population genetic signatures of recent balancing selection. PMID:23461340
Wheelwright, Nathaniel T; Keller, Lukas F; Postma, Erik
2014-11-01
The heritability (h²) of fitness traits is often low. Although this has been attributed to directional selection having eroded genetic variation in direct proportion to the strength of selection, heritability does not necessarily reflect a trait's additive genetic variance and evolutionary potential ("evolvability"). Recent studies suggest that the low h² of fitness traits in wild populations is caused not by a paucity of additive genetic variance (V_A) but by greater environmental or nonadditive genetic variance (V_R). We examined the relationship between h² and variance-standardized selection intensities (i or β_σ), and between evolvability (I_A: V_A divided by the squared phenotypic trait mean) and mean-standardized selection gradients (β_μ). Using 24 years of data from an island population of Savannah sparrows, we show that, across diverse traits, h² declines with the strength of selection, whereas I_A and I_R (V_R divided by the squared trait mean) are independent of the strength of selection. Within trait types (morphological, reproductive, life-history), h², I_A, and I_R are all independent of the strength of selection. This indicates that certain traits have low heritability because of increased residual variance due to the age at which they are expressed or the multiple factors influencing their expression, rather than their association with fitness. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.
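The distinction the authors draw can be made explicit with the standard quantitative-genetics definitions (not notation specific to this paper):

$$ h^{2} = \frac{V_{A}}{V_{A} + V_{R}}, \qquad I_{A} = \frac{V_{A}}{\bar{z}^{\,2}}, \qquad I_{R} = \frac{V_{R}}{\bar{z}^{\,2}}, $$

where z̄ is the phenotypic trait mean. A large V_R depresses h² without touching I_A, which is why a strongly selected trait can show low heritability while retaining substantial evolvability.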
ERIC Educational Resources Information Center
Onorato, P.
2011-01-01
An introduction to quantum mechanics based on the sum-over-paths (SOP) method originated by Richard P. Feynman and developed by E. F. Taylor and coworkers is presented. The Einstein-Brillouin-Keller (EBK) semiclassical quantization rules are obtained following the SOP approach for bounded systems, and a general approach to the calculation of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flego, S.P.; Plastino, A.; Universitat de les Illes Balears and IFISC-CSIC, 07122 Palma de Mallorca
We explore intriguing links connecting Hellmann-Feynman's theorem to a thermodynamic information-optimizing principle based on Fisher's information measure. Highlights: We link a purely quantum mechanical result, the Hellmann-Feynman theorem, with Jaynes' information-theoretical reciprocity relations. These relations involve the coefficients of a series expansion of the potential function. We suggest the existence of a Legendre transform structure behind Schrödinger's equation, akin to the one characterizing thermodynamics.
Feynman amplitudes and limits of heights
NASA Astrophysics Data System (ADS)
Amini, O.; Bloch, S. J.; Burgos Gil, J. I.; Fresán, J.
2016-10-01
We investigate from a mathematical perspective how Feynman amplitudes appear in the low-energy limit of string amplitudes. In this paper, we prove the convergence of the integrands. We derive this from results describing the asymptotic behaviour of the height pairing between degree-zero divisors, as a family of curves degenerates. These are obtained by means of the nilpotent orbit theorem in Hodge theory.
ERIC Educational Resources Information Center
Field, J. H.
2011-01-01
It is shown how the time-dependent Schrödinger equation may be simply derived from the dynamical postulate of Feynman's path integral formulation of quantum mechanics and the Hamilton-Jacobi equation of classical mechanics. Schrödinger's own published derivations of quantum wave equations, the first of which was also based on the Hamilton-Jacobi…
Importance sampling studies of helium using the Feynman-Kac path integral method
NASA Astrophysics Data System (ADS)
Datta, S.; Rejcek, J. M.
2018-05-01
In the Feynman-Kac path integral approach the eigenvalues of a quantum system can be computed using the Wiener measure, which is built on Brownian particle motion. In our previous work on such systems we have observed that the Wiener process numerically converges slowly for dimensions greater than two because almost all trajectories escape to infinity. One can speed up this process by using a generalized Feynman-Kac (GFK) method, in which the new measure associated with the trial function is stationary, so that the convergence rate becomes much faster. We thus achieve an example of "importance sampling" and, in the present work, we apply it to the Feynman-Kac (FK) path integrals for the ground and first few excited-state energies for He to speed up the convergence rate. We calculate the path integrals using space averaging rather than the time averaging used in the past. The best previous calculations from variational computations report precisions of 10⁻¹⁶ Hartrees, whereas in most cases our path integral results obtained for the ground and first excited states of He are lower than these results by about 10⁻⁶ Hartrees or more.
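The core of the FK estimator is compact enough to sketch; the Python below is a minimal, unoptimized one-dimensional illustration (harmonic oscillator, plain Wiener sampling without importance sampling), not the GFK scheme or the helium calculation of the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    dt, T, n_paths = 0.01, 10.0, 20000
    n_steps = int(T / dt)

    V = lambda x: 0.5 * x**2          # harmonic oscillator potential, exact E0 = 0.5
    x = np.zeros(n_paths)             # Brownian paths started at the origin
    S = np.zeros(n_paths)             # accumulated integral of V along each path

    for _ in range(n_steps):
        x += np.sqrt(dt) * rng.standard_normal(n_paths)
        S += V(x) * dt

    # Feynman-Kac: E[exp(-int V(B_s) ds)] ~ exp(-E0*T) for large T.
    E0 = -np.log(np.exp(-S).mean()) / T
    print(f"estimated E0 = {E0:.3f} (exact 0.5)")

Because the bare Wiener paths wander away from the region where the ground state lives, the variance of exp(-S) grows with T and dimension, which is exactly why the stationary GFK measure pays off.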
Coupled oscillators and Feynman's three papers
NASA Astrophysics Data System (ADS)
Kim, Y. S.
2007-05-01
According to Richard Feynman, the adventure of our science of physics is a perpetual attempt to recognize that the different aspects of nature are really different aspects of the same thing. It is therefore interesting to combine some, if not all, of Feynman's papers into one. The first of his three papers is on the "rest of the universe" contained in his 1972 book on statistical mechanics. The second idea is Feynman's parton picture which he presented in 1969 at the Stony Brook conference on high-energy physics. The third idea is contained in the 1971 paper he published with his students, where they show that the hadronic spectra on Regge trajectories are manifestations of harmonic-oscillator degeneracies. In this report, we formulate these three ideas using the mathematics of two coupled oscillators. It is shown that the idea of entanglement is contained in his rest of the universe, and can be extended to a space-time entanglement. It is shown also that his parton model and the static quark model can be combined into one Lorentz-covariant entity. Furthermore, Einstein's special relativity, based on the Lorentz group, can also be formulated within the mathematical framework of two coupled oscillators.
NASA Astrophysics Data System (ADS)
Yang, Fanlin; Zhao, Chunxia; Zhang, Kai; Feng, Chengkai; Ma, Yue
2017-07-01
Acoustic seafloor classification with multibeam backscatter measurements is an attractive approach for mapping seafloor properties over a large area. However, artifacts in the multibeam backscatter measurements prevent accurate characterization of the seafloor. In particular, the backscatter level is extremely strong and highly variable in the near-nadir region due to the specular echo phenomenon. Consequently, striped artifacts emerge in the backscatter image, which can degrade the classification accuracy. This study focuses on the striped artifacts in multibeam backscatter images. To this end, a calibration algorithm based on equal mean-variance fitting is developed. By fitting the local shape of the angular response curve, the striped artifacts are compressed and shifted according to the relations between the mean and variance in the near-nadir and off-nadir regions. The algorithm utilizes the measured data of the near-nadir region and retains the basic shape of the response curve. The experimental results verify the high performance of the proposed method.
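The published fitting procedure is not reproduced here, but the core idea of tying near-nadir statistics to an off-nadir reference can be sketched; the Python below is a simplified moment-matching stand-in with invented dB levels, not the authors' algorithm:

    import numpy as np

    def match_mean_variance(near, off):
        # Rescale near-nadir backscatter samples so their mean and variance
        # match an off-nadir reference region (a crude stand-in for the
        # equal mean-variance fitting described above).
        near = np.asarray(near, dtype=float)
        scale = np.std(off) / np.std(near)
        return (near - near.mean()) * scale + np.mean(off)

    rng = np.random.default_rng(1)
    near_nadir = rng.normal(-12.0, 6.0, 500)   # dB, strong, variable specular echo
    off_nadir = rng.normal(-30.0, 2.0, 500)    # dB, stable oblique returns
    corrected = match_mean_variance(near_nadir, off_nadir)
    print(corrected.mean().round(1), corrected.std().round(1))   # ~ -30.0, ~2.0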
NASA Astrophysics Data System (ADS)
Wang, Xiu-Xia
2016-02-01
By employing the generalized Hellmann-Feynman theorem, the quantization of a mesoscopic complicated coupling circuit is proposed. The ensemble average energy, the energy fluctuation and the energy distribution are investigated at finite temperature. It is shown that the generalized Hellmann-Feynman theorem plays the key role in quantizing a mesoscopic complicated coupling circuit at finite temperature, and that when the temperature is lower than a specific temperature, the value of (ΔĤ)² is almost zero and the values of…
Monogamy has a fixation advantage based on fitness variance in an ideal promiscuity group.
Garay, József; Móri, Tamás F
2012-11-01
We consider an ideal promiscuity group of females, which implies that all males have the same average mating success. If females have concealed ovulation, then the males' paternity chances are equal. We find that male-based monogamy will be fixed in the females' promiscuity group when the stochastic Darwinian selection is described by a Markov chain. We point out that in huge populations the relative advantage (the difference between the average fitness of different strategies) primarily determines the end of evolution; in the case of neutrality (means are equal), the smallest variance guarantees a fixation (absorption) advantage; when the means and variances are the same, the higher third moment determines which type will be fixed in the Markov chain.
Gao, Zan
2008-10-01
This study investigated the predictive strength of perceived competence and enjoyment on students' physical activity and cardiorespiratory fitness in physical education classes. Participants (N = 307; 101 in Grade 6, 96 in Grade 7, 110 in Grade 8; 149 boys, 158 girls) responded to questionnaires assessing perceived competence and enjoyment of physical education, then their cardiorespiratory fitness was assessed on the Progressive Aerobic Cardiovascular Endurance Run (PACER) test. Physical activity in one class was estimated via pedometers. Regression analyses showed that enjoyment (R² = 16.5%) and perceived competence (R² = 4.2%) together accounted for a significant but modest 20.7% of the variance in physical activity, and that perceived competence was the only significant contributor to cardiorespiratory fitness performance (R² = 19.3%); this leaves roughly 80% of the variance unaccounted for. Some educational implications and areas for research are mentioned.
Generalizations of polylogarithms for Feynman integrals
NASA Astrophysics Data System (ADS)
Bogner, Christian
2016-10-01
In this talk, we discuss recent progress in the application of generalizations of polylogarithms in the symbolic computation of multi-loop integrals. We briefly review the Maple program MPL, which supports a certain approach for the computation of Feynman integrals in terms of multiple polylogarithms. Furthermore, we discuss elliptic generalizations of polylogarithms which have been shown to be useful in the computation of the massive two-loop sunrise integral.
ERIC Educational Resources Information Center
Fanaro, Maria de los Angeles; Arlego, Marcelo; Otero, Maria Rita
2012-01-01
This work comprises an investigation about basic Quantum Mechanics (QM) teaching in the high school. The organization of the concepts does not follow a historical line. The Path Integrals method of Feynman has been adopted as a Reference Conceptual Structure that is an alternative to the canonical formalism. We have designed a didactic sequence…
Feynman-Kac equation for anomalous processes with space- and time-dependent forces
NASA Astrophysics Data System (ADS)
Cairoli, Andrea; Baule, Adrian
2017-04-01
Functionals of a stochastic process Y(t) model many physical time-extensive observables, for instance particle positions, local and occupation times or accumulated mechanical work. When Y(t) is a normal diffusive process, their statistics are obtained as the solution of the celebrated Feynman-Kac equation. This equation provides the crucial link between the expected values of diffusion processes and the solutions of deterministic second-order partial differential equations. When Y(t) is non-Brownian, e.g. an anomalous diffusive process, generalizations of the Feynman-Kac equation that incorporate power-law or more general waiting time distributions of the underlying random walk have recently been derived. A general representation of such waiting times is provided in terms of a Lévy process whose Laplace exponent is directly related to the memory kernel appearing in the generalized Feynman-Kac equation. The corresponding anomalous processes have been shown to capture nonlinear mean square displacements exhibiting crossovers between different scaling regimes, which have been observed in numerous experiments on biological systems like migrating cells or diffusing macromolecules in intracellular environments. However, the case where both space- and time-dependent forces drive the dynamics of the generalized anomalous process has not been solved yet. Here, we present the missing derivation of the Feynman-Kac equation in this general case by using the subordination technique. Furthermore, we discuss its extension to functionals explicitly depending on time, which are of particular relevance for the stochastic thermodynamics of anomalous diffusive systems. Exact results on the work fluctuations of a simple non-equilibrium model are obtained. An additional aim of this paper is to provide a pedagogical introduction to Lévy processes, semimartingales and their associated stochastic calculus, which underlie the mathematical formulation of anomalous diffusion as a subordinated process.
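As a concrete anchor for the Brownian baseline, the occupation time ∫₀ᵗ 1{Y(s)>0} ds is a classic Feynman-Kac functional whose distribution (Lévy's arcsine law) is known in closed form; the Python below is a quick Monte Carlo check of that limit, a toy illustration rather than anything from the paper:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n_paths, n_steps, dt = 5000, 1000, 1e-3
    steps = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    paths = np.cumsum(steps, axis=1)

    occ_frac = (paths > 0.0).mean(axis=1)   # fraction of time spent above zero

    # Levy's arcsine law: occ_frac ~ Beta(1/2, 1/2).
    print((occ_frac < 0.25).mean())          # empirical, ~0.33
    print(stats.beta(0.5, 0.5).cdf(0.25))    # exact, 1/3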
Minimum variance geographic sampling
NASA Technical Reports Server (NTRS)
Terrell, G. R. (Principal Investigator)
1980-01-01
Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to construct a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.
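A minimal version of such an estimator can be written down directly: under an assumed exponential correlation-over-distance model, the minimum variance unbiased (generalized least squares) estimate of the mean down-weights clustered samples. The Python below uses invented locations and a guessed range parameter:

    import numpy as np

    loc = np.array([0.0, 1.0, 1.5, 5.0, 9.0])       # hypothetical positions (km)
    y = np.array([10.2, 10.8, 10.9, 9.5, 10.1])     # observations

    dist = np.abs(loc[:, None] - loc[None, :])
    Sigma = np.exp(-dist / 2.0)                     # exponential correlation model

    # BLUE of the mean: mu_hat = (1' Sigma^-1 y) / (1' Sigma^-1 1);
    # tightly clustered points share weight instead of counting twice.
    one = np.ones_like(y)
    w = np.linalg.solve(Sigma, one)
    mu_hat = w @ y / (w @ one)
    var_mu = 1.0 / (w @ one)
    print(mu_hat, var_mu)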
Supermanifolds from Feynman graphs
NASA Astrophysics Data System (ADS)
Marcolli, Matilde; Rej, Abhijnan
2008-08-01
We generalize the computation of Feynman integrals of log divergent graphs in terms of the Kirchhoff polynomial to the case of graphs with both fermionic and bosonic edges, to which we assign a set of ordinary and Grassmann variables. This procedure gives a computation of the Feynman integrals in terms of a period on a supermanifold, for graphs admitting a basis of the first homology satisfying a condition generalizing the log divergence in this context. The analog in this setting of the graph hypersurfaces is a graph supermanifold given by the divisor of zeros and poles of the Berezinian of a matrix associated with the graph, inside a superprojective space. We introduce a Grothendieck group for supermanifolds and identify the subgroup generated by the graph supermanifolds. This can be seen as a general procedure for constructing interesting classes of supermanifolds with associated periods.
New graph polynomials in parametric QED Feynman integrals
NASA Astrophysics Data System (ADS)
Golz, Marcel
2017-10-01
In recent years enormous progress has been made in perturbative quantum field theory by applying methods of algebraic geometry to parametric Feynman integrals for scalar theories. The transition to gauge theories is complicated not only by the fact that their parametric integrand is much larger and more involved, but also because it is only implicitly given as the result of certain differential operators applied to the scalar integrand exp(-Φ_Γ/Ψ_Γ), where Ψ_Γ and Φ_Γ are the Kirchhoff and Symanzik polynomials of the Feynman graph Γ. In the case of quantum electrodynamics we find that the full parametric integrand inherits a rich combinatorial structure from Ψ_Γ and Φ_Γ. In the end, it can be expressed explicitly as a sum over products of new types of graph polynomials which have a combinatoric interpretation via simple cycle subgraphs of Γ.
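For readers unfamiliar with Ψ_Γ: it is the sum over spanning trees T of Γ of the product of Schwinger parameters x_e over the edges not in T. A brute-force Python sketch (fine only for very small graphs, and not the machinery of the paper) is:

    import itertools
    import sympy as sp

    def kirchhoff_polynomial(n_vertices, edges):
        # First Symanzik (Kirchhoff) polynomial: sum over spanning trees of the
        # product of Schwinger parameters of the edges NOT in the tree.
        # Brute force over (n-1)-edge subsets; practical for small graphs only.
        x = sp.symbols(f"x1:{len(edges) + 1}")
        psi = 0
        for tree in itertools.combinations(range(len(edges)), n_vertices - 1):
            parent = list(range(n_vertices))     # union-find cycle check
            def find(v):
                while parent[v] != v:
                    v = parent[v]
                return v
            acyclic = True
            for i in tree:
                a, b = find(edges[i][0]), find(edges[i][1])
                if a == b:                        # this edge would close a cycle
                    acyclic = False
                    break
                parent[a] = b
            if acyclic:                           # n-1 acyclic edges = spanning tree
                psi += sp.prod([x[i] for i in range(len(edges)) if i not in tree])
        return psi

    # One-loop bubble: two vertices joined by two edges -> psi = x1 + x2.
    print(kirchhoff_polynomial(2, [(0, 1), (0, 1)]))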
ERIC Educational Resources Information Center
Welk, Gregory J.; Meredith, Marilu D.; Ihmels, Michelle; Seeger, Chris
2010-01-01
This study examined demographic and geographic variability in aggregated school-level data on the percentage of students achieving the FITNESSGRAM® Healthy Fitness Zones™ (HFZ). Three-way analyses of variance were used to examine differences in fitness achievement rates among schools that had distinct diversity and socioeconomic status…
Evaluation of marginal fit of two all-ceramic copings with two finish lines
Subasi, Gulce; Ozturk, Nilgun; Inan, Ozgur; Bozogullari, Nalan
2012-01-01
Objectives: This in vitro study investigated the marginal fit of two all-ceramic copings with two finish line designs. Methods: Forty machined stainless steel molar die models with two different margin designs (chamfer and rounded shoulder) were prepared. A total of 40 standardized copings were fabricated and divided into 4 groups (n=10 for each finish line and coping material combination). Coping materials tested were IPS e.max Press and Zirkonzahn; the luting agent was Variolink II. Marginal fit was evaluated after cementation with a stereomicroscope (Leica MZ16). Two-way analysis of variance and the Tukey HSD test were performed to assess the influence of each finish line design and ceramic type on the marginal fit of the 2 all-ceramic copings (α = .05). Results: Two-way analysis of variance revealed no statistically significant differences for marginal fit relative to finish lines (P=.362) or ceramic types (P=.065). Conclusion: Within the limitations of this study, both types of all-ceramic copings demonstrated a mean marginal fit considered acceptable for clinical application (⩽120 μm). PMID:22509119
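For reference, the same two-way analysis of variance layout is easy to reproduce with open-source tools; the Python sketch below uses simulated gap values, not the study's measurements:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # Hypothetical marginal-gap data (micrometers), 10 copings per cell.
    rng = np.random.default_rng(3)
    df = pd.DataFrame({
        "gap": rng.normal(90, 15, 40),
        "finish": np.repeat(["chamfer", "shoulder"], 20),
        "ceramic": np.tile(np.repeat(["emax", "zirconia"], 10), 2),
    })

    # Two factors plus their interaction, as in the study design.
    model = ols("gap ~ C(finish) * C(ceramic)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))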
Variance-based selection may explain general mating patterns in social insects.
Rueppell, Olav; Johnson, Nels; Rychtár, Jan
2008-06-23
Female mating frequency is one of the key parameters of social insect evolution. Several hypotheses have been suggested to explain multiple mating and considerable empirical research has led to conflicting results. Building on several earlier analyses, we present a simple general model that links the number of queen matings to variance in colony performance and this variance to average colony fitness. The model predicts selection for multiple mating if the average colony succeeds in a focal task, and selection for single mating if the average colony fails, irrespective of the proximate mechanism that links genetic diversity to colony fitness. Empirical support comes from interspecific comparisons, e.g. between the bee genera Apis and Bombus, and from data on several ant species, but more comprehensive empirical tests are needed.
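The core prediction can be illustrated with a toy threshold calculation: if n matings average patriline effects, the colony-level standard deviation shrinks like σ/√n, which raises the success probability when the average colony succeeds and lowers it when it fails. A hedged Python sketch (the normal/threshold form is an assumption for illustration, not the authors' exact model):

    import numpy as np
    from scipy.stats import norm

    def p_success(mu, theta, sigma, n_matings):
        # P(colony performance exceeds threshold theta) when n matings
        # average patriline effects, shrinking colony SD to sigma/sqrt(n).
        return norm.sf(theta, loc=mu, scale=sigma / np.sqrt(n_matings))

    for n in (1, 2, 5, 10):
        good = p_success(mu=1.0, theta=0.0, sigma=1.0, n_matings=n)
        bad = p_success(mu=-1.0, theta=0.0, sigma=1.0, n_matings=n)
        print(n, round(good, 3), round(bad, 3))
    # P(success) rises with matings when the average colony succeeds and
    # falls when it fails: multiple vs. single mating favored, respectively.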
ERIC Educational Resources Information Center
Malgieri, Massimiliano; Tenni, Antonio; Onorato, Pasquale; De Ambrosis, Anna
2016-01-01
In this paper we present a reasoning line for introducing the Pauli exclusion principle in the context of an introductory course on quantum theory based on the sum over paths approach. We start from the argument originally introduced by Feynman in "QED: The Strange Theory of Light and Matter" and improve it by discussing with students…
Application of Generalized Feynman-Hellmann Theorem in Quantization of LC Circuit in Thermo Bath
NASA Astrophysics Data System (ADS)
Fan, Hong-Yi; Tang, Xu-Bing
For the quantized LC electric circuit, when the Joule thermal effect is taken into account, we argue that physical observables should be evaluated as ensemble averages. We then use the generalized Feynman-Hellmann theorem for ensemble averages to calculate them, which proves convenient. Fluctuations of observables in various LC electric circuits in the presence of a thermal bath are shown to grow with temperature.
Geometry, Heat Equation and Path Integrals on the Poincaré Upper Half-Plane
NASA Astrophysics Data System (ADS)
Kubo, R.
1988-01-01
Geometry, heat equation and Feynman's path integrals are studied on the Poincaré upper half-plane. The fundamental solution to the heat equation ∂f/∂t = Δ_H f is expressed in terms of a path integral defined on the upper half-plane. It is shown that Kac's statement that Feynman's path integral satisfies the Schrödinger equation is also valid for our case.
A test of the Feynman scaling in the fragmentation region
NASA Technical Reports Server (NTRS)
Doke, T.; Innocente, V.; Kasahara, K.; Kikuchi, J.; Kashiwagi, T.; Lanzano, S.; Masuda, K.; Murakami, H.; Muraki, Y.; Nakada, T.
1985-01-01
The result of a direct measurement of the fragmentation region is presented. The result was obtained at the CERN proton-antiproton collider by exposing silicon calorimeters inside the beam pipe. This experiment addresses a long-standing riddle of cosmic-ray physics: whether Feynman scaling is violated in the fragmentation region, or the iron component increases at 10¹⁵ eV.
Misic, Mark M; Rosengren, Karl S; Woods, Jeffrey A; Evans, Ellen M
2007-01-01
Muscle mass, strength and fitness play a role in lower-extremity physical function (LEPF) in older adults; however, the relationships remain inadequately characterized. This study aimed to examine the relationships between leg mineral free lean mass (MFLM(LEG)), leg muscle quality (leg strength normalized for MFLM(LEG)), adiposity, aerobic fitness and LEPF in community-dwelling healthy elderly subjects. Fifty-five older adults (69.3 +/- 5.5 years, 36 females, 19 males) were assessed for leg strength using an isokinetic dynamometer, body composition by dual energy X-ray absorptiometry and aerobic fitness via a treadmill maximal oxygen consumption test. LEPF was assessed using computerized dynamic posturography and stair ascent/descent, a timed up-and-go task and a 7-meter walk with and without an obstacle. Muscle strength, muscle quality and aerobic fitness were similarly correlated with static LEPF tests (r range 0.27-0.40, p < 0.05); however, the strength of the independent predictors was not robust with explained variance ranging from 9 to 16%. Muscle quality was the strongest correlate of all dynamic LEPF tests (r range 0.54-0.65, p < 0.001). Using stepwise linear regression analysis, muscle quality was the strongest independent predictor of dynamic physical function explaining 29-42% of the variance (p < 0.001), whereas aerobic fitness or body fat mass explained 5-6% of the variance (p < 0.05) depending on performance measure. Muscle quality is the most important predictor, and aerobic fitness and fat mass are secondary predictors of LEPF in community-dwelling older adults. These findings support the importance of exercise, especially strength training, for optimal body composition, and maintenance of strength and physical function in older adults.
Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D
2016-05-01
Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with the estimation process rendered results from the BLQ model questionable. Importantly, accounting for heterogeneous variance enhanced inferential precision as the breadth of the confidence interval for the mean breakpoint decreased by approximately 44%. In summary, the article illustrates the use of linear and nonlinear mixed models for dose-response relationships accounting for heterogeneous residual variances, discusses important diagnostics and their implications for inference, and provides practical recommendations for computational troubleshooting.
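The broken-line linear fit itself is straightforward to reproduce outside SAS; the Python sketch below fits a BLL ascending model to invented G:F data with scipy, illustrating why initial parameter values (cf. the grid search above) matter for nonlinear fits:

    import numpy as np
    from scipy.optimize import curve_fit

    def broken_line_linear(x, plateau, slope, breakpoint):
        # Rises with `slope` up to the breakpoint, flat at `plateau` beyond it.
        return plateau - slope * np.maximum(breakpoint - x, 0.0)

    # Hypothetical G:F responses to SID Trp:Lys ratios (illustrative only).
    x = np.array([14.5, 15.0, 15.5, 16.0, 16.5, 17.0, 17.5, 18.0])
    y = np.array([0.601, 0.618, 0.642, 0.658, 0.669, 0.671, 0.668, 0.672])

    p0 = [0.67, 0.03, 16.0]                 # starting values; a grid of p0
    popt, pcov = curve_fit(broken_line_linear, x, y, p0=p0)   # aids convergence
    print("breakpoint = %.2f +/- %.2f" % (popt[2], np.sqrt(pcov[2, 2])))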
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finley, C; Dave, J
Purpose: To characterize noise for image receptors of digital radiography systems based on pixel variance. Methods: Nine calibrated digital image receptors associated with nine new portable digital radiography systems (Carestream Health, Inc., Rochester, NY) were used in this study. For each image receptor, thirteen images were acquired with RQA5 beam conditions for input detector air kerma ranging from 0 to 110 µGy, and linearized 'For Processing' images were extracted. Mean pixel value (MPV), standard deviation (SD) and relative noise (SD/MPV) were obtained from each image using ROI sizes varying from 2.5×2.5 to 20×20 mm². Variance (SD²) was plotted as a function of input detector air kerma, and the coefficients of the quadratic fit were used to derive structured, quantum and electronic noise coefficients. Relative noise was also fitted as a function of input detector air kerma to identify noise sources. All fits used a least-squares approach. Results: The coefficient of variation values obtained using different ROI sizes were less than 1% for all the images. The structured, quantum and electronic coefficients obtained from the quadratic fit of variance (r>0.97) were 0.43±0.10, 3.95±0.27 and 2.89±0.74 (mean ± standard deviation), respectively, indicating that overall the quantum noise was the dominant noise source. However, for one system the electronic noise coefficient (3.91) was greater than the quantum noise coefficient (3.56), indicating electronic noise to be dominant there. Using relative noise values, the power parameter of the fitting equation (|r|>0.93) showed a mean and standard deviation of 0.46±0.02. A value of 0.50 for this power parameter indicates quantum noise to be the dominant noise source, whereas deviations from 0.50 indicate the presence of other noise sources. Conclusion: Characterizing noise from pixel variance assists in identifying contributions from various noise sources that, eventually, may affect image quality. This approach may be integrated during periodic quality assessments of digital image receptors.
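The decomposition rests on fitting variance as a quadratic in air kerma, SD² = s·K² + q·K + e, with s, q and e the structured, quantum and electronic coefficients. A minimal Python sketch with synthetic (not measured) data:

    import numpy as np

    rng = np.random.default_rng(4)
    K = np.array([5, 10, 20, 40, 60, 80, 110], dtype=float)   # air kerma, uGy
    var = 0.002 * K**2 + 4.0 * K + 3.0                        # synthetic variance
    var += rng.normal(0, 1, K.size)                           # measurement scatter

    # variance(K) = structured*K^2 + quantum*K + electronic
    structured, quantum, electronic = np.polyfit(K, var, 2)
    print(round(structured, 4), round(quantum, 2), round(electronic, 2))
    # Here quantum >> electronic, i.e. quantum-noise limited; a receptor with
    # electronic > quantum would be electronics-limited instead.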
Merton's problem for an investor with a benchmark in a Barndorff-Nielsen and Shephard market.
Lennartsson, Jan; Lindberg, Carl
2015-01-01
To try to outperform an externally given benchmark with known weights is the most common equity mandate in the financial industry. For quantitative investors, this task is predominantly approached by optimizing their portfolios consecutively over short time horizons with one-period models. We seek in this paper to provide a theoretical justification for this practice when the underlying market is of Barndorff-Nielsen and Shephard type. This is done by verifying that an investor who seeks to maximize her expected terminal exponential utility of wealth in excess of her benchmark will in fact use an optimal portfolio equivalent to the one-period Markowitz mean-variance problem in continuum under the corresponding Black-Scholes market. Further, we can represent the solution to the optimization problem in Feynman-Kac form. Hence, the problem, and its solution, is analogous to Merton's classical portfolio problem, with the main difference that Merton maximizes expected utility of terminal wealth, not wealth in excess of a benchmark.
Automated generation of lattice QCD Feynman rules
NASA Astrophysics Data System (ADS)
Hart, A.; von Hippel, G. M.; Horgan, R. R.; Müller, E. H.
2009-12-01
The derivation of the Feynman rules for lattice perturbation theory from actions and operators is complicated, especially for highly improved actions such as HISQ. This task is, however, both important and particularly suitable for automation. We describe a suite of software to generate and evaluate Feynman rules for a wide range of lattice field theories with gluons and (relativistic and/or heavy) quarks. Our programs are capable of dealing with actions as complicated as (m)NRQCD and HISQ. Automated differentiation methods are used to also calculate the derivatives of Feynman diagrams.
Program summary
Program title: HiPPy, HPsrc
Catalogue identifier: AEDX_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDX_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPLv2 (see Additional comments below)
No. of lines in distributed program, including test data, etc.: 513 426
No. of bytes in distributed program, including test data, etc.: 4 893 707
Distribution format: tar.gz
Programming language: Python, Fortran95
Computer: HiPPy: single-processor workstations. HPsrc: single-processor workstations and MPI-enabled multi-processor systems
Operating system: HiPPy: any for which Python v2.5.x is available. HPsrc: any for which a standards-compliant Fortran95 compiler is available
Has the code been vectorised or parallelised?: Yes
RAM: Problem specific, typically less than 1 GB for either code
Classification: 4.4, 11.5
Nature of problem: Derivation and use of perturbative Feynman rules for complicated lattice QCD actions.
Solution method: An automated expansion method implemented in Python (HiPPy) and code to use the expansions to generate Feynman rules in Fortran95 (HPsrc).
Restrictions: No general restrictions. Specific restrictions are discussed in the text.
Additional comments: The HiPPy and HPsrc codes are released under the second version of the GNU General Public Licence (GPL v2). Therefore anyone is free to use or modify the code for their own calculations. As part of the licensing, we ask that any publications including results from the use of this code or of modifications of it cite Refs. [1,2] as well as this paper. Finally, we also ask that details of these publications, as well as of any bugs or required or useful improvements of this core code, be communicated to us.
Running time: Very problem specific, depending on the complexity of the Feynman rules and the number of integration points. Typically between a few minutes and several weeks. The installation tests provided with the program code take only a few seconds to run.
References:
[1] A. Hart, G.M. von Hippel, R.R. Horgan, L.C. Storoni, Automatically generating Feynman rules for improved lattice field theories, J. Comput. Phys. 209 (2005) 340-353, doi:10.1016/j.jcp.2005.03.010, arXiv:hep-lat/0411026.
[2] M. Lüscher, P. Weisz, Efficient numerical techniques for perturbative lattice gauge theory computations, Nucl. Phys. B 266 (1986) 309, doi:10.1016/0550-3213(86)90094-5.
A massive Feynman integral and some reduction relations for Appell functions
NASA Astrophysics Data System (ADS)
Shpot, M. A.
2007-12-01
New explicit expressions are derived for the one-loop two-point Feynman integral with arbitrary external momentum and masses m₁² and m₂² in D dimensions. The results are given in terms of Appell functions, manifestly symmetric with respect to the masses mᵢ². Equating our expressions with previously known results in terms of Gauss hypergeometric functions yields reduction relations for the involved Appell functions that are apparently new mathematical results.
New Tools for Forecasting Old Physics at the LHC
Dixon, Lance
2018-05-21
For the LHC to uncover many types of new physics, the "old physics" produced by the Standard Model must be understood very well. For decades, the central theoretical tool for this job was the Feynman diagram expansion. However, Feynman diagrams are just too slow, even on fast computers, to allow adequate precision for complicated LHC events with many jets in the final state. Such events are already visible in the initial LHC data. Over the past few years, alternative methods to Feynman diagrams have come to fruition. These new "on-shell" methods are based on the old principles of unitarity and factorization. They can be much more efficient because they exploit the underlying simplicity of scattering amplitudes, and recycle lower-loop information. I will describe how and why these methods work, and present some of the recent state-of-the-art results that have been obtained with them.
Genomic Prediction Accounting for Residual Heteroskedasticity
Ou, Zhining; Tempelman, Robert J.; Steibel, Juan P.; Ernst, Catherine W.; Bates, Ronald O.; Bello, Nora M.
2015-01-01
Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially on individuals of extreme genetic merit. PMID:26564950
The effect of noise-induced variance on parameter recovery from reaction times.
Vadillo, Miguel A; Garaizar, Pablo
2016-03-31
Technical noise can compromise the precision and accuracy of the reaction times collected in psychological experiments, especially in the case of Internet-based studies. Although this noise seems to have only a small impact on traditional statistical analyses, its effects on model fits to reaction-time distributions remain unexplored. Across four simulations we study the impact of technical noise on parameter recovery from data generated from an ex-Gaussian distribution and from a Ratcliff diffusion model. Our results suggest that the impact of noise-induced variance tends to be limited to specific parameters and conditions. Although we encourage researchers to adopt all measures to reduce the impact of noise on reaction-time experiments, we conclude that the typical amount of noise-induced variance found in these experiments does not pose substantial problems for statistical analyses based on model fitting.
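A small simulation in this spirit is easy to set up: generate ex-Gaussian reaction times, contaminate them with uniform timing noise (here a hypothetical 17 ms, roughly one 60 Hz frame), and compare recovered parameters. Python sketch:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    # Ex-Gaussian RTs: Gaussian (mu=400, sigma=40 ms) + exponential (tau=100 ms).
    mu, sigma, tau = 400.0, 40.0, 100.0
    rt = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

    # Uniform "technical" noise, e.g. screen-refresh quantization (assumed).
    rt_noisy = rt + rng.uniform(0, 17, rt.size)

    for data, label in ((rt, "clean"), (rt_noisy, "noisy")):
        K, loc, scale = stats.exponnorm.fit(data)
        # scipy's parameterization: loc = mu, scale = sigma, K = tau / sigma.
        print(label, round(loc), round(scale), round(K * scale))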
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
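For orientation, the standard (not bias-reduced) maximum likelihood fit of a cumulative Gaussian psychometric function with a Nelder-Mead search looks like the Python sketch below; the staircase-style counts are invented:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    # Stimulus levels, trial counts and "detected" counts from a
    # hypothetical adaptive (staircase) experiment.
    x = np.array([0.5, 1.0, 1.5, 2.0, 3.0])
    n = np.array([12, 25, 30, 20, 8])
    k = np.array([2, 9, 18, 16, 8])

    def nll(params):
        mu, log_sigma = params                     # log-sigma keeps spread positive
        p = norm.cdf(x, mu, np.exp(log_sigma))     # cumulative-Gaussian model
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -(k * np.log(p) + (n - k) * np.log(1 - p)).sum()

    fit = minimize(nll, x0=[1.5, 0.0], method="Nelder-Mead")
    mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
    print(mu_hat, sigma_hat)   # threshold and spread estimates

The spread estimate from such data inherits the serial-dependency bias discussed above; the paper's bias-reduced likelihood modifies this fit rather than replacing it.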
NASA Astrophysics Data System (ADS)
Fan, Hong-Yi; Xu, Xue-Xiang; Hu, Li-Yun
2010-06-01
By virtue of the generalized Hellmann-Feynman theorem for the ensemble average, we obtain the internal energy and the average energy consumed by the resistance R in a quantized resistance-inductance-capacitance (RLC) electric circuit. We also calculate the variation of entropy with R and derive the relation between entropy and R; the figures show that the entropy indeed increases with increasing R.
Destructive interference results in boson anti-bunching: refining Feynman's argument
NASA Astrophysics Data System (ADS)
Marchewka, Avi; Granot, Er'el
2014-09-01
The effect of boson bunching is frequently mentioned and discussed in the literature. This effect is the manifestation of bosons' tendency to "travel" in clusters. One of the core arguments for boson bunching was formulated by Feynman in his well-known lecture series and has been frequently used ever since. By comparing the scattering probabilities of two bosons and of two distinguishable particles, he concluded: "We have the result that it is twice as likely to find two identical Bose particles scattered into the same state as you would calculate assuming the particles were different" [R.P. Feynman, R.B. Leighton, M. Sands, The Feynman Lectures on Physics: Quantum mechanics (Addison-Wesley, 1965)]. This argument took root in the scientific community (see for example [C. Cohen-Tannoudji, B. Diu, F. Laloë, Quantum Mechanics (John Wiley & Sons, Paris, 1977); W. Pauli, Exclusion Principle and Quantum Mechanics, Nobel Lecture (1946)]); however, while this sentence is completely valid, as proved in [C. Cohen-Tannoudji, B. Diu, F. Laloë, Quantum Mechanics (John Wiley & Sons, Paris, 1977)], it is not synonymous with bunching. In fact, as shown in this paper, wherever one of the wavefunctions has a zero, bosons can anti-bunch and fermions can bunch. It should be stressed that zeros in the wavefunctions are ubiquitous in Quantum Mechanics and therefore the effect should be common. Several scenarios are suggested for witnessing the effect.
Bruning, Andrea; Gaitán-Espitia, Juan Diego; González, Avia; Bartheld, José Luis; Nespolo, Roberto F
2013-01-01
Life-history evolution, the way organisms allocate time and energy to reproduction, survival, and growth, is a central question in evolutionary biology. One of its main tenets, the allocation principle, predicts that selection will reduce energy costs of maintenance in order to divert energy to survival and reproduction. The empirical support for this principle is the existence of a negative relationship between fitness and metabolic rate, which has been observed in some ectotherms. In juvenile animals, a key function affecting fitness is growth rate, since fast growers will reproduce sooner and maximize survival. In principle, design constraints dictate that growth rate cannot be reduced without affecting maintenance costs. Hence, it is predicted that juveniles will show a positive relationship between fitness (growth rate) and metabolic rate, contrary to what has been observed in adults. Here we explored this problem using land snails (Cornu aspersum). We estimated the additive genetic variance-covariance matrix for growth and standard metabolic rate (SMR; rate of CO2 production) using 34 half-sibling families. We measured eggs, hatchlings, and juveniles in 208 offspring that were isolated right after egg laying (i.e., minimizing maternal and common environmental variance). Surprisingly, our results showed that additive genetic effects (narrow-sense heritabilities, h²) and additive genetic correlations (rG) were small and nonsignificant. However, the nonadditive proportions of phenotypic variances and correlations (rC) were unexpectedly large and significant. In fact, nonadditive genetic effects were positive for growth rate and SMR ([Formula: see text]; [Formula: see text]), supporting the idea that fitness (growth rate) cannot be maximized without incurring maintenance costs. Large nonadditive genetic variances could result as a consequence of selection eroding the additive genetic component, which suggests that past selection could have produced the nonadditive genetic correlation. It is predicted that this correlation is reduced when adulthood is attained and selection starts to promote the reduction in metabolic rate.
ERIC Educational Resources Information Center
Heene, Moritz; Hilbert, Sven; Draxler, Clemens; Ziegler, Matthias; Buhner, Markus
2011-01-01
Fit indices are widely used in order to test the model fit for structural equation models. In a highly influential study, Hu and Bentler (1999) showed that certain cutoff values for these indices could be derived, which, over time, has led to the reification of these suggested thresholds as "golden rules" for establishing the fit or other aspects…
Fischer, A; Friggens, N C; Berry, D P; Faverdin, P
2018-07-01
The ability to properly assess and accurately phenotype true differences in feed efficiency among dairy cows is key to the development of breeding programs for improving feed efficiency. The variability among individuals in feed efficiency is commonly characterised by the residual intake approach. Residual feed intake is represented by the residuals of a linear regression of intake on the corresponding quantities of the biological functions that consume (or release) energy. However, the residuals include model fitting and measurement errors as well as any variability in cow efficiency. The objective of this study was to isolate the individual animal variability in feed efficiency from the residual component. Two separate models were fitted: in one, the standard residual energy intake (REI) was calculated as the residual of a multiple linear regression of lactation average net energy intake (NEI) on lactation average milk energy output, average metabolic BW, and lactation loss and gain of body condition score. In the other, a linear mixed model was used to simultaneously fit fixed linear regressions and random cow levels on the biological traits and intercept, using fortnightly repeated measures for the variables. This method splits the predicted NEI into two parts: one quantifying the population mean intercept and coefficients, and one quantifying cow-specific deviations in the intercept and coefficients. The cow-specific part of predicted NEI was assumed to isolate true differences in feed efficiency among cows. NEI and associated energy expenditure phenotypes were available for the first 17 fortnights of lactation from 119 Holstein cows, all fed a constant energy-rich diet. Mixed models fitting cow-specific intercepts and coefficients to different combinations of the aforementioned energy expenditure traits, calculated on a fortnightly basis, were compared. The variance of REI estimated with the lactation average model represented only 8% of the variance of measured NEI. Among all compared mixed models, the variance of the cow-specific part of predicted NEI represented between 53% and 59% of the variance of REI estimated from the lactation average model, or between 4% and 5% of the variance of measured NEI. The remaining 41% to 47% of the variance of REI estimated with the lactation average model may therefore reflect model fitting errors or measurement errors. In conclusion, the use of a mixed model framework with cow-specific random regressions seems to be a promising method to isolate the cow-specific component of REI in dairy cows.
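The mixed-model idea (fixed population regressions plus cow-specific random intercepts and slopes) can be sketched with standard tools; the Python below simulates a toy single-regressor version with assumed column names, not the authors' multi-trait model:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated fortnightly records (invented structure and scales).
    rng = np.random.default_rng(6)
    n_cows, n_fortnights = 30, 17
    cow = np.repeat(np.arange(n_cows), n_fortnights)
    milk_e = rng.normal(100, 15, cow.size)
    icpt_dev = rng.normal(0, 5, n_cows)[cow]        # cow-specific intercepts
    slope_dev = rng.normal(0, 0.05, n_cows)[cow]    # cow-specific slopes
    nei = 20 + icpt_dev + (0.8 + slope_dev) * milk_e + rng.normal(0, 3, cow.size)
    df = pd.DataFrame({"NEI": nei, "milk_e": milk_e, "cow": cow})

    # Random regression: fixed population slope plus cow-specific random
    # intercept and slope on milk energy output.
    md = smf.mixedlm("NEI ~ milk_e", df, groups="cow", re_formula="~milk_e")
    fit = md.fit()
    print(fit.summary())
    # fit.random_effects[c] holds cow c's deviations; together with the fixed
    # part they split predicted NEI into population and cow-specific components.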
Nonexercise Equations to Estimate Fitness in White European and South Asian Men.
O'Donovan, Gary; Bakrania, Kishan; Ghouri, Nazim; Yates, Thomas; Gray, Laura J; Hamer, Mark; Stamatakis, Emmanuel; Khunti, Kamlesh; Davies, Melanie; Sattar, Naveed; Gill, Jason M R
2016-05-01
Cardiorespiratory fitness is a strong, independent predictor of health, whether it is measured in an exercise test or estimated in an equation. The purpose of this study was to develop and validate equations to estimate fitness in middle-age white European and South Asian men. Multiple linear regression models (n = 168, including 83 white European and 85 South Asian men) were created using variables that are thought to be important in predicting fitness (V̇O2max, mL·kg⁻¹·min⁻¹): age (yr), body mass index (kg·m⁻²), resting HR (bpm); smoking status (0, never smoked; 1, ex or current smoker), physical activity expressed as quintiles (0, quintile 1; 1, quintile 2; 2, quintile 3; 3, quintile 4; 4, quintile 5), categories of moderate-to-vigorous intensity physical activity (MVPA) (0, <75 min·wk⁻¹; 1, 75-150 min·wk⁻¹; 2, >150-225 min·wk⁻¹; 3, >225-300 min·wk⁻¹; 4, >300 min·wk⁻¹), or minutes of MVPA (min·wk⁻¹); and ethnicity (0, South Asian; 1, white). The leave-one-out cross-validation procedure was used to assess generalizability, and the bootstrap and jackknife resampling techniques were used to estimate the variance and bias of the models. Around 70% of the variance in fitness was explained by models with an ethnicity variable, such as: V̇O2max = 77.409 - (age × 0.374) - (body mass index × 0.906) - (ex or current smoker × 1.976) + (physical activity quintile coefficient) - (resting HR × 0.066) + (white ethnicity × 8.032), where the physical activity quintile coefficient is 0 for quintile 1, 1.127 for quintile 2, 1.869 for quintile 3, 3.793 for quintile 4, and 3.029 for quintile 5. Only around 50% of the variance was explained by models without an ethnicity variable. All models with an ethnicity variable were generalizable and had low variance and bias. These data demonstrate the importance of incorporating ethnicity in nonexercise equations to estimate cardiorespiratory fitness in multiethnic populations.
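Since the full equation is quoted above, it transcribes directly into a function; the example inputs below are hypothetical:

    def estimate_vo2max(age, bmi, smoker, activity_quintile, resting_hr, white):
        # Nonexercise fitness estimate (mL/kg/min), transcribed from the
        # equation quoted above; quintile coefficients as given in the text.
        quintile_coef = {1: 0.0, 2: 1.127, 3: 1.869, 4: 3.793, 5: 3.029}
        return (77.409 - 0.374 * age - 0.906 * bmi - 1.976 * smoker
                + quintile_coef[activity_quintile] - 0.066 * resting_hr
                + 8.032 * white)

    # A hypothetical 45-year-old white never-smoker, BMI 26, HR 62, quintile 4:
    print(round(estimate_vo2max(45, 26, 0, 4, 62, 1), 1))   # ~44.8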
Crow, James F
2008-12-01
Although molecular methods, such as QTL mapping, have revealed a number of loci with large effects, it is still likely that the bulk of quantitative variability is due to multiple factors, each with small effect. Typically, these have a large additive component. Conventional wisdom argues that selection, natural or artificial, uses up additive variance and thus depletes its supply. Over time, the variance should be reduced, and at equilibrium be near zero. This is especially expected for fitness and traits highly correlated with it. Yet, populations typically have a great deal of additive variance, and do not seem to run out of genetic variability even after many generations of directional selection. Long-term selection experiments show that populations continue to retain seemingly undiminished additive variance despite large changes in the mean value. I propose that there are several reasons for this. (i) The environment is continually changing so that what was formerly most fit no longer is. (ii) There is an input of genetic variance from mutation, and sometimes from migration. (iii) As intermediate-frequency alleles increase in frequency towards one, producing less variance (as p → 1, p(1 − p) → 0), others that were originally near zero become more common and increase the variance. Thus, a roughly constant variance is maintained. (iv) There is always selection for fitness and for characters closely related to it. To the extent that the trait is heritable, later generations inherit a disproportionate number of genes acting additively on the trait, thus increasing genetic variance. For these reasons a selected population retains its ability to evolve. Of course, genes with large effect are also important. Conspicuous examples are the small number of loci that changed teosinte to maize, and major phylogenetic changes in the animal kingdom. The relative importance of these along with duplications, chromosome rearrangements, horizontal transmission and polyploidy is yet to be determined. It is likely that only a case-by-case analysis will provide the answers. Despite the difficulties that complex interactions cause for evolution in Mendelian populations, such populations nevertheless evolve very well. Long-lasting species must have evolved mechanisms for coping with such problems. Since such difficulties do not arise in asexual populations, a comparison of epistatic patterns in closely related sexual and asexual species might provide some important insights.
Griffith, Timothy; Sultan, Sonia E
2012-04-01
Factors promoting the evolution of specialists versus generalists have been little studied in ecological context. In a large-scale comparative field experiment, we studied genotypes from naturally evolved populations of a closely related generalist/specialist species pair (Polygonum persicaria and P. hydropiper), reciprocally transplanting replicates of multiple lines into open and partially shaded sites where the species naturally co-occur. We measured relative fitness, individual plasticity, herbivory, and genetic variance expressed in the contrasting light habitats at both low and high densities. Fitness data confirmed that the putative specialist out-performed the generalist in only one environment, the favorable full sun/low-density environment to which it is largely restricted in nature, while the generalist had higher lifetime reproduction in both canopy and dense neighbor shade. The generalist, P. persicaria, also expressed greater adaptive plasticity for biomass allocation and leaf size in shaded conditions than the specialist. We found no evidence that the ecological specialization of P. hydropiper reflects either genetically based fitness trade-offs or maintenance costs of plasticity, two types of genetic constraint often invoked to prevent the evolution of broadly adaptive genotypes. However, the patterns of fitness variance and herbivore damage revealed how release from herbivory in a new range can cause an introduced species to evolve as a specialist in that range, a surprising finding with important implications for invasion biology. Patterns of fitness variance between and within sites are also consistent with a possible role for the process of mutation accumulation (in this case, mutations affecting shade-expressed phenotypes) in the evolution and/or maintenance of specialization in P. hydropiper.
Bohmanova, J; Miglior, F; Jamrozik, J; Misztal, I; Sullivan, P G
2008-09-01
A random regression model with both random and fixed regressions fitted by Legendre polynomials of order 4 was compared with 3 alternative models fitting linear splines with 4, 5, or 6 knots. The effects common for all models were a herd-test-date effect, fixed regressions on days in milk (DIM) nested within region-age-season of calving class, and random regressions for additive genetic and permanent environmental effects. Data were test-day milk, fat and protein yields, and SCS recorded from 5 to 365 DIM during the first 3 lactations of Canadian Holstein cows. A random sample of 50 herds consisting of 96,756 test-day records was generated to estimate variance components within a Bayesian framework via Gibbs sampling. Two sets of genetic evaluations were subsequently carried out to investigate performance of the 4 models. Models were compared by graphical inspection of variance functions, goodness of fit, error of prediction of breeding values, and stability of estimated breeding values. Models with splines gave lower estimates of variances at extremes of lactations than the model with Legendre polynomials. Differences among models in goodness of fit measured by percentages of squared bias, correlations between predicted and observed records, and residual variances were small. The deviance information criterion favored the spline model with 6 knots. Smaller error of prediction and higher stability of estimated breeding values were achieved by using spline models with 5 and 6 knots compared with the model with Legendre polynomials. In general, the spline model with 6 knots had the best overall performance based upon the considered model comparison criteria.
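The two families of submodels compared here are both linear bases at heart; the Python sketch below fits a fourth-order Legendre polynomial and a linear spline with interior knots to a toy lactation curve, echoing the comparison (knot placement and data are invented):

    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    rng = np.random.default_rng(9)
    dim = np.linspace(5, 365, 100)                       # days in milk
    true = 30 * np.exp(-((dim - 60) / 150) ** 2)         # toy lactation curve
    y = true + rng.normal(0, 1.5, dim.size)

    # Legendre polynomial of order 4 on [-1, 1] vs. a linear spline.
    z = 2 * (dim - dim.min()) / (dim.max() - dim.min()) - 1
    leg_coef = np.polynomial.legendre.legfit(z, y, 4)
    leg_fit = np.polynomial.legendre.legval(z, leg_coef)

    knots = np.linspace(50, 320, 4)                      # interior knots
    spline = LSQUnivariateSpline(dim, y, knots, k=1)     # k=1: linear spline

    for fit, name in ((leg_fit, "legendre"), (spline(dim), "spline")):
        print(name, round(float(np.mean((fit - y) ** 2)), 3))

Splines avoid the end-of-lactation flaring typical of global polynomials, which is one way to read the lower variance estimates at the extremes reported above.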
JaxoDraw: A graphical user interface for drawing Feynman diagrams
NASA Astrophysics Data System (ADS)
Binosi, D.; Theußl, L.
2004-08-01
JaxoDraw is a Feynman graph plotting tool written in Java. It has a complete graphical user interface that allows all actions to be carried out via mouse click-and-drag operations in a WYSIWYG fashion. Graphs may be exported to postscript/EPS format and can be saved in XML files to be used for later sessions. One of JaxoDraw's main features is the possibility to create LaTeX code that may be used to generate graphics output, thus combining the powers of LaTeX with those of a modern day drawing program. With JaxoDraw it becomes possible to draw even complicated Feynman diagrams with just a few mouse clicks, without the knowledge of any programming language.
Program summary
Title of program: JaxoDraw
Catalogue identifier: ADUA
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUA
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Distribution format: tar gzip file
Operating system: Any Java-enabled platform, tested on Linux, Windows ME, XP, Mac OS X
Programming language used: Java
License: GPL
Nature of problem: Existing methods for drawing Feynman diagrams usually require some 'hard-coding' in one or the other programming or scripting language. It is not very convenient, and often time consuming, to generate relatively simple diagrams.
Method of solution: A program is provided that allows for the interactive drawing of Feynman diagrams with a graphical user interface. The program is easy to learn and use, produces high quality output in several formats and runs on any operating system where a Java Runtime Environment is available.
Number of bytes in distributed program, including test data: 2 117 863
Number of lines in distributed program, including test data: 60 000
Restrictions: Certain operations (like internal LaTeX compilation, Postscript preview) require the execution of external commands that might not work on untested operating systems.
Typical running time: As an interactive program, the running time depends on the complexity of the diagram to be drawn.
Physical characteristics that predict involvement with the ball in recreational youth soccer.
Ré, Alessandro H Nicolai; Cattuzzo, Maria Teresa; Henrique, Rafael Dos Santos; Stodden, David F
2016-09-01
This study examined the relative contribution of age, stage of puberty, anthropometric characteristics, health-related fitness, soccer-specific tests and match-related technical performance to variance in involvements with the ball during recreational 5-a-side small-sided (32 × 15 m) soccer matches. Using a cross-sectional design, 80 healthy male students (14.6 ± 0.5 years of age; range 13.6-15.4) who played soccer recreationally were randomly divided into 10 teams and played against each other. Measurements included height, body mass, pubertal status, health-related fitness (12-min walk/run test, standing long jump, 15-m sprint and sit-ups in 30 s), soccer-specific tests (kicking for speed, passing for accuracy and agility run with and without a ball), match-related technical performance (kicks, passes and dribbles) and involvements with the ball during matches. Forward multiple regression analysis revealed that cardiorespiratory fitness (12-min walk/run test) accounted for 36% of the variance in involvements with the ball. When agility with the ball (zigzag running) and power (standing long jump) were included among the predictors, the total explained variance increased to 62%. In conclusion, recreational adolescent players, regardless of their soccer-specific skills, may increase participation in soccer matches most through physical activities that promote improvement in cardiorespiratory fitness, muscle power and agility.
Atomic Manipulation on Metal Surfaces
NASA Astrophysics Data System (ADS)
Ternes, Markus; Lutz, Christopher P.; Heinrich, Andreas J.
Half a century ago, Nobel Laureate Richard Feynman asked in a now-famous lecture what would happen if we could precisely position individual atoms at will [R.P. Feynman, Eng. Sci. 23, 22 (1960)]. This dream became a reality some 30 years later when Eigler and Schweizer were the first to position individual Xe atoms at will with the probe tip of a low-temperature scanning tunneling microscope (STM) on a Ni surface [D.M. Eigler, E.K. Schweizer, Nature 344, 524 (1990)].
System level analysis and control of manufacturing process variation
Hamada, Michael S.; Martz, Harry F.; Eleswarpu, Jay K.; Preissler, Michael J.
2005-05-31
A computer-implemented method is presented for determining the variability of a manufacturing system having a plurality of subsystems. Each subsystem is characterized by signal factors, noise factors, control factors, and an output response, all having mean and variance values. Response models are then fitted to each subsystem to determine the unknown coefficients that characterize the relationship between the signal factors, noise factors, and control factors and the corresponding output response. The response models for each subsystem are coupled to model the output of the manufacturing system as a whole. The coefficients of the fitted response models are randomly varied to propagate variances through the plurality of subsystems, and values of signal factors and control factors are found that optimize the output of the manufacturing system to meet a specified criterion.
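The propagation step amounts to Monte Carlo sampling of noise factors (and of the fitted coefficients themselves) through the chained response models; a toy two-subsystem Python sketch with made-up coefficients:

    import numpy as np

    rng = np.random.default_rng(7)
    n = 100_000

    # Fitted coefficients and noise factors (all values invented).
    gain_a = rng.normal(2.0, 0.1, n)     # coefficient varied to reflect fit uncertainty
    noise_a = rng.normal(0.0, 1.0, n)    # subsystem A noise factor
    noise_b = rng.normal(0.0, 0.5, n)    # subsystem B noise factor

    out_a = gain_a * 1.0 + 0.5 * noise_a - 0.3     # signal = 1.0, control = 0.3
    out_b = 0.8 * out_a + noise_b + 0.2 * 0.3      # A's output feeds B

    print(out_b.mean(), out_b.var())   # system-level mean and variance
    # Sweeping the signal/control settings and repeating this propagation
    # locates values that meet a specified output criterion.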
Genomic Prediction Accounting for Residual Heteroskedasticity.
Ou, Zhining; Tempelman, Robert J; Steibel, Juan P; Ernst, Catherine W; Bates, Ronald O; Bello, Nora M
2015-11-12
Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially on individuals of extreme genetic merit. Copyright © 2016 Ou et al.
Saunders, Christina T; Blume, Jeffrey D
2017-10-26
Mediation analysis explores the degree to which an exposure's effect on an outcome is diverted through a mediating variable. We describe a classical regression framework for conducting mediation analyses in which estimates of causal mediation effects and their variance are obtained from the fit of a single regression model. The vector of changes in exposure pathway coefficients, which we named the essential mediation components (EMCs), is used to estimate standard causal mediation effects. Because these effects are often simple functions of the EMCs, an analytical expression for their model-based variance follows directly. Given this formula, it is instructive to revisit the performance of routinely used variance approximations (e.g., delta method and resampling methods). Requiring the fit of only one model reduces the computation time required for complex mediation analyses and permits the use of a rich suite of regression tools that are not easily implemented on a system of three equations, as would be required in the Baron-Kenny framework. Using data from the BRAIN-ICU study, we provide examples to illustrate the advantages of this framework and compare it with the existing approaches. © The Author 2017. Published by Oxford University Press.
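For contrast with the single-model EMC approach, the routinely used product-of-coefficients estimate with its first-order delta-method (Sobel) variance, fitted via two regressions, can be sketched as follows on simulated data (illustrative only, not the authors' framework):

```python
# Sketch of the delta-method (Sobel) variance for an indirect effect a*b,
# the kind of routine approximation the EMC framework lets one revisit.
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)                        # exposure
m = 0.5 * x + rng.normal(size=n)              # mediator
y = 0.3 * m + 0.2 * x + rng.normal(size=n)    # outcome

def ols(X, y):
    """OLS coefficients and their covariance matrix (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    cov = resid.var(ddof=X1.shape[1]) * np.linalg.inv(X1.T @ X1)
    return beta, cov

bm, cov_m = ols(x, m)                         # a path: x -> m
by, cov_y = ols(np.column_stack([m, x]), y)   # b path: m -> y given x
a, var_a = bm[1], cov_m[1, 1]
b, var_b = by[1], cov_y[1, 1]
indirect = a * b
var_ab = a**2 * var_b + b**2 * var_a          # first-order delta method
print("indirect effect %.3f, se %.3f" % (indirect, np.sqrt(var_ab)))
```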
Using Structural Equation Modeling To Fit Models Incorporating Principal Components.
ERIC Educational Resources Information Center
Dolan, Conor; Bechger, Timo; Molenaar, Peter
1999-01-01
Considers models incorporating principal components from the perspectives of structural-equation modeling. These models include the following: (1) the principal-component analysis of patterned matrices; (2) multiple analysis of variance based on principal components; and (3) multigroup principal-components analysis. Discusses fitting these models…
Evaluating Feynman integrals by the hypergeometry
NASA Astrophysics Data System (ADS)
Feng, Tai-Fu; Chang, Chao-Hsi; Chen, Jian-Bin; Gu, Zhi-Hua; Zhang, Hai-Bin
2018-02-01
The hypergeometric function method naturally provides analytic expressions for the scalar integrals of the Feynman diagrams concerned in certain connected regions of the independent kinematic variables, and also yields the systems of homogeneous linear partial differential equations satisfied by the corresponding scalar integrals. Taking as examples the one-loop B0 and massless C0 functions, as well as the scalar integrals of the two-loop vacuum and sunset diagrams, we verify that our expressions coincide with the well-known results in the literature. Based on the multiple hypergeometric functions of the independent kinematic variables, the systems of homogeneous linear partial differential equations satisfied by the mentioned scalar integrals are established. Using the calculus of variations, one recognizes the system of linear partial differential equations as the stationarity conditions of a functional under certain restrictions, which is the cornerstone for numerically continuing the scalar integrals to the whole kinematic domain with finite element methods. In principle this method can be used to evaluate the scalar integrals of any Feynman diagram.
Smith, Kyle K G; Poulsen, Jens Aage; Nyman, Gunnar; Rossky, Peter J
2015-06-28
We develop two classes of quasi-classical dynamics that are shown to conserve the initial quantum ensemble when used in combination with the Feynman-Kleinert approximation of the density operator. These dynamics are used to improve the Feynman-Kleinert implementation of the classical Wigner approximation for the evaluation of quantum time correlation functions known as Feynman-Kleinert linearized path-integral. As shown, both classes of dynamics are able to recover the exact classical and high temperature limits of the quantum time correlation function, while a subset is able to recover the exact harmonic limit. A comparison of the approximate quantum time correlation functions obtained from both classes of dynamics is made with the exact results for the challenging model problems of the quartic and double-well potentials. It is found that these dynamics provide a great improvement over the classical Wigner approximation, in which purely classical dynamics are used. In a special case, our first method becomes identical to centroid molecular dynamics.
New Variance-Reducing Methods for the PSD Analysis of Large Optical Surfaces
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2010-01-01
Edge data of a measured surface map of a circular optic result in large variance or "spectral leakage" behavior in the corresponding Power Spectral Density (PSD) data. In this paper we present two new, alternative methods for reducing such variance in the PSD data by replacing the zeros outside the circular area of a surface map by non-zero values either obtained from a PSD fit (method 1) or taken from the inside of the circular area (method 2).
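A toy sketch of the zero-replacement idea follows; the paper's actual replacement rules differ, and here exterior pixels are simply filled with values sampled from the interior (method 2 in spirit) for side-by-side comparison with conventional zero padding:

```python
# Sketch: compare the PSD of a circular surface map with zeros outside the
# aperture versus exterior pixels filled from the interior. The fill rule
# here is an assumption for illustration, not the paper's exact method.
import numpy as np

rng = np.random.default_rng(12)
n = 256
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
mask = np.hypot(x, y) <= 1.0                  # circular optic aperture
surface = 1e-9 * np.sin(8 * np.pi * x)        # toy surface map, 1 nm ripple

zero_filled = np.where(mask, surface, 0.0)    # conventional zero padding
filled = surface.copy()                       # fill outside from the inside
filled[~mask] = rng.choice(surface[mask], size=(~mask).sum())

def off_peak_power(z):
    """Fraction of PSD power outside the two strongest frequency bins."""
    p = np.sort(np.abs(np.fft.fft2(z)).ravel()**2)[::-1]
    return p[2:].sum() / p.sum()

print("off-peak power, zeros outside:   %.3f" % off_peak_power(zero_filled))
print("off-peak power, interior-filled: %.3f" % off_peak_power(filled))
```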
[Person-organization fit and work ability].
Merecz, Dorota; Andysz, Aleksandra
2011-01-01
The person-environment fit issue has long been a focus of researchers exploring human labor. It is known that the level of fit predicts many phenomena related to health and attitudes toward work. The aim of this study was to explore the association between the level of person-organization fit (P-O fit) and work ability, including indicators of somatic and mental health. Research was conducted on a representative sample of 600 Polish men and women of working age. The Person-Organization Fit Questionnaire was used to assess three dimensions of P-O fit (supplementary fit, complementary fit and identification with the organization); mental health status was measured by the GHQ-28; the number of diagnosed diseases was taken as an index of somatic health; work ability and the ability to perform physical and mental effort were measured by three items from the Work Ability Index. A significant relationship between P-O fit level and work ability was found. In men, the predictors of work ability were age, supplementary fit and mental health status, which together explained 25% of the variance in work ability. In women, the predictors were the number of diagnosed somatic diseases, supplementary fit, age and complementary fit, which together explained 27% of the variance in work ability. Some gender-related differences in the predictive value of the variables under study were also found. The results indicate the importance of P-O fit in shaping the sense of work ability, a recognized predictor of workers' occupational activity and of the frequency of sick leave in subsequent years. This result may therefore be a useful argument for motivating employers to assign workers to jobs matching their abilities and preferences.
Hur, Y-M; Kaprio, J; Iacono, W G; Boomsma, D I; McGue, M; Silventoinen, K; Martin, N G; Luciano, M; Visscher, P M; Rose, R J; He, M; Ando, J; Ooki, S; Nonaka, K; Lin, C C H; Lajunen, H R; Cornes, B K; Bartels, M; van Beijsterveldt, C E M; Cherny, S S; Mitchell, K
2008-10-01
Twin studies are useful for investigating the causes of trait variation between as well as within a population. The goals of the present study were two-fold: First, we aimed to compare the total phenotypic, genetic and environmental variances of height, weight and BMI between Caucasians and East Asians using twins. Secondly, we intended to estimate the extent to which genetic and environmental factors contribute to differences in variability of height, weight and BMI between Caucasians and East Asians. Height and weight data from 3735 Caucasian and 1584 East Asian twin pairs (age: 13-15 years) from Australia, China, Finland, Japan, the Netherlands, South Korea, Taiwan and the United States were used for analyses. Maximum likelihood twin correlations and variance components model-fitting analyses were conducted to fulfill the goals of the present study. The absolute genetic variances for height, weight and BMI were consistently greater in Caucasians than in East Asians, with corresponding differences in total variances for all three body measures. In all, 80 to 100% of the differences in total variances of height, weight and BMI between the two population groups were associated with genetic differences. Height, weight and BMI were more variable in Caucasian than in East Asian adolescents. Genetic variances for these three body measures were also larger in Caucasians than in East Asians. Variance components model-fitting analyses indicated that genetic factors contributed to the difference in variability of height, weight and BMI between the two population groups. Association studies for these body measures should take account of our findings of differences in genetic variances between the two population groups.
Stiglbauer, Barbara; Kovacs, Carrie
2017-12-28
In organizational psychology research, autonomy is generally seen as a job resource with a monotone positive relationship with desired occupational outcomes such as well-being. However, both Warr's vitamin model and person-environment (PE) fit theory suggest that negative outcomes may result from excesses of some job resources, including autonomy. Thus, the current studies used survey methodology to explore cross-sectional relationships between environmental autonomy, person-environment autonomy (mis)fit, and well-being. We found that autonomy and autonomy (mis)fit explained between 6% and 22% of variance in well-being, depending on type of autonomy (scheduling, method, or decision-making) and type of (mis)fit operationalization (atomistic operationalization through the separate assessment of actual and ideal autonomy levels vs. molecular operationalization through the direct assessment of perceived autonomy (mis)fit). Autonomy (mis)fit (PE-fit perspective) explained more unique variance in well-being than environmental autonomy itself (vitamin model perspective). Detrimental effects of autonomy excess on well-being were most evident for method autonomy and least consistent for decision-making autonomy. We argue that too-much-of-a-good-thing effects of job autonomy on well-being exist, but suggest that these may be dependent upon sample characteristics (range of autonomy levels), type of operationalization (molecular vs. atomistic fit), autonomy facet (method, scheduling, or decision-making), as well as individual and organizational moderators. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Why "suboptimal" is optimal: Jensen's inequality and ectotherm thermal preferences.
Martin, Tara Laine; Huey, Raymond B
2008-03-01
Body temperature (T_b) profoundly affects the fitness of ectotherms. Many ectotherms use behavior to control T_b within narrow limits. These temperatures are assumed to be optimal and therefore to match the body temperature (T_rmax) that maximizes fitness (r). We develop an optimality model and find that the optimal body temperature (T_o) should not be centered at T_rmax but shifted to a lower temperature. This finding seems paradoxical but results from two considerations relating to Jensen's inequality, which deals with how variance and skew influence integrals of nonlinear functions. First, ectotherms are not perfect thermoregulators and so experience a range of T_b. Second, temperature-fitness curves are asymmetric, such that a T_b higher than T_rmax depresses fitness more than will a T_b displaced an equivalent amount below T_rmax. Our model makes several predictions. The magnitude of the optimal shift (T_rmax - T_o) should increase with the degree of asymmetry of temperature-fitness curves and with T_b variance. Deviations should be relatively large for thermal specialists but insensitive to whether fitness increases with T_rmax ("hotter is better"). Asymmetric (left-skewed) T_b distributions reduce the magnitude of the optimal shift but do not eliminate it. Comparative data (insects, lizards) support key predictions. Thus, "suboptimal" is optimal.
Gonzalez-Mulé, Erik; DeGeest, David S; McCormick, Brian W; Seong, Jee Young; Brown, Kenneth G
2014-09-01
Drawing on the group-norms theory of organizational citizenship behaviors and person-environment fit theory, we introduce and test a multilevel model of the effects of additive and dispersion composition models of team members' personality characteristics on group norms and individual helping behaviors. Our model was tested using regression and random coefficients modeling on 102 research and development teams. Results indicated that high mean levels of extraversion are positively related to individual helping behaviors through the mediating effect of cooperative group norms. Further, low variance on agreeableness (supplementary fit) and high variance on extraversion (complementary fit) promote the enactment of individual helping behaviors, but only the effects of extraversion were mediated by cooperative group norms. Implications of these findings for theories of helping behaviors in teams are discussed. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Identifying the Source of Misfit in Item Response Theory Models.
Liu, Yang; Maydeu-Olivares, Alberto
2014-01-01
When an item response theory model fails to fit adequately, the items for which the model provides a good fit and those for which it does not must be determined. To this end, we compare the performance of several fit statistics for item pairs with known asymptotic distributions under maximum likelihood estimation of the item parameters: (a) a mean and variance adjustment to the bivariate Pearson's X², (b) a bivariate subtable analog to Reiser's (1996) overall goodness-of-fit test, (c) a z statistic for the bivariate residual cross product, and (d) Maydeu-Olivares and Joe's (2006) M2 statistic applied to bivariate subtables. The unadjusted Pearson's X² with heuristically determined degrees of freedom is also included in the comparison. For binary and ordinal data, our simulation results suggest that the z statistic has the best Type I error and power behavior among all the statistics under investigation when the observed information matrix is used in its computation. However, if one has to use the cross-product information, the mean and variance adjusted X² is recommended. We illustrate the use of pairwise fit statistics in 2 real-data examples and discuss possible extensions of the current research in various directions.
Beyond promiscuity: mate-choice commitments in social breeding
Boomsma, Jacobus J.
2013-01-01
Obligate eusociality with distinct caste phenotypes has evolved from strictly monogamous sub-social ancestors in ants, some bees, some wasps and some termites. This implies that no lineage reached the most advanced form of social breeding, unless helpers at the nest gained indirect fitness values via siblings that were identical to direct fitness via offspring. The complete lack of re-mating promiscuity equalizes sex-specific variances in reproductive success. Later, evolutionary developments towards multiple queen-mating retained lifetime commitment between sexual partners, but reduced male variance in reproductive success relative to female's, similar to the most advanced vertebrate cooperative breeders. Here, I (i) discuss some of the unique and highly peculiar mating system adaptations of eusocial insects; (ii) address ambiguities that remained after earlier reviews and extend the monogamy logic to the evolution of soldier castes; (iii) evaluate the evidence for indirect fitness benefits driving the dynamics of (in)vertebrate cooperative breeding, while emphasizing the fundamental differences between obligate eusociality and cooperative breeding; (iv) infer that lifetime commitment is a major driver towards higher levels of organization in bodies, colonies and mutualisms. I argue that evolutionary informative definitions of social systems that separate direct and indirect fitness benefits facilitate transparency when testing inclusive fitness theory. PMID:23339241
Renormalized asymptotic enumeration of Feynman diagrams
NASA Astrophysics Data System (ADS)
Borinsky, Michael
2017-10-01
A method to obtain all-order asymptotic results for the coefficients of perturbative expansions in zero-dimensional quantum field theory is described. The focus is on the enumeration of the number of skeleton or primitive diagrams of a given QFT and on their asymptotics. The procedure relies heavily on techniques from singularity analysis. To apply singularity analysis, a representation of the zero-dimensional path integral as a generalized hyperelliptic curve is deduced. As applications, the full asymptotic expansions of the numbers of disconnected, connected, 1PI and skeleton Feynman diagrams in various theories are given.
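The zero-dimensional counting underlying such asymptotics can be reproduced directly: in phi^4 theory the coefficient of the n-vertex term in the perturbative expansion of the partition function is (4n-1)!!/(24^n n!), whose factorial growth a simple ratio test makes visible. The sketch below is this classical counting, not the paper's singularity-analysis machinery:

```python
# Count symmetry-factor-weighted phi^4 vacuum diagrams with n vertices:
# the coefficient of lambda^n in Z = int dx/sqrt(2 pi) exp(-x^2/2 + lambda x^4/4!)
# is (4n-1)!! / (24^n n!).
from math import factorial
from fractions import Fraction

def double_factorial(k):
    out = 1
    while k > 1:
        out *= k
        k -= 2
    return out

def vacuum_diagrams(n):
    """Weighted number of phi^4 vacuum diagrams with n quartic vertices."""
    return Fraction(double_factorial(4 * n - 1), 24**n * factorial(n))

print(vacuum_diagrams(1))                 # 1/8: the figure-eight diagram
c = {n: vacuum_diagrams(n) for n in range(1, 13)}
for n in range(2, 13):
    # ratio c_n / (n * c_{n-1}) approaches 2/3, i.e. roughly (2/3)^n n! growth
    print(n, float(c[n] / (c[n - 1] * n)))
```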
Cho, C. I.; Alam, M.; Choi, T. J.; Choy, Y. H.; Choi, J. G.; Lee, S. S.; Cho, K. H.
2016-01-01
The objectives of the study were to estimate genetic parameters for milk production traits of Holstein cattle using random regression models (RRMs), and to compare the goodness of fit of various RRMs with homogeneous and heterogeneous residual variances. A total of 126,980 test-day milk production records of the first parity Holstein cows between 2007 and 2014 from the Dairy Cattle Improvement Center of National Agricultural Cooperative Federation in South Korea were used. These records included milk yield (MILK), fat yield (FAT), protein yield (PROT), and solids-not-fat yield (SNF). The statistical models included random effects of genetic and permanent environments using Legendre polynomials (LP) of the third to fifth order (L3-L5), fixed effects of herd-test day, year-season at calving, and a fixed regression for the test-day record (third to fifth order). The residual variances in the models were either homogeneous (HOM) or heterogeneous (15 classes, HET15; 60 classes, HET60). A total of nine models (3 orders of polynomials×3 types of residual variance) including L3-HOM, L3-HET15, L3-HET60, L4-HOM, L4-HET15, L4-HET60, L5-HOM, L5-HET15, and L5-HET60 were compared using Akaike information criteria (AIC) and/or Schwarz Bayesian information criteria (BIC) statistics to identify the model(s) of best fit for their respective traits. The lowest BIC value was observed for the models L5-HET15 (MILK; PROT; SNF) and L4-HET15 (FAT), which fitted best. In general, the BIC value of the HET15 model for a particular polynomial order was lower than that of the HET60 model in most cases. This implies that the orders of LP and the types of residual variances affect the goodness of fit of the models. Also, the heterogeneity of residual variances should be considered in test-day analysis. The heritability estimates from the best-fitting models ranged from 0.08 to 0.15 for MILK, 0.06 to 0.14 for FAT, 0.08 to 0.12 for PROT, and 0.07 to 0.13 for SNF according to days in milk of first lactation. Genetic variances for the studied traits tended to decrease during the earlier stages of lactation, which was followed by increases in the middle and decreases again at the end of lactation. With regard to the fitness of the models and the differential genetic parameters across the lactation stages, we could estimate genetic parameters more accurately from RRMs than from lactation models. Therefore, we suggest using RRMs in place of lactation models to make national dairy cattle genetic evaluations for milk production traits in Korea. PMID:26954184
Pereira, R J; Bignardi, A B; El Faro, L; Verneque, R S; Vercesi Filho, A E; Albuquerque, L G
2013-01-01
Studies investigating the use of random regression models for genetic evaluation of milk production in Zebu cattle are scarce. In this study, 59,744 test-day milk yield records from 7,810 first lactations of purebred dairy Gyr (Bos indicus) and crossbred (dairy Gyr × Holstein) cows were used to compare random regression models in which additive genetic and permanent environmental effects were modeled using orthogonal Legendre polynomials or linear spline functions. Residual variances were modeled considering 1, 5, or 10 classes of days in milk. Five classes fitted the changes in residual variances over the lactation adequately and were used for model comparison. The model that fitted linear spline functions with 6 knots provided the lowest sum of residual variances across lactation. On the other hand, according to the deviance information criterion (DIC) and Bayesian information criterion (BIC), a model using third-order and fourth-order Legendre polynomials for additive genetic and permanent environmental effects, respectively, provided the best fit. However, the high rank correlation (0.998) between this model and the one applying third-order Legendre polynomials for both additive genetic and permanent environmental effects indicates that, in practice, the same bulls would be selected by both models. The latter model, which is less parameterized, is a parsimonious option for fitting dairy Gyr breed test-day milk yield records. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
The relationship between quality of life and physical fitness in people with severe mental illness.
Perez-Cruzado, D; Cuesta-Vargas, A I; Vera-Garcia, E; Mayoral-Cleries, F
2018-05-02
Quality of life of people with severe mental illness may be decreased by the high occurrence of metabolic and cardiovascular diseases. Physical fitness emerges as a modifiable factor in this population through physical activity, and such modification could influence quality of life. The aim of the present study was to determine the contribution of physical fitness to the quality of life of people with severe mental illness. In the current study, a physiotherapist and an occupational therapist assessed 62 people with severe mental illness. Physical fitness was measured with a battery of 11 fitness tests covering flexibility, strength, balance, and endurance. Quality of life was assessed with the EQ-5D-3L scale, which measures five dimensions (mobility, self-care, usual activities, pain-discomfort, and anxiety-depression). Significant correlations were found between quality of life and primary variables of physical fitness (balance, endurance, and upper-limb strength). Endurance explained 22.9% of the variance in quality of life in people with severe mental illness. Functional reach added another 36.2% of variance to the prediction of quality of life. The results of the present study suggest that some variables of physical fitness are associated with quality of life in people with severe mental illness. Improving the physical fitness of this population should be a primary objective. ClinicalTrials.gov Identifier: NCT02413164, retrospectively registered February 2017.
Forward-Backward Emission of Target Evaporated Fragments at High Energy Nucleus-Nucleus Collisions
NASA Astrophysics Data System (ADS)
Zhang, Zhi; Ma, Tian-Li; Zhang, Dong-Hai
The multiplicity distribution, multiplicity moments, scaled variance and entropy of target evaporated fragments emitted in the forward and backward hemispheres in interactions of relativistic heavy ions with heavy emulsion target nuclei (AgBr) are investigated. It is found that the multiplicity distributions can be fitted by Gaussian distributions, with fitting parameters that differ between the two hemispheres for all the interactions. The multiplicity moments increase with the order q of the moment, and the second-order multiplicity moment is energy independent over the entire energy range for all the interactions. The scaled variance is close to one for all the interactions. The entropy in the forward hemisphere is greater than that in the backward hemisphere for all the interactions.
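Fitting a Gaussian to a multiplicity distribution and forming the scaled variance (variance over mean) is straightforward to sketch. The data below are synthetic Poisson counts, for which the scaled variance should come out close to one:

```python
# Sketch: Gaussian fit to a multiplicity distribution plus the scaled
# variance omega = Var/mean; the multiplicities here are synthetic.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
mult = rng.poisson(6.0, size=2000)             # toy fragment multiplicities
values, counts = np.unique(mult, return_counts=True)

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

popt, _ = curve_fit(gauss, values, counts, p0=[counts.max(), 6.0, 2.5])
print("Gaussian fit: mu=%.2f sigma=%.2f" % (popt[1], popt[2]))
print("scaled variance omega = %.3f" % (mult.var() / mult.mean()))
```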
Pleiotropic Models of Polygenic Variation, Stabilizing Selection, and Epistasis
Gavrilets, S.; de-Jong, G.
1993-01-01
We show that in polymorphic populations many polygenic traits pleiotropically related to fitness are expected to be under apparent "stabilizing selection" independently of the real selection acting on the population. This occurs, for example, if the genetic system is at a stable polymorphic equilibrium determined by selection and the nonadditive contributions of the loci to the trait value either are absent, or are random and independent of those to fitness. Stabilizing selection is also observed if the polygenic system is at an equilibrium determined by a balance between selection and mutation (or migration) when both additive and nonadditive contributions of the loci to the trait value are random and independent of those to fitness. We also compare different viability models that can maintain genetic variability at many loci with respect to their ability to account for the strong stabilizing selection on an additive trait. Let V_m be the genetic variance supplied by mutation (or migration) each generation, V_g be the genotypic variance maintained in the population, and n be the number of the loci influencing fitness. We demonstrate that in mutation (migration)-selection balance models the strength of apparent stabilizing selection is of order V_m/V_g. In the overdominant model and in the symmetric viability model the strength of apparent stabilizing selection is approximately 1/(2n) that of total selection on the whole phenotype. We show that a selection system that involves pairwise additive-by-additive epistasis in maintaining variability can lead to a lower genetic load and genetic variance in fitness (approximately 1/(2n) times) than an equivalent selection system that involves overdominance. We show that, in the epistatic model, the apparent stabilizing selection on an additive trait can be as strong as the total selection on the whole phenotype. PMID:8325491
ERIC Educational Resources Information Center
Feldt, Ronald C.; Graham, Melody; Dew, Dennis
2011-01-01
This study employed confirmatory factor analysis to examine the quality of fit of two measurement models of the Student Adaptation to College Questionnaire (N = 305). Following the observation of poor fit, exploratory factor analysis was used. Results indicated six factors that account for the variance in Student Adaptation to College…
Zheng, Hanrong; Fang, Zujie; Wang, Zhaoyong; Lu, Bin; Cao, Yulong; Ye, Qing; Qu, Ronghui; Cai, Haiwen
2018-01-31
It is a basic task in Brillouin distributed fiber sensors to extract the peak frequency of the scattering spectrum, since the peak frequency shift gives information on the fiber temperature and strain changes. Because of high-level noise, quadratic fitting is often used in the data processing. Formulas for the dependence of the minimum detectable Brillouin frequency shift (BFS) on the signal-to-noise ratio (SNR) and the frequency step have been presented in the literature, but in differing forms. A detailed derivation of new formulas for the BFS variance and its mean is given in this paper, showing in particular their dependence on the data range used in the fitting, including its length and the position of its center relative to the real spectral peak. The theoretical analyses are experimentally verified. It is shown that the center of the data range has a direct impact on the accuracy of the extracted BFS. We propose and demonstrate an iterative fitting method to mitigate such effects and improve the accuracy of BFS measurement.
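The iterative fitting can be sketched as re-centering the quadratic-fit window on each successive peak estimate, so that the fitted range stays symmetric about the true peak. The spectrum model and numbers below are assumptions for illustration:

```python
# Sketch of iterative quadratic peak fitting for a Brillouin gain spectrum:
# refit on a window re-centered at the previous peak estimate.
import numpy as np

rng = np.random.default_rng(4)
f = np.arange(10.600, 10.950, 0.002)            # GHz, frequency steps
true_bfs, width = 10.781, 0.030                 # hypothetical peak and width
spectrum = 1.0 / (1.0 + ((f - true_bfs) / width) ** 2)  # Lorentzian gain
spectrum += rng.normal(0.0, 0.05, f.size)       # measurement noise

peak = f[np.argmax(spectrum)]                   # crude initial estimate
for _ in range(5):                              # iterative re-centered fit
    win = np.abs(f - peak) <= width             # fit window about estimate
    a, b, c = np.polyfit(f[win], spectrum[win], 2)
    peak = -b / (2.0 * a)                       # vertex of the parabola
print("estimated BFS: %.4f GHz (true %.4f)" % (peak, true_bfs))
```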
Eberhard, Wynn L
2017-04-01
The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
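A minimal sketch of the slope method with inverse-variance weighting follows, assuming a simple single-scattering signal model and known Gaussian noise levels (all numbers are illustrative):

```python
# Sketch of the lidar slope method: fit ln(range-corrected signal)
# = ln(C) - 2*alpha*r by inverse-variance-weighted least squares.
import numpy as np

rng = np.random.default_rng(5)
r = np.linspace(0.2, 2.0, 60)                  # range, km
alpha_true, C = 0.4, 50.0                      # 1/km, system constant
power = C * np.exp(-2 * alpha_true * r) / r**2
noise_sd = 0.05 * power                        # Gaussian noise, known sd
signal = power + rng.normal(0.0, noise_sd)

y = np.log(signal * r**2)                      # linearized observable
var_y = (noise_sd / signal) ** 2               # delta-method variance of y
w = 1.0 / var_y                                # inverse-variance weights

X = np.column_stack([np.ones_like(r), -2 * r]) # columns: ln(C), alpha
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print("retrieved alpha = %.3f 1/km (true %.3f)" % (beta[1], alpha_true))
```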
The Pricing of European Options Under the Constant Elasticity of Variance with Stochastic Volatility
NASA Astrophysics Data System (ADS)
Bock, Bounghun; Choi, Sun-Yong; Kim, Jeong-Hoon
This paper considers a hybrid risky-asset price model given by a constant elasticity of variance multiplied by a stochastic volatility factor. A multiscale analysis leads to an asymptotic pricing formula for both a European vanilla option and a barrier option near the zero elasticity of variance. The accuracy of the approximation is established in a rigorous manner. A numerical experiment on implied volatilities shows that the hybrid model improves upon some well-known models in fitting data for different maturities.
Drury, Douglas W.; Wade, Michael J.
2010-01-01
Hybrids from crosses between populations of the flour beetle, Tribolium castaneum, express varying degrees of inviability and morphological abnormalities. The proportion of allopatric population hybrids exhibiting these negative hybrid phenotypes varies widely, from 3% to 100%, depending upon the pair of populations crossed. We crossed three populations and measured two fitness components, fertility and adult offspring numbers from successful crosses, to determine how genes segregating within populations interact in inter-population hybrids to cause the negative phenotypes. With data from crosses of 40 sires from each of three populations to groups of 5 dams from their own and two divergent populations, we estimated the genetic variance and covariance for breeding value of fitness between the intra- and inter-population backgrounds and the sire × dam-population interaction variance. The latter component of the variance in breeding values estimates the change in genic effects between backgrounds owing to epistasis. Interacting genes with a positive effect, prior to fixation, in the sympatric background but a negative effect in the hybrid background cause reproductive incompatibility in the Dobzhansky-Muller speciation model. Thus, the sire × dam-population interaction provides a way to measure the progress toward speciation of genetically differentiating populations on a trait by trait basis using inter-population hybrids. PMID:21044199
Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.
Ritz, Christian; Van der Vliet, Leana
2009-09-01
The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions (variance homogeneity and normality) that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions often is caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of the response variable only is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the deprecation of less-desirable and less-flexible analytical techniques, such as linear interpolation.
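A short sketch of the Box-Cox step using scipy follows; the concentration-response data are simulated, and the lambda is chosen by maximum likelihood inside scipy.stats.boxcox:

```python
# Sketch: Box-Cox-transform a response with variance heterogeneity before
# (non)linear regression; the concentration-response data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
conc = np.repeat([0.0, 1.0, 2.0, 4.0, 8.0], 10)
mean = 100.0 / (1.0 + (conc / 3.0) ** 2)         # toy dose-response curve
# variance shrinks where effects are severe, violating homogeneity:
response = rng.normal(mean, 1.0 + 0.1 * mean)
response = np.clip(response, 0.1, None)          # Box-Cox requires y > 0

transformed, lam = stats.boxcox(response)
print("selected Box-Cox lambda: %.2f" % lam)
print("group variances after transform:",
      [round(transformed[conc == c].var(), 2) for c in np.unique(conc)])
```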
Deletion Diagnostics for the Generalised Linear Mixed Model with independent random effects
Ganguli, B.; Roy, S. Sen; Naskar, M.; Malloy, E. J.; Eisen, E. A.
2015-01-01
The Generalised Linear Mixed Model (GLMM) is widely used for modelling environmental data. However, such data are prone to influential observations which can distort the estimated exposure-response curve particularly in regions of high exposure. Deletion diagnostics for iterative estimation schemes commonly derive the deleted estimates based on a single iteration of the full system holding certain pivotal quantities such as the information matrix to be constant. In this paper, we present an approximate formula for the deleted estimates and Cook’s distance for the GLMM which does not assume that the estimates of variance parameters are unaffected by deletion. The procedure allows the user to calculate standardised DFBETAs for mean as well as variance parameters. In certain cases, such as when using the GLMM as a device for smoothing, such residuals for the variance parameters are interesting in their own right. In general, the procedure leads to deleted estimates of mean parameters which are corrected for the effect of deletion on variance components as estimation of the two sets of parameters is interdependent. The probabilistic behaviour of these residuals is investigated and a simulation based procedure suggested for their standardisation. The method is used to identify influential individuals in an occupational cohort exposed to silica. The results show that failure to conduct post model fitting diagnostics for variance components can lead to erroneous conclusions about the fitted curve and unstable confidence intervals. PMID:26626135
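The familiar OLS analogue of such one-step deletion diagnostics can be sketched in a few lines; note that the paper's GLMM procedure additionally updates the variance parameters, which this simple version does not:

```python
# Sketch of one-step deletion diagnostics in the plain OLS setting:
# DFBETA_i = (X'X)^{-1} x_i e_i / (1 - h_ii), without refitting n times.
import numpy as np

rng = np.random.default_rng(7)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
y[0] += 8.0                                    # plant one influential point

X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
e = y - X @ beta                               # residuals
h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)    # leverages h_ii
dfbeta = (XtX_inv @ X.T).T * (e / (1 - h))[:, None]
print("largest |DFBETA| rows:", np.argsort(np.abs(dfbeta).sum(1))[-3:])
```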
Advances in the computation of the Sjöstrand, Rossi, and Feynman distributions
Talamo, A.; Gohar, Y.; Gabrielli, F.; ...
2017-02-01
This study illustrates recent computational advances in the application of the Sjöstrand (area), Rossi, and Feynman methods to estimate the effective multiplication factor of a subcritical system driven by an external neutron source. The methodologies introduced in this study have been validated against experimental results from the KUCA facility in Japan using Monte Carlo (MCNP6 and MCNPX) and deterministic (ERANOS, VARIANT, and PARTISN) codes. When the assembly is driven by a pulsed neutron source generated by a particle accelerator and delayed neutrons are at equilibrium, the Sjöstrand method becomes extremely fast if the integral of the reaction rate from a single pulse is split into two parts. These two integrals distinguish between the neutron counts during and after the pulse period. Finally, when the facility is driven by a spontaneous fission neutron source, the timestamps of the detector neutron counts can be obtained with up to nanosecond precision using MCNP6, which allows the Rossi and Feynman distributions to be obtained.
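The Feynman distribution from timestamped counts is simple to form with non-overlapping gates. A sketch on synthetic timestamps (correlated pairs standing in for fission chains; all rates and delays are assumptions) shows the excess variance-to-mean Y growing with gate width:

```python
# Sketch: Feynman excess variance-to-mean Y = Var/mean - 1 from detector
# timestamps using non-overlapping gates; the timestamps are synthetic.
import numpy as np

rng = np.random.default_rng(8)
t_end = 10.0                                   # seconds of acquisition
singles = rng.uniform(0.0, t_end, 20000)       # uncorrelated background
parents = rng.uniform(0.0, t_end, 4000)        # correlated bursts
pairs = np.concatenate([parents, parents + rng.exponential(50e-6, 4000)])
stamps = np.sort(np.concatenate([singles, pairs]))

for gate in [10e-6, 100e-6, 1e-3]:             # gate widths, seconds
    edges = np.arange(0.0, t_end, gate)
    counts, _ = np.histogram(stamps, bins=edges)
    y = counts.var() / counts.mean() - 1.0     # excess variance-to-mean
    print("gate %.0e s: Y = %.4f" % (gate, y))
```

Y approaches zero for purely Poisson data and saturates at wide gates once the gate exceeds the correlation time, consistent with the usual Feynman-Y behavior.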
Infinities in Quantum Field Theory and in Classical Computing: Renormalization Program
NASA Astrophysics Data System (ADS)
Manin, Yuri I.
Introduction. The main observable quantities in Quantum Field Theory, correlation functions, are expressed by the celebrated Feynman path integrals. A mathematical definition of them involving a measure and actual integration is still lacking. Instead, it is replaced by a series of ad hoc but highly efficient and suggestive heuristic formulas such as the perturbation formalism. The latter interprets such an integral as a formal series of finite-dimensional but divergent integrals, indexed by Feynman graphs, the list of which is determined by the Lagrangian of the theory. Renormalization is a prescription that allows one to systematically "subtract infinities" from these divergent terms, producing an asymptotic series for quantum correlation functions. On the other hand, graphs, treated as "flowcharts", also form a combinatorial skeleton of abstract computation theory. Partial recursive functions, which according to Church's thesis exhaust the universe of (semi)computable maps, are generally not everywhere defined due to potentially infinite searches and loops. In this paper I argue that such infinities can be addressed in the same way as Feynman divergences. More details can be found in [9,10].
Global Estimates of Errors in Quantum Computation by the Feynman-Vernon Formalism
NASA Astrophysics Data System (ADS)
Aurell, Erik
2018-06-01
The operation of a quantum computer is considered as a general quantum operation on a mixed state on many qubits followed by a measurement. The general quantum operation is further represented as a Feynman-Vernon double path integral over the histories of the qubits and of an environment, after which the environment is traced out. The qubit histories are taken to be paths on the two-sphere S^2 as in Klauder's coherent-state path integral of spin, and the environment is assumed to consist of harmonic oscillators initially in thermal equilibrium and linearly coupled to qubit operators \hat{S}_z. The environment can then be integrated out to give a Feynman-Vernon influence action coupling the forward and backward histories of the qubits. This representation allows one to derive in a simple way estimates showing that the total error of operation of a quantum computer without error correction scales linearly with the number of qubits and the time of operation. It also allows one to discuss Kitaev's toric code interacting with an environment in the same manner.
Barnett, Lisa M; Morgan, Philip J; van Beurden, Eric; Beard, John R
2008-08-08
The purpose of this paper was to investigate whether perceived sports competence mediates the relationship between childhood motor skill proficiency and subsequent adolescent physical activity and fitness. In 2000, children's motor skill proficiency was assessed as part of a school-based physical activity intervention. In 2006/07, participants were followed up as part of the Physical Activity and Skills Study and completed assessments for perceived sports competence (Physical Self-Perception Profile), physical activity (Adolescent Physical Activity Recall Questionnaire) and cardiorespiratory fitness (Multistage Fitness Test). Structural equation modelling techniques were used to determine whether perceived sports competence mediated between childhood object control skill proficiency (composite score of kick, catch and overhand throw), and subsequent adolescent self-reported time in moderate-to-vigorous physical activity and cardiorespiratory fitness. Of 928 original intervention participants, 481 were located in 28 schools and 276 (57%) were assessed with at least one follow-up measure. Slightly more than half were female (52.4%) with a mean age of 16.4 years (range 14.2 to 18.3 yrs). Relevant assessments were completed by 250 (90.6%) students for the Physical Activity Model and 227 (82.3%) for the Fitness Model. Both hypothesised mediation models had a good fit to the observed data, with the Physical Activity Model accounting for 18% (R2 = 0.18) of physical activity variance and the Fitness Model accounting for 30% (R2 = 0.30) of fitness variance. Sex did not act as a moderator in either model. Developing a high perceived sports competence through object control skill development in childhood is important for both boys and girls in determining adolescent physical activity participation and fitness. Our findings highlight the need for interventions to target and improve the perceived sports competence of youth.
von Thiele Schwarz, Ulrica; Sjöberg, Anders; Hasson, Henna; Tafvelin, Susanne
2014-12-01
To test the factor structure and variance components of the productivity subscales of the Health and Work Questionnaire (HWQ). A total of 272 individuals from one company answered the HWQ scale, including three dimensions (efficiency, quality, and quantity) that the respondent rated from three perspectives: their own, their supervisor's, and their coworkers'. A confirmatory factor analysis was performed, and common and unique variance components evaluated. A common factor explained 81% of the variance (reliability 0.95). All dimensions and rater perspectives contributed with unique variance. The final model provided a perfect fit to the data. Efficiency, quality, and quantity and three rater perspectives are valid parts of the self-rated productivity measurement model, but with a large common factor. Thus, the HWQ can be analyzed either as one factor or by extracting the unique variance for each subdimension.
Adaptive cyclic physiologic noise modeling and correction in functional MRI.
Beall, Erik B
2010-03-30
Physiologic noise in BOLD-weighted MRI data is known to be a significant source of variance, reducing statistical power and specificity in fMRI and functional connectivity analyses. We show a dramatic improvement over current noise correction methods in both fMRI and fcMRI data that avoids overfitting. The traditional noise model is a Fourier series expansion superimposed on the periodicity of concurrently measured breathing and cardiac cycles. Correction using this model removes variance matching the periodicity of the physiologic cycles. This framework allows easy modeling of noise. However, using a large number of regressors comes at the cost of removing variance unrelated to physiologic noise, such as variance due to the signal of functional interest (overfitting the data). Our hypothesis is that a small variety of fits describes all of the significantly coupled physiologic noise. If this is true, we can replace the large number of regressors used in the model with a smaller number of fitted regressors and thereby account for the noise sources with a smaller loss of the variance of interest. We describe these extensions and demonstrate that we can preserve variance in the data unrelated to physiologic noise while removing physiologic noise equivalently, resulting in data with a higher effective SNR than with current correction techniques. Our results demonstrate a significant improvement in the sensitivity of fMRI (up to a 17% increase in activation volume compared with higher-order traditional noise correction) and functional connectivity analyses. Copyright (c) 2010 Elsevier B.V. All rights reserved.
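The underlying cyclic noise model, Fourier regressors built on the measured physiologic phase and projected out of each voxel time series, can be sketched as follows; the phases, amplitudes, and expansion order below are illustrative assumptions:

```python
# Sketch of cyclic physiologic noise regression: Fourier regressors on the
# measured cardiac phase, projected out of a simulated voxel time series.
import numpy as np

rng = np.random.default_rng(9)
n_vols, order = 240, 2
cardiac_phase = np.cumsum(rng.uniform(6.0, 7.0, n_vols))  # radians per volume
signal = rng.normal(size=n_vols) + 0.5 * np.sin(cardiac_phase)

# Fourier expansion on the physiologic cycle, plus an intercept column.
regs = [np.ones(n_vols)]
for k in range(1, order + 1):
    regs += [np.sin(k * cardiac_phase), np.cos(k * cardiac_phase)]
X = np.column_stack(regs)

beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
cleaned = signal - X @ beta + beta[0]          # remove noise, keep the mean
print("variance before %.3f, after %.3f" % (signal.var(), cleaned.var()))
```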
Non-additive genetic variation in growth, carcass and fertility traits of beef cattle.
Bolormaa, Sunduimijid; Pryce, Jennie E; Zhang, Yuandan; Reverter, Antonio; Barendse, William; Hayes, Ben J; Goddard, Michael E
2015-04-02
A better understanding of non-additive variance could lead to increased knowledge on the genetic control and physiology of quantitative traits, and to improved prediction of the genetic value and phenotype of individuals. Genome-wide panels of single nucleotide polymorphisms (SNPs) have been mainly used to map additive effects for quantitative traits, but they can also be used to investigate non-additive effects. We estimated dominance and epistatic effects of SNPs on various traits in beef cattle and the variance explained by dominance, and quantified the increase in accuracy of phenotype prediction by including dominance deviations in its estimation. Genotype data (729 068 real or imputed SNPs) and phenotypes on up to 16 traits of 10 191 individuals from Bos taurus, Bos indicus and composite breeds were used. A genome-wide association study was performed by fitting the additive and dominance effects of single SNPs. The dominance variance was estimated by fitting a dominance relationship matrix constructed from the 729 068 SNPs. The accuracy of predicted phenotypic values was evaluated by best linear unbiased prediction using the additive and dominance relationship matrices. Epistatic interactions (additive × additive) were tested between each of the 28 SNPs that are known to have additive effects on multiple traits, and each of the other remaining 729 067 SNPs. The number of significant dominance effects was greater than expected by chance and most of them were in the direction that is presumed to increase fitness and in the opposite direction to inbreeding depression. Estimates of dominance variance explained by SNPs varied widely between traits, but had large standard errors. The median dominance variance across the 16 traits was equal to 5% of the phenotypic variance. Including a dominance deviation in the prediction did not significantly increase its accuracy for any of the phenotypes. The number of additive × additive epistatic effects that were statistically significant was greater than expected by chance. Significant dominance and epistatic effects occur for growth, carcass and fertility traits in beef cattle but they are difficult to estimate precisely and including them in phenotype prediction does not increase its accuracy.
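A dominance relationship matrix of the kind fitted above can be sketched from SNP genotypes; the coding below follows a common parameterization (e.g., Vitezica et al. 2013) and may differ in detail from the authors' construction, and the genotypes are simulated:

```python
# Sketch of a genomic dominance relationship matrix from 0/1/2 genotypes,
# using the breeding-value dominance coding (assumed parameterization).
import numpy as np

rng = np.random.default_rng(10)
n_ind, n_snp = 50, 1000
p = rng.uniform(0.05, 0.95, n_snp)             # allele frequencies
geno = rng.binomial(2, p, size=(n_ind, n_snp)) # 0/1/2 genotype codes

q = 1.0 - p
# dominance coding per genotype: 0 -> -2p^2, 1 -> 2pq, 2 -> -2q^2
H = np.where(geno == 1, 2 * p * q,
             np.where(geno == 0, -2 * p**2, -2 * q**2))
D = H @ H.T / np.sum((2 * p * q) ** 2)         # dominance relationship matrix
print("mean diagonal of D: %.3f" % D.diagonal().mean())
```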
Squeezed states, time-energy uncertainty relation, and Feynman's rest of the universe
NASA Technical Reports Server (NTRS)
Han, D.; Kim, Y. S.; Noz, Marilyn E.
1992-01-01
Two illustrative examples are given for Feynman's rest of the universe. The first example is the two-mode squeezed state of light, where no measurement is taken for one of the modes. The second example is the relativistic quark model, where no measurement is possible for the time-like separation of quarks confined in a hadron. It is possible to illustrate these examples using the covariant oscillator formalism. It is shown that the lack of symmetry between the position-momentum and time-energy uncertainty relations leads to an increase in entropy when the system is viewed from different Lorentz frames.
On critical exponents without Feynman diagrams
NASA Astrophysics Data System (ADS)
Sen, Kallol; Sinha, Aninda
2016-11-01
In order to achieve a better analytic handle on the modern conformal bootstrap program, we re-examine and extend the pioneering 1974 work of Polyakov’s, which was based on consistency between the operator product expansion and unitarity. As in the bootstrap approach, this method does not depend on evaluating Feynman diagrams. We show how this approach can be used to compute the anomalous dimensions of certain operators in the O(n) model at the Wilson-Fisher fixed point in 4-ɛ dimensions up to O({ɛ }2). AS dedicates this work to the loving memory of his mother.
NASA Astrophysics Data System (ADS)
Naumenko, Mikhail; Samarin, Viacheslav
2018-02-01
A modern parallel computing algorithm has been applied to the solution of the few-body problem. The approach is based on Feynman's continual integrals method implemented in C++ using NVIDIA CUDA technology. A wide range of 3-body and 4-body bound systems has been considered, including nuclei described as consisting of protons and neutrons (e.g., 3,4He) and nuclei described as consisting of clusters and nucleons (e.g., 6He). The correctness of the results was checked by comparison with an exactly solvable 4-body oscillator system and with experimental data.
Genetic Model Fitting in IQ, Assortative Mating & Components of IQ Variance.
ERIC Educational Resources Information Center
Capron, Christiane; Vetta, Adrian R.; Vetta, Atam
1998-01-01
The biometrical school of scientists who fit models to IQ data traces their intellectual ancestry to R. Fisher (1918), but their genetic models have no predictive value. Fisher himself was critical of the concept of heritability, because assortative mating, such as for IQ, introduces complexities into the study of a genetic trait. (SLD)
USDA-ARS?s Scientific Manuscript database
The objectives of this research were to estimate variance components for 6 common health events recorded by producers on U.S. dairy farms, as well as investigate correlations with fitness traits currently used for selection. Producer-recorded health event data were available from Dairy Records Manag...
Development and Initial Validation of the Multicultural Personality Inventory (MPI).
Ponterotto, Joseph G; Fietzer, Alexander W; Fingerhut, Esther C; Woerner, Scott; Stack, Lauren; Magaldi-Dopman, Danielle; Rust, Jonathan; Nakao, Gen; Tsai, Yu-Ting; Black, Natasha; Alba, Renaldo; Desai, Miraj; Frazier, Chantel; LaRue, Alyse; Liao, Pei-Wen
2014-01-01
Two studies summarize the development and initial validation of the Multicultural Personality Inventory (MPI). In Study 1, the 115-item prototype MPI was administered to 415 university students, where exploratory factor analysis resulted in a 70-item, 7-factor model. In Study 2, the 70-item MPI and theoretically related companion instruments were administered to a multisite sample of 576 university students. Confirmatory factor analysis found the 7-factor structure to be a relatively good fit to the data (Comparative Fit Index =.954; root mean square error of approximation =.057), and MPI factors predicted variance in criterion variables above and beyond the variance accounted for by broad personality traits (i.e., Big Five). Study limitations and directions for further validation research are specified.
Direct and indirect genetic and fine-scale location effects on breeding date in song sparrows.
Germain, Ryan R; Wolak, Matthew E; Arcese, Peter; Losdat, Sylvain; Reid, Jane M
2016-11-01
Quantifying direct and indirect genetic effects of interacting females and males on variation in jointly expressed life-history traits is central to predicting microevolutionary dynamics. However, accurately estimating sex-specific additive genetic variances in such traits remains difficult in wild populations, especially if related individuals inhabit similar fine-scale environments. Breeding date is a key life-history trait that responds to environmental phenology and mediates individual and population responses to environmental change. However, no studies have estimated female (direct) and male (indirect) additive genetic and inbreeding effects on breeding date, and estimated the cross-sex genetic correlation, while simultaneously accounting for fine-scale environmental effects of breeding locations, impeding prediction of microevolutionary dynamics. We fitted animal models to 38 years of song sparrow (Melospiza melodia) phenology and pedigree data to estimate sex-specific additive genetic variances in breeding date, and the cross-sex genetic correlation, thereby estimating the total additive genetic variance while simultaneously estimating sex-specific inbreeding depression. We further fitted three forms of spatial animal model to explicitly estimate variance in breeding date attributable to breeding location, overlap among breeding locations and spatial autocorrelation. We thereby quantified fine-scale location variances in breeding date and quantified the degree to which estimating such variances affected the estimated additive genetic variances. The non-spatial animal model estimated nonzero female and male additive genetic variances in breeding date (sex-specific heritabilities: 0·07 and 0·02, respectively) and a strong, positive cross-sex genetic correlation (0·99), creating substantial total additive genetic variance (0·18). Breeding date varied with female, but not male inbreeding coefficient, revealing direct, but not indirect, inbreeding depression. All three spatial animal models estimated small location variance in breeding date, but because relatedness and breeding location were virtually uncorrelated, modelling location variance did not alter the estimated additive genetic variances. Our results show that sex-specific additive genetic effects on breeding date can be strongly positively correlated, which would affect any predicted rates of microevolutionary change in response to sexually antagonistic or congruent selection. Further, we show that inbreeding effects on breeding date can also be sex specific and that genetic effects can exceed phenotypic variation stemming from fine-scale location-based variation within a wild population. © 2016 The Authors. Journal of Animal Ecology © 2016 British Ecological Society.
Prediction-error variance in Bayesian model updating: a comparative study
NASA Astrophysics Data System (ADS)
Asadollahi, Parisa; Li, Jian; Huang, Yong
2017-04-01
In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. It is therefore critical for robustness in updating the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies for dealing with the prediction error variances on model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. Different levels of modeling uncertainty and complexity are represented by three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results, especially when the number of measurements is small.
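As a toy illustration of strategy 3 above (treating the prediction-error variance as an uncertain parameter updated alongside the model parameters), here is a minimal Python sketch; it uses plain random-walk Metropolis instead of Transitional MCMC for brevity, and the one-parameter "model" is a hypothetical stand-in, not the six-story shear building.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "structure": response is a linear function of one
# stiffness-like parameter theta (illustrative stand-in for a FE model).
def model(theta, x):
    return theta * x

x = np.linspace(0.1, 1.0, 20)
y = model(2.0, x) + rng.normal(0.0, 0.05, size=x.size)  # synthetic data

def log_post(theta, log_var):
    """Gaussian likelihood with the prediction-error variance treated as
    an uncertain parameter (flat priors, for brevity)."""
    var = np.exp(log_var)
    resid = y - model(theta, x)
    return -0.5 * (resid**2).sum() / var - 0.5 * y.size * np.log(var)

# Random-walk Metropolis over (theta, log sigma^2)
chain, state = [], np.array([1.0, np.log(0.1)])
lp = log_post(*state)
for _ in range(20_000):
    prop = state + rng.normal(0.0, [0.02, 0.1])
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:
        state, lp = prop, lp_prop
    chain.append(state.copy())

theta_s, logv_s = np.array(chain[5000:]).T
print("theta:", theta_s.mean(), " sigma:", np.exp(logv_s).mean() ** 0.5)
```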
Perturbation theory in light-cone quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langnau, A.
1992-01-01
A thorough investigation of light-cone properties which are characteristic for higher dimensions is very important. The easiest way of addressing these issues is by analyzing the perturbative structure of light-cone field theories first. Perturbative studies cannot be substituted for an analysis of problems related to a nonperturbative approach. However, in order to lay down groundwork for upcoming nonperturbative studies, it is indispensable to validate the renormalization methods at the perturbative level, i.e., to gain control over the perturbative treatment first. A clear understanding of divergences in perturbation theory, as well as their numerical treatment, is a necessary first step towards formulating such a program. The first objective of this dissertation is to clarify this issue, at least in second and fourth order in perturbation theory. The work in this dissertation can provide guidance for the choice of counterterms in Discrete Light-Cone Quantization or the Tamm-Dancoff approach. A second objective of this work is the study of light-cone perturbation theory as a competitive tool for conducting perturbative Feynman diagram calculations. Feynman perturbation theory has become the most practical tool for computing cross sections in high energy physics and other physical properties of field theory. Although this standard covariant method has been applied to a great range of problems, computations beyond one-loop corrections are very difficult. Because of the algebraic complexity of the Feynman calculations in higher-order perturbation theory, it is desirable to automatize Feynman diagram calculations so that algebraic manipulation programs can carry out almost the entire calculation. This thesis presents a step in this direction. The technique we are elaborating on here is known as light-cone perturbation theory.
Applying Rasch model analysis in the development of the Cantonese Tone Identification Test (CANTIT).
Lee, Kathy Y S; Lam, Joffee H S; Chan, Kit T Y; van Hasselt, Charles Andrew; Tong, Michael C F
2017-01-01
We applied Rasch analysis to evaluate the internal structure of a lexical tone perception test known as the Cantonese Tone Identification Test (CANTIT). A 75-item pool (CANTIT-75) with pictures and sound tracks was developed. Respondents were required to make a four-alternative forced choice on each item. A short version of 30 items (CANTIT-30) was developed based on fit statistics, difficulty estimates, and content evaluation. Internal structure was evaluated by fit statistics and Rasch Factor Analysis (RFA). A total of 200 children with normal hearing and 141 children with hearing impairment were recruited. For CANTIT-75, all infit and 97% of outfit values were < 2.0. RFA revealed that 40.1% of total variance was explained by the Rasch measure; the first residual component explained 2.5% of total variance, with an eigenvalue of 3.1. For CANTIT-30, all infit and outfit values were < 2.0. The Rasch measure explained 38.8% of total variance; the first residual component explained 3.9% of total variance, with an eigenvalue of 1.9. The Rasch model provides excellent guidance for the development of short forms. Both CANTIT-75 and CANTIT-30 possess a satisfactory internal structure as construct validity evidence in measuring the lexical tone identification ability of Cantonese speakers.
Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B
2003-11-01
The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
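A stripped-down illustration of the Gibbs machinery described above, without the random genetic and permanent-environment effects and with flat priors chosen for brevity, might look like the following Python sketch; the data, priors, and update forms are illustrative rather than the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic somatic-cell-score-like data: mixture of "healthy" and
# "diseased" components (all settings here are illustrative).
n = 2000
z_true = rng.random(n) < 0.3
y = np.where(z_true, rng.normal(5.0, 1.5, n), rng.normal(2.0, 1.0, n))

mu, var, pm = np.array([1.0, 6.0]), np.array([1.0, 1.0]), 0.5
for _ in range(2000):
    # 1) component memberships given current parameters
    like0 = (1 - pm) * np.exp(-0.5 * (y - mu[0])**2 / var[0]) / np.sqrt(var[0])
    like1 = pm * np.exp(-0.5 * (y - mu[1])**2 / var[1]) / np.sqrt(var[1])
    z = rng.random(n) < like1 / (like0 + like1)
    # 2) mixing proportion Pm | z (Beta posterior under a uniform prior)
    pm = rng.beta(1 + z.sum(), 1 + n - z.sum())
    # 3) heteroscedastic component means and variances | z (flat priors)
    for k, idx in enumerate([~z, z]):
        yk = y[idx]
        nk = max(yk.size, 1)
        mu[k] = rng.normal(yk.mean() if yk.size else mu[k], np.sqrt(var[k] / nk))
        ssq = ((yk - mu[k])**2).sum() if yk.size else 1.0
        var[k] = ssq / rng.chisquare(nk)  # scaled inverse-chi-square draw

print("P(diseased) ~", pm, " means:", mu, " variances:", var)
```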
Sources of genetic and phenotypic variance in fertilization rates and larval traits in a sea urchin.
Evans, Jonathan P; García-González, Francisco; Marshall, Dustin J
2007-12-01
In nonresource based mating systems females are thought to derive indirect genetic benefits by mating with high-quality males. Such benefits can be due either to the intrinsic genetic quality of sires or to beneficial interactions between maternal and paternal haplotypes. Animals with external fertilization and no parental care offer unrivaled opportunities to address these hypotheses. With these systems, cross-classified breeding designs and in vitro fertilization can be used to disentangle sources of genetic and environmental variance in offspring fitness. Here, we employ these approaches in the Australian sea urchin Heliocidaris erythrogramma and explore how sire-dam identities influence fertilization rates, embryo viability (survival to hatching), and metamorphosis, as well as the interrelationships between these potential fitness traits. We show that fertilization is influenced by a combination of strong maternal effects and intrinsic male effects. Our subsequent analysis of embryo viability, however, revealed a highly significant interaction between parental genotypes, indicating that partial incompatibilities can severely limit offspring survival at this life-history stage. Importantly, we detected no significant relationship between fertilization rates and embryo viability. This finding suggests that fertilization rates should not be inferred from hatching rates, which is commonly practiced in species in which it is not possible to estimate fertilization at conception. Finally, we detected significant additive genetic variance due to sires in rates of juvenile metamorphosis, and a positive correlation between fertilization rates and metamorphosis. This latter finding indicates that the performance of a male's ejaculate in noncompetitive IVF trials predicts heritable offspring traits, although the fitness implications of variance in rates of spontaneous juvenile metamorphosis have yet to be determined.
Genome-Assisted Prediction of Quantitative Traits Using the R Package sommer.
Covarrubias-Pazaran, Giovanny
2016-01-01
Most traits of agronomic importance are quantitative in nature, and genetic markers have been used for decades to dissect such traits. Recently, genomic selection has earned attention as next generation sequencing technologies became feasible for major and minor crops. Mixed models have become a key tool for fitting genomic selection models, but most current genomic selection software can only include a single variance component other than the error, making hybrid prediction using additive, dominance and epistatic effects unfeasible for species displaying heterotic effects. Moreover, likelihood-based software for fitting mixed models with multiple random effects that allows the user to specify the variance-covariance structure of random effects has not been fully exploited. A new open-source R package called sommer is presented to facilitate the use of mixed models for genomic selection and hybrid prediction purposes, using more than one variance component and allowing specification of covariance structures. The use of sommer for genomic prediction is demonstrated through several examples using maize and wheat genotypic and phenotypic data. At its core, the program contains three algorithms for estimating variance components: Average Information (AI), Expectation-Maximization (EM) and Efficient Mixed Model Association (EMMA). Kernels for calculating the additive, dominance and epistatic relationship matrices are included, along with other useful functions for genomic analysis. Results from sommer were comparable to those of other software, but the analysis was faster than Bayesian counterparts by margins of hours to days. In addition, the ability to deal with missing data, combined with greater flexibility and speed than other REML-based software, was achieved by putting together some of the most efficient algorithms to fit models in a gentle environment such as R.
ERIC Educational Resources Information Center
Voss, Michelle W.; Erickson, Kirk I.; Prakash, Ruchika S.; Chaddock, Laura; Malkowski, Edward; Alves, Heloisa; Kim, Jennifer S.; Morris, Katherine S.; White, Siobhan M.; Wojcicki, Thomas R.; Hu, Liang; Szabo, Amanda; Klamm, Emily; McAuley, Edward; Kramer, Arthur F.
2010-01-01
Over the next 20 years the number of Americans diagnosed with dementia is expected to more than double (CDC, 2007). It is, therefore, an important public health initiative to understand what factors contribute to the longevity of a healthy mind. Both default mode network (DMN) function and increased aerobic fitness have been associated with better…
Rönnegård, L; Felleki, M; Fikse, W F; Mulder, H A; Strandberg, E
2013-04-01
Trait uniformity, or micro-environmental sensitivity, may be studied through individual differences in residual variance. These differences appear to be heritable, and the need exists, therefore, to fit models to predict breeding values explaining differences in residual variance. The aim of this paper is to estimate breeding values for micro-environmental sensitivity (vEBV) in milk yield and somatic cell score, and their associated variance components, on a large dairy cattle data set having more than 1.6 million records. Estimation of variance components, ordinary breeding values, and vEBV was performed using standard variance component estimation software (ASReml), applying the methodology for double hierarchical generalized linear models. Estimation using ASReml took less than 7 d on a Linux server. The genetic standard deviations for residual variance were 0.21 and 0.22 for somatic cell score and milk yield, respectively, which indicate moderate genetic variance for residual variance and imply that a standard deviation change in vEBV for one of these traits would alter the residual variance by 20%. This study shows that estimation of variance components, estimated breeding values and vEBV, is feasible for large dairy cattle data sets using standard variance component estimation software. The possibility to select for uniformity in Holstein dairy cattle based on these estimates is discussed. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Estimation of genetic parameters for milk yield in Murrah buffaloes by Bayesian inference.
Breda, F C; Albuquerque, L G; Euclydes, R F; Bignardi, A B; Baldi, F; Torres, R A; Barbosa, L; Tonhati, H
2010-02-01
Random regression models were used to estimate genetic parameters for test-day milk yield in Murrah buffaloes using Bayesian inference. Data comprised 17,935 test-day milk records from 1,433 buffaloes. Twelve models were tested using different combinations of third-, fourth-, fifth-, sixth-, and seventh-order orthogonal polynomials of weeks of lactation for additive genetic and permanent environmental effects. All models included the fixed effects of contemporary group, number of daily milkings and age of cow at calving as covariate (linear and quadratic effect). In addition, residual variances were considered to be heterogeneous with 6 classes of variance. Models were selected based on the residual mean square error, weighted average of residual variance estimates, and estimates of variance components, heritabilities, correlations, eigenvalues, and eigenfunctions. Results indicated that changes in the order of fit for additive genetic and permanent environmental random effects influenced the estimation of genetic parameters. Heritability estimates ranged from 0.19 to 0.31. Genetic correlation estimates were close to unity between adjacent test-day records, but decreased gradually as the interval between test-days increased. Results from mean squared error and weighted averages of residual variance estimates suggested that a model considering sixth- and seventh-order Legendre polynomials for additive and permanent environmental effects, respectively, and 6 classes for residual variances, provided the best fit. Nevertheless, this model presented the largest degree of complexity. A more parsimonious model, with fourth- and sixth-order polynomials, respectively, for these same effects, yielded very similar genetic parameter estimates. Therefore, this last model is recommended for routine applications. Copyright 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
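Random regression test-day models of this kind evaluate Legendre polynomials of the rescaled week of lactation as covariables for the random effects. A short Python sketch of how such a covariable matrix is commonly built follows; the week range and the normalization constant are assumptions, not values from the paper.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(week, order, wmin=1, wmax=44):
    """Evaluate normalized Legendre polynomials of the given order at the
    weeks of lactation, mapped onto [-1, 1] as in test-day models."""
    t = 2.0 * (np.asarray(week) - wmin) / (wmax - wmin) - 1.0
    # column j holds P_j(t) scaled by sqrt((2j+1)/2), a common normalization
    cols = [np.sqrt((2 * j + 1) / 2.0) * legendre.legval(t, np.eye(j + 1)[j])
            for j in range(order + 1)]
    return np.column_stack(cols)

# Covariables for a sixth-order additive-genetic regression, as in the
# best-fitting parsimonious model of the abstract (week range assumed).
Z = legendre_basis(np.arange(1, 45), order=6)
print(Z.shape)  # (44, 7): one column per polynomial coefficient
```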
Bold Diagrammatic Monte Carlo for Fermionic and Fermionized Systems
NASA Astrophysics Data System (ADS)
Svistunov, Boris
2013-03-01
In three different fermionic cases--the repulsive Hubbard model, resonant fermions, and fermionized spins-1/2 (on a triangular lattice)--we observe the phenomenon of sign blessing: the Feynman diagrammatic series features a finite convergence radius despite the factorial growth of the number of diagrams with diagram order. The bold diagrammatic Monte Carlo technique allows us to sample millions of skeleton Feynman diagrams. With the universal fermionization trick we can fermionize essentially any (bosonic, spin, mixed, etc.) lattice system. The combination of fermionization and bold diagrammatic Monte Carlo yields a universal first-principles approach to strongly correlated lattice systems, provided sign blessing is a generic fermionic phenomenon. Supported by NSF and DARPA.
Poisson equation for the Mercedes diagram in string theory at genus one
NASA Astrophysics Data System (ADS)
Basu, Anirban
2016-03-01
The Mercedes diagram has four trivalent vertices which are connected by six links such that they form the edges of a tetrahedron. This three-loop Feynman diagram contributes to the D^{12}R^4 amplitude at genus one in type II string theory, where the vertices are the points of insertion of the graviton vertex operators, and the links are the scalar propagators on the toroidal worldsheet. We obtain a modular invariant Poisson equation satisfied by the Mercedes diagram, where the source terms involve one- and two-loop Feynman diagrams. We calculate its contribution to the D^{12}R^4 amplitude.
Electron propagator calculations on the ionization energies of CrH⁻, MnH⁻ and FeH⁻
NASA Astrophysics Data System (ADS)
Lin, Jyh-Shing; Ortiz, J. V.
1990-08-01
Electron propagator calculations with unrestricted Hartree-Fock reference states yield the ionization energies of the title anions. Spin contamination in the anionic reference state is small, enabling the use of second- and third-order self-energies in the Dyson equation. Feynman-Dyson amplitudes for these ionizations are essentially identical to canonical spin-orbitals. For most of the final states, these consist of an antibonding combination of an sp metal hybrid, polarized away from the hydrogen, and hydrogen s functions. In one case, the Feynman-Dyson amplitude consists of nonbonding d functions. Calculated ionization energies are within 0.5 eV of experiment.
Gravity, Time, and Lagrangians
NASA Astrophysics Data System (ADS)
Huggins, Elisha
2010-11-01
Feynman mentioned to us that he understood a topic in physics if he could explain it to a college freshman, a high school student, or a dinner guest. Here we will discuss two topics that took us a while to get to that level. One is the relationship between gravity and time. The other is the minus sign that appears in the Lagrangian. (Why would one subtract potential energy from kinetic energy?) In this paper we discuss a thought experiment that relates gravity and time. Then we use a Feynman thought experiment to explain the minus sign in the Lagrangian. Our surprise was that these two topics are related.
Extended Hellmann-Feynman theorem for degenerate eigenstates
NASA Astrophysics Data System (ADS)
Zhang, G. P.; George, Thomas F.
2004-04-01
In a previous paper, we reported a failure of the traditional Hellmann-Feynman theorem (HFT) for degenerate eigenstates. This has generated enormous interest among different groups. In four independent papers by Fernandez, by Balawender, Hola, and March, by Vatsya, and by Alon and Cederbaum, an elegant method to solve the problem was devised. The main idea is that one has to construct and diagonalize the force matrix for the degenerate case, and only the eigenforces are well defined. We believe this is an important extension to HFT. Using our previous example for an energy level of fivefold degeneracy, we find that those eigenforces correctly reflect the symmetry of the molecule.
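A hedged sketch of the construction being referred to, reconstructed from the abstract rather than quoted from any of the cited papers: within a g-fold degenerate subspace spanned by states ψ_i, one forms the force matrix and takes its eigenvalues (the eigenforces), which match the slopes of the energy branches into which the degenerate level splits.

```latex
% Force matrix over the degenerate subspace (i, j = 1, ..., g):
F_{ij}(\lambda) = -\left\langle \psi_i \left| \frac{\partial \hat{H}(\lambda)}{\partial \lambda} \right| \psi_j \right\rangle
% Only its eigenvalues, the eigenforces f_k, are well defined:
F\,\mathbf{c}^{(k)} = f_k\,\mathbf{c}^{(k)}, \qquad
f_k = -\frac{\partial E_k}{\partial \lambda}
```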
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altmeyer, Michaela; Guterding, Daniel; Hirschfeld, P. J.
2016-12-21
Within a multiorbital Hubbard model description of superconductivity, a widely used matrix formulation of the superconducting pairing interaction is designed to treat spin, charge, and orbital fluctuations within a random phase approximation (RPA). In terms of Feynman diagrams, this takes into account particle-hole ladder and bubble contributions, as expected. It turns out, however, that this matrix formulation also generates additional terms which have the diagrammatic structure of vertex corrections. We examine these terms and discuss the relationship between the matrix-RPA superconducting pairing interaction and the Feynman diagrams that it sums.
Feynman rules for the Standard Model Effective Field Theory in R_ξ-gauges
NASA Astrophysics Data System (ADS)
Dedes, A.; Materkowska, W.; Paraskevas, M.; Rosiek, J.; Suxho, K.
2017-06-01
We assume that New Physics effects are parametrized within the Standard Model Effective Field Theory (SMEFT) written in a complete basis of gauge invariant operators up to dimension 6, commonly referred to as "Warsaw basis". We discuss all steps necessary to obtain a consistent transition to the spontaneously broken theory and several other important aspects, including the BRST-invariance of the SMEFT action for linear R_ξ-gauges. The final theory is expressed in a basis characterized by SM-like propagators for all physical and unphysical fields. The effect of the non-renormalizable operators appears explicitly in triple or higher multiplicity vertices. In this mass basis we derive the complete set of Feynman rules, without resorting to any simplifying assumptions such as baryon-, lepton-number or CP conservation. As it turns out, for most SMEFT vertices the expressions are reasonably short, with a noticeable exception of those involving 4, 5 and 6 gluons. We have also supplemented our set of Feynman rules, given in an appendix here, with a publicly available Mathematica code working with the FeynRules package and producing output which can be integrated with other symbolic algebra or numerical codes for automatic SMEFT amplitude calculations.
Analysis of 20 magnetic clouds at 1 AU during a solar minimum
NASA Astrophysics Data System (ADS)
Gulisano, A. M.; Dasso, S.; Mandrini, C. H.; Démoulin, P.
We study 20 magnetic clouds observed in situ by the Wind spacecraft at the Lagrangian point L1 from 22 August 1995 to 7 November 1997. In previous works, assuming a cylindrical symmetry for the local magnetic configuration and a satellite trajectory crossing the axis of the cloud, we obtained their orientations using a minimum variance analysis. In this work we compute the orientations and magnetic configurations using a non-linear simultaneous fit of the geometric and physical parameters of a linear force-free model, including the possibility of a non-zero impact parameter. We quantify global magnitudes such as the relative magnetic helicity per unit length and compare the values found with the two methods (minimum variance and the simultaneous fit). FULL TEXT IN SPANISH
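The minimum variance analysis used in the earlier works diagonalizes the magnetic variance matrix of the measured field; a small Python sketch on synthetic data follows. Identifying the intermediate-variance eigenvector with the cloud axis is the usual convention for a near-axis crossing, stated here as an assumption rather than a result of this paper.

```python
import numpy as np

def minimum_variance_frame(B):
    """Minimum variance analysis (MVA): eigen-decomposition of the
    magnetic variance matrix M_ij = <B_i B_j> - <B_i><B_j>.
    Returns eigenvalues in ascending order and matching eigenvectors."""
    M = np.cov(np.asarray(B), rowvar=False)
    evals, evecs = np.linalg.eigh(M)
    return evals, evecs

# Synthetic flux-rope-like field rotation, for illustration only
rng = np.random.default_rng(3)
t = np.linspace(-1.0, 1.0, 500)
B = np.column_stack([np.cos(np.pi * t / 2),   # axial (unipolar) component
                     np.sin(np.pi * t / 2),   # rotating poloidal component
                     0.2 * np.ones_like(t)])  # weak constant component
evals, evecs = minimum_variance_frame(B + rng.normal(0, 0.02, B.shape))

print("eigenvalue ratios:", evals / evals.max())
# For a cylindrically symmetric cloud crossed near its axis, the
# intermediate-variance eigenvector is conventionally taken as the axis.
print("axis estimate:", evecs[:, 1])
```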
Heritability of boldness and aggressiveness in the zebrafish.
Ariyomo, Tolulope O; Carter, Mauricio; Watt, Penelope J
2013-03-01
Behavioural traits that are consistent over time and in different contexts are often referred to as personality traits. These traits influence fitness because they play a major role in foraging, reproduction and survival, and so it is assumed that they have little or no additive genetic variance and, consequently, low heritability because, theoretically, they are under strong selection. Boldness and aggressiveness are two personality traits that have been shown to affect fitness. By crossing single males to multiple females, we estimated the heritability of boldness and aggressiveness in the zebrafish, Danio rerio. The additive genetic variance was statistically significant for both traits and the heritability estimates (95 % confidence intervals) for boldness and aggressiveness were 0.76 (0.49, 0.90) and 0.36 (0.10, 0.72) respectively. Furthermore, there were significant maternal effects accounting for 18 and 9 % of the proportion of phenotypic variance in boldness and aggressiveness respectively. This study shows that there is a significant level of genetic variation in this population that would allow these traits to evolve in response to selection.
A note on the misuses of the variance test in meteorological studies
NASA Astrophysics Data System (ADS)
Hazra, Arnab; Bhattacharya, Sourabh; Banik, Pabitra; Bhattacharya, Sabyasachi
2017-12-01
Stochastic modeling of rainfall data is an important area in meteorology. The gamma distribution is a widely used probability model for non-zero rainfall. Typically the choice of the distribution for such meteorological studies is based on two goodness-of-fit tests—the Pearson's Chi-square test and the Kolmogorov-Smirnov test. Inspired by the index of dispersion introduced by Fisher (Statistical methods for research workers. Hafner Publishing Company Inc., New York, 1925), Mooley (Mon Weather Rev 101:160-176, 1973) proposed the variance test as a goodness-of-fit measure in this context and a number of researchers have implemented it since then. We show that the asymptotic distribution of the test statistic for the variance test is generally not comparable to any central Chi-square distribution and hence the test is erroneous. We also describe a method for checking the validity of the asymptotic distribution for a class of distributions. We implement the erroneous test on some simulated, as well as real datasets and demonstrate how it leads to some wrong conclusions.
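The paper's central claim, that the variance-test statistic is not calibrated against the chi-square reference when the data are gamma rather than Poisson, is easy to probe numerically. A hedged Python sketch with illustrative gamma parameters (not any dataset from the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def dispersion_stat(x):
    """Fisher-style index-of-dispersion statistic used by the variance test."""
    return ((x - x.mean())**2).sum() / x.mean()

n, reps = 50, 20_000
# Rainfall-like gamma samples (shape and scale are illustrative choices)
d_gamma = np.array([dispersion_stat(rng.gamma(2.0, 3.0, n))
                    for _ in range(reps)])

# Compare the empirical upper tail with the chi-square(n-1) reference the
# variance test assumes; the large mismatch illustrates the miscalibration.
crit = stats.chi2.ppf(0.95, df=n - 1)
print("nominal size 0.05, actual rejection rate:", (d_gamma > crit).mean())
```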
Proyer, René T; Wellenzohn, Sara; Gander, Fabian; Ruch, Willibald
2015-03-01
Robust evidence exists that positive psychology interventions are effective in enhancing well-being and ameliorating depression. Comparatively little is known about the conditions under which they work best. Models describing characteristics that impact the effectiveness of positive interventions typically contain features of the person, of the activity, and the fit between the two. This study focuses on indicators of the person × intervention fit in predicting happiness and depressive symptoms 3.5 years after completion of the intervention. A sample of 165 women completed measures for happiness and depressive symptoms before and about 3.5 years after completion of a positive intervention (random assignment to one out of nine interventions, which were aggregated for the analyses). Four fit indicators were assessed: Preference; continued practice; effort; and early reactivity. Three out of four person × intervention fit indicators were positively related to happiness or negatively related to depression when controlled for the pretest scores. Together, they explained 6 per cent of the variance in happiness, and 10 per cent of the variance of depressive symptoms. Most tested indicators of a person × intervention fit are robust predictors of happiness and depressive symptoms-even after 3.5 years. They might serve for an early estimation of the effectiveness of a positive intervention. © 2014 The International Association of Applied Psychology.
Tinker, M. Tim; Estes, James A.; Staedler, Michelle; Bodkin, James L.; Tinker, M. Tim; Estes, James A.; Ralls, Katherine; Williams, Terrie M.; Jessup, David A.; Costa, Daniel P.
2006-01-01
Longitudinal foraging data collected from 60 sea otters implanted with VHF radio transmitters at two study sites in Central California over a three-year period demonstrated even greater individual dietary specialization than in previous studies, with only 54% dietary overlap between individuals and the population. Multivariate statistical analyses indicated that individual diets could be grouped into three general "diet types" representing distinct foraging specializations. Type 1 specialists consumed large size prey but had low dive efficiency, Type 2 specialists consumed small to medium size prey with high dive efficiency, and Type 3 specialists consumed very small prey (mainly snails) with very high dive efficiency. The mean rate of energy gain for the population as a whole was low when compared to other sea otter populations in Alaska but showed a high degree of within- and between-individual variation, much of which was accounted for by the three foraging strategies. Type 1 specialists had the highest mean energy gain but also the highest within-individual variance in energy gain. Type 2 specialists had the lowest mean energy gain but also the lowest variance. Type 3 specialists had an intermediate mean and variance. All three strategies resulted in very similar probabilities of exceeding a critical rate of energy gain on any given day. Correlational selection may help maintain multiple foraging strategies in the population: a fitness surface (using mean rate of energy gain as a proxy for fitness) fit to the first two principal components of foraging behavior suggested that the three foraging strategies occupy separate fitness peaks. Food limitation is likely an important ultimate factor restricting population growth in the center of the population's range in California, although the existence of alternative foraging strategies results in different impacts of food limitation on individuals and thus may obscure expected patterns of density dependence.
NASA Astrophysics Data System (ADS)
Chakraborty, Arup
No medical procedure has saved more lives than vaccination. But today some pathogens have evolved that defy successful vaccination using the empirical paradigms pioneered by Pasteur and Jenner. One characteristic of many pathogens for which successful vaccines do not exist is that they present themselves in various guises. HIV is an extreme example because of its high mutability. This highly mutable virus can evade natural or vaccine-induced immune responses, often by mutating at multiple sites linked by compensatory interactions. I will first describe how, by bringing to bear ideas from statistical physics (e.g., maximum entropy models, Hopfield models, Feynman variational theory) together with in vitro experiments and clinical data, the fitness landscape of HIV is beginning to be defined, with explicit account taken of collective mutational pathways. I will describe how this knowledge can be harnessed for vaccine design. Finally, I will describe how ideas at the intersection of evolutionary biology, immunology, and statistical physics can help guide the design of strategies that may be able to induce broadly neutralizing antibodies.
ERIC Educational Resources Information Center
Tipton, Elizabeth; Pustejovsky, James E.
2015-01-01
Randomized experiments are commonly used to evaluate the effectiveness of educational interventions. The goal of the present investigation is to develop small-sample corrections for multiple contrast hypothesis tests (i.e., F-tests) such as the omnibus test of meta-regression fit or a test for equality of three or more levels of a categorical…
Barnett, Lisa M; Morgan, Philip J; van Beurden, Eric; Beard, John R
2008-01-01
Background: The purpose of this paper was to investigate whether perceived sports competence mediates the relationship between childhood motor skill proficiency and subsequent adolescent physical activity and fitness. Methods: In 2000, children's motor skill proficiency was assessed as part of a school-based physical activity intervention. In 2006/07, participants were followed up as part of the Physical Activity and Skills Study and completed assessments for perceived sports competence (Physical Self-Perception Profile), physical activity (Adolescent Physical Activity Recall Questionnaire) and cardiorespiratory fitness (Multistage Fitness Test). Structural equation modelling techniques were used to determine whether perceived sports competence mediated between childhood object control skill proficiency (composite score of kick, catch and overhand throw), and subsequent adolescent self-reported time in moderate-to-vigorous physical activity and cardiorespiratory fitness. Results: Of 928 original intervention participants, 481 were located in 28 schools and 276 (57%) were assessed with at least one follow-up measure. Slightly more than half were female (52.4%) with a mean age of 16.4 years (range 14.2 to 18.3 yrs). Relevant assessments were completed by 250 (90.6%) students for the Physical Activity Model and 227 (82.3%) for the Fitness Model. Both hypothesised mediation models had a good fit to the observed data, with the Physical Activity Model accounting for 18% (R2 = 0.18) of physical activity variance and the Fitness Model accounting for 30% (R2 = 0.30) of fitness variance. Sex did not act as a moderator in either model. Conclusion: Developing a high perceived sports competence through object control skill development in childhood is important for both boys and girls in determining adolescent physical activity participation and fitness. Our findings highlight the need for interventions to target and improve the perceived sports competence of youth. PMID:18687148
Deleterious Mutations, Apparent Stabilizing Selection and the Maintenance of Quantitative Variation
Kondrashov, A. S.; Turelli, M.
1992-01-01
Apparent stabilizing selection on a quantitative trait that is not causally connected to fitness can result from the pleiotropic effects of unconditionally deleterious mutations, because as N. Barton noted, "... individuals with extreme values of the trait will tend to carry more deleterious alleles ...." We use a simple model to investigate the dependence of this apparent selection on the genomic deleterious mutation rate, U; the equilibrium distribution of K, the number of deleterious mutations per genome; and the parameters describing directional selection against deleterious mutations. Unlike previous analyses, we allow for epistatic selection against deleterious alleles. For various selection functions and realistic parameter values, the distribution of K, the distribution of breeding values for a pleiotropically affected trait, and the apparent stabilizing selection function are all nearly Gaussian. The additive genetic variance for the quantitative trait is kQa², where k is the average number of deleterious mutations per genome, Q is the proportion of deleterious mutations that affect the trait, and a² is the variance of pleiotropic effects for individual mutations that do affect the trait. In contrast, when the trait is measured in units of its additive standard deviation, the apparent fitness function is essentially independent of Q and a²; and β, the intensity of selection, measured as the ratio of additive genetic variance to the "variance" of the fitness curve, is very close to s = U/k, the selection coefficient against individual deleterious mutations at equilibrium. Therefore, this model predicts appreciable apparent stabilizing selection if s exceeds about 0.03, which is consistent with various data. However, the model also predicts that β must equal V_m/V_G, the ratio of new additive variance for the trait introduced each generation by mutation to the standing additive variance. Most, although not all, estimates of this ratio imply apparent stabilizing selection weaker than generally observed. A qualitative argument suggests that even when direct selection is responsible for most of the selection observed on a character, it may be essentially irrelevant to the maintenance of variation for the character by mutation-selection balance. Simple experiments can indicate the fraction of observed stabilizing selection attributable to the pleiotropic effects of deleterious mutations. PMID:1427047
Neutrino oscillation processes in a quantum-field-theoretical approach
NASA Astrophysics Data System (ADS)
Egorov, Vadim O.; Volobuev, Igor P.
2018-05-01
It is shown that neutrino oscillation processes can be consistently described in the framework of quantum field theory using only the plane wave states of the particles. Namely, the oscillating electron survival probabilities in experiments with neutrino detection by charged-current and neutral-current interactions are calculated in the quantum field-theoretical approach to neutrino oscillations based on a modification of the Feynman propagator in the momentum representation. The approach is most similar to the standard Feynman diagram technique. It is found that the oscillating distance-dependent probabilities of detecting an electron in experiments with neutrino detection by charged-current and neutral-current interactions exactly coincide with the corresponding probabilities calculated in the standard approach.
NASA Technical Reports Server (NTRS)
Straton, Jack C.
1989-01-01
The Fourier transform of the multicenter product of N 1s hydrogenic orbitals and M Coulomb or Yukawa potentials is given as an (M+N-1)-dimensional Feynman integral with external momenta and shifted coordinates. This is accomplished through the introduction of an integral transformation, in addition to the standard Feynman transformation for the denominators of the momentum representation of the terms in the product, which moves the resulting denominator into an exponential. This allows the angular dependence of the denominator to be combined with the angular dependence in the plane waves.
Bosonic Loop Diagrams as Perturbative Solutions of the Classical Field Equations in ϕ4-Theory
NASA Astrophysics Data System (ADS)
Finster, Felix; Tolksdorf, Jürgen
2012-05-01
Solutions of the classical ϕ4-theory in Minkowski space-time are analyzed in a perturbation expansion in the nonlinearity. Using the language of Feynman diagrams, the solution of the Cauchy problem is expressed in terms of tree diagrams which involve the retarded Green's function and have one outgoing leg. In order to obtain general tree diagrams, we set up a "classical measurement process" in which a virtual observer of a scattering experiment modifies the field and detects suitable energy differences. By adding a classical stochastic background field, we even obtain all loop diagrams. The expansions are compared with the standard Feynman diagrams of the corresponding quantum field theory.
Non-planar one-loop Parke-Taylor factors in the CHY approach for quadratic propagators
NASA Astrophysics Data System (ADS)
Ahmadiniaz, Naser; Gomez, Humberto; Lopez-Arcos, Cristhiam
2018-05-01
In this work we have studied the Kleiss-Kuijf relations for the recently introduced Parke-Taylor factors at one loop in the CHY approach, which reproduce quadratic Feynman propagators. By doing this, we were able to identify the non-planar one-loop Parke-Taylor factors. In order to check that these new factors can, in fact, describe non-planar amplitudes, we applied them to the bi-adjoint Φ³ theory. As a byproduct, we found a new type of graphs that we call non-planar CHY-graphs. These graphs encode all the information for the subleading order at one loop, and there is no equivalent of these in the Feynman formalism.
Genetic diversity in the interference selection limit.
Good, Benjamin H; Walczak, Aleksandra M; Neher, Richard A; Desai, Michael M
2014-03-01
Pervasive natural selection can strongly influence observed patterns of genetic variation, but these effects remain poorly understood when multiple selected variants segregate in nearby regions of the genome. Classical population genetics fails to account for interference between linked mutations, which grows increasingly severe as the density of selected polymorphisms increases. Here, we describe a simple limit that emerges when interference is common, in which the fitness effects of individual mutations play a relatively minor role. Instead, similar to models of quantitative genetics, molecular evolution is determined by the variance in fitness within the population, defined over an effectively asexual segment of the genome (a "linkage block"). We exploit this insensitivity in a new "coarse-grained" coalescent framework, which approximates the effects of many weakly selected mutations with a smaller number of strongly selected mutations that create the same variance in fitness. This approximation generates accurate and efficient predictions for silent site variability when interference is common. However, these results suggest that there is reduced power to resolve individual selection pressures when interference is sufficiently widespread, since a broad range of parameters possess nearly identical patterns of silent site variability.
Genotype to Phenotype Mapping of the E. coli lac Promoter
NASA Astrophysics Data System (ADS)
Otwinowski, Jakub; Nemenman, Ilya
2014-03-01
Genotype-to-phenotype maps and the related fitness landscapes that include epistatic interactions are difficult to measure because of their high-dimensional structure. Here we construct such a map using the recently collected corpora of high-throughput sequence data from the 75-base-pair mutagenized E. coli lac promoter region, where each sequence is associated with induced transcriptional activity measured by a fluorescent reporter. We find that the additive (non-epistatic) contributions of individual mutations account for about two-thirds of the explainable phenotype variance, while pairwise epistasis explains about 7% of the variance for the full mutagenized sequence and about 15% for the subsequence associated with protein binding sites. Surprisingly, there is no evidence for third-order epistatic contributions, and our inferred fitness landscape is essentially single peaked, with a small amount of antagonistic epistasis. We identify transcription factor (CRP) and RNA polymerase binding sites in the promoter region and their interactions. We conclude with a cautionary note that inferred properties of fitness landscapes may be severely influenced by biases in the sequence data. Funded in part by HFSP and James S. McDonnell Foundation.
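The additive-versus-epistatic variance decomposition described above can be imitated on synthetic data by regressing the phenotype on one-hot-encoded sequences; the following Python sketch is a toy stand-in (random sequences and a planted pairwise term), not the authors' inference pipeline.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-in for mutagenized-promoter data: random sequences over
# {A,C,G,T} with a hidden additive + single pairwise-epistatic phenotype.
L, n = 20, 5000
seqs = rng.integers(0, 4, size=(n, L))
X_add = np.zeros((n, L * 4))
X_add[np.arange(n)[:, None], np.arange(L) * 4 + seqs] = 1.0  # one-hot encoding

beta = rng.normal(0, 1, L * 4)
y = (X_add @ beta
     + 0.5 * (seqs[:, 3] == 1) * (seqs[:, 7] == 2)  # planted pairwise term
     + rng.normal(0, 0.3, n))                       # measurement noise

# Additive (non-epistatic) fit by least squares; the fraction of phenotype
# variance it explains parallels the "about two-thirds" quoted above.
coef, *_ = np.linalg.lstsq(X_add, y, rcond=None)
resid = y - X_add @ coef
print("additive R^2:", 1 - resid.var() / y.var())
```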
Modeling Invasion Dynamics with Spatial Random-Fitness Due to Micro-Environment
Manem, V. S. K.; Kaveh, K.; Kohandel, M.; Sivaloganathan, S.
2015-01-01
Numerous experimental studies have demonstrated that the microenvironment is a key regulator influencing the proliferative and migrative potentials of species. Spatial and temporal disturbances lead to adverse and hazardous microenvironments for cellular systems, which is reflected in the phenotypic heterogeneity within the system. In this paper, we study the effect of the microenvironment on the invasive capability of species, or mutants, on structured grids (in particular, square lattices) under the influence of site-dependent random proliferation in addition to a migration potential. We discuss both continuous and discrete fitness distributions. Our results suggest that the invasion probability is negatively correlated with the variance of the fitness distribution of mutants (for both advantageous and neutral mutants) in the absence of migration of both types of cells. A similar behaviour is observed even in the presence of a random fitness distribution of host cells in the system with neutral fitness rate. In the case of a bimodal distribution, we observe zero invasion probability until the system reaches a (specific) proportion of advantageous phenotypes. Also, we find that the migrative potential amplifies the invasion probability as the variance of fitness of mutants increases in the system, which is the exact opposite in the absence of migration. Our computational framework captures harsh microenvironmental conditions through quenched random fitness distributions and migration of cells, and our analysis shows that they play an important role in the invasion dynamics of several biological systems such as bacterial micro-habitats, epithelial dysplasia, and metastasis. We believe that our results may lead to more experimental studies, which can in turn provide further insights into the role and impact of heterogeneous environments on invasion dynamics. PMID:26509572
Zhai, Xuetong; Chakraborty, Dev P
2017-06-01
The objective was to design and implement a bivariate extension of the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets (possibly degenerate) with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points, and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago, utilizes a bivariate extension of the binormal model, implemented in the CORROC2 software, which yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit-variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit-variance peaks, one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero, with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases the bivariate extension is a unit-variance bivariate normal distribution centered at (0,0) with a specified correlation ρ_1; (b) for diseased cases the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition (contributing two peaks), and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language that yields parameter estimates, the covariance matrix of the parameters, and other statistics. A limited simulation validation of the method was performed. CORCBM and CORROC2 were applied to two datasets containing nine readers each contributing paired interpretations. CORCBM successfully fitted the data for all readers, whereas CORROC2 failed to fit a degenerate dataset. All fits were visually reasonable. All CORCBM fits were proper, whereas all CORROC2 fits were improper. CORCBM and CORROC2 were in agreement (a) in declaring only one of the nine readers as having significantly different performances in the two modalities; (b) in estimating higher correlations for diseased cases than for nondiseased ones; and (c) in finding that the intermodality correlation estimates for nondiseased cases were consistent between the two methods. All CORCBM fits yielded a higher area under the curve (AUC) than the CORROC2 fits, consistent with the fact that a proper ROC model like CORCBM is based on a likelihood-ratio-equivalent decision variable and consequently yields higher performance than the binormal-model-based CORROC2. The method gave satisfactory fits to four simulated datasets. CORCBM is a robust method for fitting paired ROC datasets, always yielding proper ROC curves, and able to fit degenerate datasets. © 2017 American Association of Physicists in Medicine.
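The single-condition CBM that the extension builds on has a closed-form ROC curve, TPF(t) = αΦ(μ - t) + (1 - α)Φ(-t) plotted against FPF(t) = Φ(-t); a small Python sketch with illustrative (not fitted) parameter values:

```python
import numpy as np
from scipy.stats import norm

def cbm_roc(mu, alpha, n_points=200):
    """Operating points of the (unpaired) contaminated binormal model:
    nondiseased ~ N(0,1); diseased ~ alpha*N(mu,1) + (1-alpha)*N(0,1)."""
    t = np.linspace(-5.0, mu + 5.0, n_points)   # decision thresholds
    fpf = norm.sf(t)                            # P(rating > t | nondiseased)
    tpf = alpha * norm.sf(t - mu) + (1 - alpha) * norm.sf(t)
    return fpf, tpf

fpf, tpf = cbm_roc(mu=2.0, alpha=0.7)
auc = np.trapz(tpf[::-1], fpf[::-1])            # numerical AUC
# analytic check: AUC = alpha * Phi(mu / sqrt(2)) + (1 - alpha) / 2
print(f"AUC ~ {auc:.3f}")
```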
Yang, Hui-Ju; Chen, Kuei-Min; Chen, Ming-De; Wu, Hui-Chuan; Chang, Wen-Jane; Wang, Yueh-Chin; Huang, Hsin-Ting
2015-10-01
The transtheoretical model was applied to promote behavioural change and to test the effects of a group senior elastic band exercise programme on the functional fitness of community older adults in the contemplation and preparation stages of behavioural change. Forming regular exercise habits is challenging for older adults. The transtheoretical model emphasizes using different strategies at various stages to facilitate behavioural change. A quasi-experimental design with pre-test and post-tests on two groups was used. Six senior activity centres were randomly assigned to either the experimental or the control group. The data were collected during 2011. A total of 199 participants were recruited and 169 completed the study (experimental group n = 84, control group n = 85). The elastic band exercises were performed for 40 minutes, three times per week for 6 months. The functional fitness of the participants was evaluated at baseline and at the third and sixth month of the intervention. Statistical analyses included a two-way mixed-design analysis of variance, a one-way repeated measures analysis of variance and an analysis of covariance. In the experimental group, all functional fitness indicators changed significantly from pre-test to the post-tests. The experimental group performed better than the control group on all functional fitness indicators after 3 and 6 months of the senior elastic band exercises. The exercise programme provided older adults with appropriate strategies for maintaining functional fitness, which improved significantly after the participants exercised regularly for 6 months. © 2015 John Wiley & Sons Ltd.
Hu, Pingsha; Maiti, Tapabrata
2011-01-01
Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, often mean and variance have certain relationships. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means assuming variances are known. Different methods have been applied to simulated datasets, in which a variety of mean and variance relationships were imposed. The simulation study showed that NPMVS outperformed the other two popular shrinkage estimation methods in some mean-variance relationships; and NPMVS was competitive with the two methods in other relationships. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, has also been analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship between mean and variance. The source code written in R is available from the authors on request. PMID:21611181
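In the spirit of the NPMVS idea above (a smooth non-parametric curve of variance against mean, followed by shrinkage toward the curve), here is a minimal Python sketch; the lowess smoother and the 50/50 shrinkage weight are illustrative choices, not the paper's estimator.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)

# Toy expression matrix (genes x replicates) with variance tied to mean
g, r = 2000, 4
mu = rng.uniform(2, 12, g)
data = rng.normal(mu[:, None], 0.1 + 0.05 * mu[:, None], size=(g, r))

means = data.mean(axis=1)
variances = data.var(axis=1, ddof=1)

# Non-parametric mean-variance curve: lowess fit of variance against mean,
# instead of assuming a specific parametric relationship
order = np.argsort(means)
smooth = sm.nonparametric.lowess(variances[order], means[order],
                                 frac=0.3, return_sorted=False)

# Simple shrinkage of gene-wise variances toward the smoothed trend
# (equal weights here are an illustrative choice, not the NPMVS weights)
shrunk = np.empty(g)
shrunk[order] = 0.5 * variances[order] + 0.5 * smooth
print("raw vs shrunk variance spread:", variances.std(), shrunk.std())
```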
Efficient numerical evaluation of Feynman integrals
NASA Astrophysics Data System (ADS)
Li, Zhao; Wang, Jian; Yan, Qi-Shu; Zhao, Xiaoran
2016-03-01
Feynman loop integrals are a key ingredient in the calculation of higher-order radiative effects, and are responsible for reliable and accurate theoretical predictions. We improve the efficiency of numerical integration in sector decomposition by implementing a quasi-Monte Carlo method associated with the CUDA/GPU technique. For demonstration we present the results for several Feynman integrals up to two loops, in both the Euclidean and the physical kinematic regions, in comparison with those obtained from FIESTA3. It is shown that both planar and non-planar two-loop master integrals in the physical kinematic region can be evaluated in less than half a minute with good accuracy, which makes the direct numerical approach viable for precise investigation of higher-order effects in multi-loop processes, e.g. the next-to-leading-order QCD effect in Higgs pair production via gluon fusion with a finite top quark mass. Supported by the Natural Science Foundation of China (11305179, 11475180), Youth Innovation Promotion Association, CAS, IHEP Innovation (Y4545170Y2), State Key Lab for Electronics and Particle Detectors, Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y4KF061CJ1), and the Cluster of Excellence Precision Physics, Fundamental Interactions and Structure of Matter (PRISMA-EXC 1098).
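The gain from quasi-Monte Carlo sampling can be seen on a tiny finite Feynman-parameter integral, the subtracted one-loop bubble in the Euclidean region; the following Python sketch (illustrative settings, no connection to the FIESTA3 comparison) contrasts a scrambled Sobol estimate with plain Monte Carlo at equal sample size.

```python
import numpy as np
from scipy.stats import qmc

# Finite Feynman-parameter integral used as a test case: the subtracted
# bubble B0(p^2) - B0(0) = -int_0^1 dx ln(1 + x(1-x) p^2/m^2), taken in
# the Euclidean region with p^2/m^2 = 4 (an illustrative choice).
f = lambda x: -np.log1p(4.0 * x * (1.0 - x))

n = 1 << 14                       # Sobol sizes should be powers of two
x_qmc = qmc.Sobol(d=1, scramble=True, seed=7).random(n).ravel()
x_mc = np.random.default_rng(7).random(n)

xr = np.linspace(0.0, 1.0, 200_001)   # dense trapezoid reference value
ref = np.trapz(f(xr), xr)
print("reference :", ref)
print("QMC error :", abs(f(x_qmc).mean() - ref))
print("MC  error :", abs(f(x_mc).mean() - ref))
```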
A white noise approach to the Feynman integrand for electrons in random media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grothaus, M., E-mail: grothaus@mathematik.uni-kl.de; Riemann, F., E-mail: riemann@mathematik.uni-kl.de; Suryawan, H. P., E-mail: suryawan@mathematik.uni-kl.de
2014-01-15
Using the Feynman path integral representation of quantum mechanics it is possible to derive a model of an electron in a random system containing dense and weakly coupled scatterers [see S. F. Edwards and Y. B. Gulyaev, "The density of states of a highly impure semiconductor," Proc. Phys. Soc. 83, 495-496 (1964)]. The main goal of this paper is to give a mathematically rigorous realization of the corresponding Feynman integrand in dimension one based on the theory of white noise analysis. We refine and apply a Wick formula for the product of a square-integrable function with Donsker's delta functions and use a method of complex scaling. As an essential part of the proof we also establish the existence of the exponential of the self-intersection local times of a one-dimensional Brownian bridge. As a result we obtain a neat formula for the propagator with identical start and end point. Thus, we obtain a well-defined mathematical object which is used to calculate the density of states [see, e.g., S. F. Edwards and Y. B. Gulyaev, "The density of states of a highly impure semiconductor," Proc. Phys. Soc. 83, 495-496 (1964)].
Li, Yan; Hughes, Jan N.; Kwok, Oi-man; Hsu, Hsien-Yuan
2012-01-01
This study investigated the construct validity of measures of teacher-student support in a sample of 709 ethnically diverse second and third grade academically at-risk students. Confirmatory factor analysis investigated the convergent and discriminant validities of teacher, child, and peer reports of teacher-student support and child conduct problems. Results supported the convergent and discriminant validity of scores on the measures. Peer reports accounted for the largest proportion of trait variance and non-significant method variance. Child reports accounted for the smallest proportion of trait variance and the largest method variance. A model with two latent factors provided a better fit to the data than a model with one factor, providing further evidence of the discriminant validity of measures of teacher-student support. Implications for research, policy, and practice are discussed. PMID:21767024
Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin
2011-03-01
Noise characterization through estimation of the noise power spectrum (NPS) is a central component of the evaluation of digital x-ray systems. Extensive work has been conducted to achieve accurate and precise measurement of the NPS. One approach to improve the accuracy of the NPS measurement is to reduce the statistical variance of the NPS results by involving more data samples. However, this method is based on the assumption that the noise in a radiographic image arises from stochastic processes. In practical data, artifacts always superimpose on the stochastic noise as low-frequency background trends and prevent accurate NPS estimation. The purpose of this study was to investigate an appropriate background detrending technique to improve the accuracy of NPS estimation for digital x-ray systems. In order to identify the optimal background detrending technique for NPS estimation, four methods for artifact removal were quantitatively studied and compared: (1) subtraction of a low-pass-filtered version of the image, (2) subtraction of a 2-D first-order fit to the image, (3) subtraction of a 2-D second-order polynomial fit to the image, and (4) subtraction of two uniform exposure images. In addition, background trend removal was separately applied within the original region of interest or its partitioned sub-blocks for all four methods. The performance of the background detrending techniques was compared according to the statistical variance of the NPS results and low-frequency systematic rise suppression. Among the four methods, subtraction of a 2-D second-order polynomial fit to the image was most effective in low-frequency systematic rise suppression and variance reduction for the NPS estimate on the authors' digital x-ray system. Subtraction of a low-pass-filtered version of the image increased the NPS variance at low-frequency components because of the side-lobe effects in the frequency response of the boxcar filtering function. Subtracting two uniform exposure images gave the worst result for the smoothness of the NPS curve, although it was effective in low-frequency systematic rise suppression. Subtraction of a 2-D first-order fit to the image was also effective for background detrending, but less so than subtraction of a 2-D second-order polynomial fit on the authors' digital x-ray system. As a result of this study, the authors verified that it is necessary and feasible to obtain a better NPS estimate by appropriate background trend removal. Subtraction of a 2-D second-order polynomial fit to the image was the most appropriate technique for background detrending, without consideration of processing time.
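The winning detrending step above is easy to sketch; the following Python fragment subtracts a least-squares 2-D second-order polynomial surface from each ROI before forming a simple NPS estimate. The normalization convention and the simulated ROIs are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np

def detrend_2d_poly2(roi):
    """Subtract a least-squares 2-D second-order polynomial surface."""
    ny, nx = roi.shape
    y, x = np.mgrid[0:ny, 0:nx]
    x = x.ravel(); y = y.ravel()
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    coef, *_ = np.linalg.lstsq(A, roi.ravel(), rcond=None)
    return roi - (A @ coef).reshape(ny, nx)

def nps_2d(rois, pixel_pitch):
    """Average |DFT|^2 of detrended ROIs; one common NPS normalization."""
    ny, nx = rois[0].shape
    spectra = [np.abs(np.fft.fft2(detrend_2d_poly2(r))) ** 2 for r in rois]
    return np.mean(spectra, axis=0) * pixel_pitch**2 / (nx * ny)

rng = np.random.default_rng(2)
rois = [rng.normal(100.0, 5.0, (64, 64)) for _ in range(32)]
nps = nps_2d(rois, pixel_pitch=0.1)  # mm; hypothetical detector pitch
```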
Evaluation of Magnetic Diagnostics for MHD Equilibrium Reconstruction of LHD Discharges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sontag, Aaron C; Hanson, James D.; Lazerson, Sam
2011-01-01
Equilibrium reconstruction is the process of determining the set of parameters of an MHD equilibrium that minimize the difference between expected and experimentally observed signals. This is routinely performed in axisymmetric devices, such as tokamaks, and the reconstructed equilibrium solution is then the basis for analysis of stability and transport properties. The V3FIT code [1] has been developed to perform equilibrium reconstruction in cases where axisymmetry cannot be assumed, such as in stellarators. The present work is focused on using V3FIT to analyze plasmas in the Large Helical Device (LHD) [2], a superconducting, heliotron-type device with over 25 MW of heating power that is capable of achieving both high beta (≈5%) and high density (>1 × 10²¹ m⁻³). This high performance, as well as the ability to drive tens of kiloamperes of toroidal plasma current, leads to deviations of the equilibrium state from the vacuum flux surfaces. This initial study examines the effectiveness of using magnetic diagnostics as the observed signals in reconstructing experimental plasma parameters for LHD discharges. V3FIT uses the VMEC [3] 3D equilibrium solver to calculate an initial equilibrium solution with closed, nested flux surfaces based on user-specified plasma parameters. This equilibrium solution is then used to calculate the expected signals for specified diagnostics. The differences between these expected signal values and the observed values provide a starting χ² value. V3FIT then varies all of the fit parameters independently, calculating a new equilibrium and corresponding χ² for each variation. A quasi-Newton algorithm [1] is used to find the path in parameter space that leads to a minimum in χ². Effective diagnostic signals must vary in a predictable manner with the variations of the plasma parameters, and this signal variation must be of sufficient amplitude to be resolved from the signal noise. Signal effectiveness can be defined for a specific signal and specific reconstruction parameter as the dimensionless fractional reduction in the posterior parameter variance with respect to the signal variance. Here, σ_i^sig is the variance of the ith signal and σ_j^param is the posterior variance of the jth fit parameter. The sum of all signal effectiveness values for a given reconstruction parameter is normalized to one. This quantity will be used to determine signal effectiveness for various reconstruction cases. The next section will examine the variation of the expected signals with changes in plasma pressure, and the following section will show results for reconstructing model plasmas using these signals.
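The equation that evidently accompanied the "Here, σ_i^sig ..." sentence was lost in extraction; one plausible reconstruction, consistent with the stated definition (a dimensionless fractional reduction of posterior parameter variance, normalized to sum to one over signals), is

```latex
e_{ij} \;\propto\; \frac{\sigma_i^{\mathrm{sig}}}{\sigma_j^{\mathrm{param}}}\,
\frac{\partial \sigma_j^{\mathrm{param}}}{\partial \sigma_i^{\mathrm{sig}}},
\qquad \sum_i e_{ij} \;=\; 1 \quad \text{for each parameter } j,
```

where the proportionality constant enforces the normalization; consult the V3FIT reference [1] for the exact form.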
The association between motor skill competence and physical fitness in young adults.
Stodden, David; Langendorfer, Stephen; Roberton, Mary Ann
2009-06-01
We examined the relationship between competence in three fundamental motor skills (throwing, kicking, and jumping) and six measures of health-related physical fitness in young adults (ages 18-25). We assessed motor skill competence using product scores of maximum kicking and throwing speed and maximum jumping distance. A factor analysis indicated the 12-min run/walk, percent body fat, curl-ups, grip strength, and maximum leg press strength all loaded on one factor defining the construct of "overall fitness." Multiple regression analyses indicated that the product scores for jumping (74%), kicking (58%), and throwing (59%) predicted 79% of the variance in overall fitness. Gender was not a significant predictor of fitness. Results suggest that developing motor skill competence may be fundamental in developing and maintaining adequate physical fitness into adulthood. These data represent the strongest evidence to date on the relationship between motor skill competence and physical fitness.
Optimization of data analysis for the in vivo neutron activation analysis of aluminum in bone.
Mohseni, H K; Matysiak, W; Chettle, D R; Byun, S H; Priest, N; Atanackovic, J; Prestwich, W V
2016-10-01
An existing system at McMaster University has been used for the in vivo measurement of aluminum in human bone. Precise and detailed analysis approaches are necessary to determine the aluminum concentration because of the low levels of aluminum found in the bone and the challenges associated with its detection. Phantoms resembling the composition of the human hand with varying concentrations of aluminum were made for testing the system prior to the application to human studies. A spectral decomposition model and a photopeak fitting model involving the inverse-variance weighted mean and a time-dependent analysis were explored to analyze the results and determine the model with the best performance and lowest minimum detection limit. The results showed that the spectral decomposition and the photopeak fitting model with the inverse-variance weighted mean both provided better results compared to the other methods tested. The spectral decomposition method resulted in a marginally lower detection limit (5μg Al/g Ca) compared to the inverse-variance weighted mean (5.2μg Al/g Ca), rendering both equally applicable to human measurements. Copyright © 2016 Elsevier Ltd. All rights reserved.
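A minimal sketch of the inverse-variance weighted mean used in the photopeak fitting model above; the numbers are placeholders, not the paper's measurements.

```python
import numpy as np

def inverse_variance_mean(values, sigmas):
    """Combine independent estimates x_i +- sigma_i by inverse-variance
    weighting; returns the weighted mean and its standard error."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mean = np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)
    return mean, 1.0 / np.sqrt(np.sum(w))

# Hypothetical repeated determinations of an Al/Ca ratio (ug Al / g Ca).
m, se = inverse_variance_mean([4.8, 5.6, 5.1], [0.9, 1.2, 0.7])
print(f"{m:.2f} +- {se:.2f}")
```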
English, Sinead; Huchard, Elise; Nielsen, Johanna F; Clutton-Brock, Tim H
2013-01-01
In polygynous species, variance in reproductive success is higher in males than females. There is consequently stronger selection for competitive traits in males and early growth can have a greater influence on later fitness in males than in females. As yet, little is known about sex differences in the effect of early growth on subsequent breeding success in species where variance in reproductive success is higher in females than males, and competitive traits are under stronger selection in females. Greater variance in reproductive success has been documented in several singular cooperative breeders. Here, we investigated consequences of early growth for later reproductive success in wild meerkats. We found that, despite the absence of dimorphism, females who exhibited faster growth until nutritional independence were more likely to become dominant, whereas early growth did not affect dominance acquisition in males. Among those individuals who attained dominance, there was no further influence of early growth on dominance tenure or lifetime reproductive success in males or females. These findings suggest that early growth effects on competitive abilities and fitness may reflect the intensity of intrasexual competition even in sexually monomorphic species. PMID:24340181
Sex, recombination, and reproductive fitness: an experimental study using Paramecium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nyberg, D.
1982-08-01
The effects of sex and recombination on reproductive fitness were measured using five wild stocks of Paramecium primaurelia. Among the wild stocks there were highly significant differences in growth rates. No hybrid had as low a fitness as the least fit parental stock. Recombination produced genotypes of higher fitness than those of either parent only in the cross between the two stocks of lowest fitness. The increase in variance of fitness as a result of recombination was almost exclusively attributable to the generation of lines with low fitness. The fitness consequences of sexuality and mate choice were stock specific; some individuals left the most descendants by inbreeding, others by outcrossing. For most crosses the short-term advantages of sex, if any, accrue from the fusion of different gametes (hybrid vigor) and not from recombination. Since the homozygous genotype with the highest fitness left the most progeny by inbreeding (no recombination), the persistence of conjugation in P. primaurelia is paradoxical. (JMT)
The Impact of Prior Deployment Experience on Civilian Employment After Military Service
2013-03-21
covariates mentioned. Given the exploratory nature of this study, all defined variables were included. Model diagnostic tests were conducted and we … assessed model fit using the Hosmer–Lemeshow goodness-of-fit test. To identify the existence of collinearity, we examined all variance inflation factors … separation, and reason for separation and service branch were tested. Both interactions were significant at p < .10. Three models were built to examine
Convex Regression with Interpretable Sharp Partitions
Petersen, Ashley; Simon, Noah; Witten, Daniela
2016-01-01
We consider the problem of predicting an outcome variable on the basis of a small number of covariates, using an interpretable yet non-additive model. We propose convex regression with interpretable sharp partitions (CRISP) for this task. CRISP partitions the covariate space into blocks in a data-adaptive way, and fits a mean model within each block. Unlike other partitioning methods, CRISP is fit using a non-greedy approach by solving a convex optimization problem, resulting in low-variance fits. We explore the properties of CRISP, and evaluate its performance in a simulation study and on a housing price data set. PMID:27635120
Cruz-Ramírez, Nicandro; Acosta-Mesa, Héctor Gabriel; Mezura-Montes, Efrén; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo de Jesús; Barrientos-Martínez, Rocío Erandi; Gutiérrez-Fragoso, Karina; Nava-Fernández, Luis Alonso; González-Gaspar, Patricia; Novoa-del-Toro, Elva María; Aguilera-Rueda, Vicente Josué; Ameca-Alducin, María Yaneli
2014-01-01
The bias-variance dilemma is a well-known and important problem in Machine Learning. It basically relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way so as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need to find methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike’s Information Criterion (AIC) and Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition or tradeoff. Although the graphical interaction between these dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence to recover such interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: that crude MDL naturally selects balanced models in terms of bias-variance, which not necessarily need be the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we also should not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate and the sample size. PMID:24671204
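A bare-bones illustration of the crude (two-part) MDL score discussed above, under the common convention that each free parameter costs (log n)/2 nats; the toy Gaussian model comparison is invented and is not one of the paper's Bayesian-network experiments.

```python
import numpy as np

def crude_mdl(log_likelihood, k, n):
    """Crude two-part MDL: data cost plus (k/2) log n parameter cost."""
    return -log_likelihood + 0.5 * k * np.log(n)

def gauss_loglik(x, mu, sigma):
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - mu) ** 2 / (2 * sigma**2))

# Toy comparison: Gaussian fits with 1 vs. 2 free parameters.
rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 200)
scores = {
    "mu free, sigma fixed": crude_mdl(gauss_loglik(x, x.mean(), 1.0), 1, x.size),
    "mu and sigma free":    crude_mdl(gauss_loglik(x, x.mean(), x.std()), 2, x.size),
}
best = min(scores, key=scores.get)   # lower MDL wins
```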
Canivez, Gary L; Watkins, Marley W; Dombrowski, Stefan C
2017-04-01
The factor structure of the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V; Wechsler, 2014a) standardization sample (N = 2,200) was examined using confirmatory factor analyses (CFA) with maximum likelihood estimation for all reported models from the WISC-V Technical and Interpretation Manual (Wechsler, 2014b). Additionally, alternative bifactor models were examined and variance estimates and model-based reliability estimates (ω coefficients) were provided. Results from analyses of the 16 primary and secondary WISC-V subtests found that all higher-order CFA models with 5 group factors (VC, VS, FR, WM, and PS) produced model specification errors where the Fluid Reasoning factor produced negative variance and were thus judged inadequate. Of the 16 models tested, the bifactor model containing 4 group factors (VC, PR, WM, and PS) produced the best fit. Results from analyses of the 10 primary WISC-V subtests also found the bifactor model with 4 group factors (VC, PR, WM, and PS) produced the best fit. Variance estimates from both 16 and 10 subtest based bifactor models found dominance of general intelligence (g) in accounting for subtest variance (except for PS subtests) and large ω-hierarchical coefficients supporting general intelligence interpretation. The small portions of variance uniquely captured by the 4 group factors and low ω-hierarchical subscale coefficients likely render the group factors of questionable interpretive value independent of g (except perhaps for PS). Present CFA results confirm the EFA results reported by Canivez, Watkins, and Dombrowski (2015); Dombrowski, Canivez, Watkins, and Beaujean (2015); and Canivez, Dombrowski, and Watkins (2015). (PsycINFO Database Record (c) 2017 APA, all rights reserved).
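The model-based reliability estimates mentioned above can be sketched directly from bifactor loadings: ω-hierarchical is the squared sum of general-factor loadings over total variance. The loadings below are made-up placeholders, not WISC-V values, and the formula follows the common Reise-style convention rather than the authors' exact computation.

```python
import numpy as np

def omega_hierarchical(g_loadings, group_loadings, uniquenesses):
    """Omega-h for a bifactor model: variance due to the general factor
    divided by total variance (general + group factors + uniqueness)."""
    gen = np.sum(g_loadings) ** 2
    grp = sum(np.sum(lam) ** 2 for lam in group_loadings)
    return gen / (gen + grp + np.sum(uniquenesses))

# Hypothetical 6-subtest example with two group factors.
g = np.array([0.7, 0.6, 0.65, 0.7, 0.5, 0.55])
groups = [np.array([0.3, 0.35, 0.25]), np.array([0.4, 0.3, 0.35])]
u = 1.0 - g**2 - np.concatenate([grp**2 for grp in groups])
print(round(omega_hierarchical(g, groups, u), 3))
```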
Rathouz, Paul J.; Van Hulle, Carol A.; Lee Rodgers, Joseph; Waldman, Irwin D.; Lahey, Benjamin B.
2009-01-01
Purcell (2002) proposed a bivariate biometric model for testing and quantifying the interaction between latent genetic influences and measured environments in the presence of gene-environment correlation. Purcell's model extends the Cholesky model to include gene-environment interaction. We examine a number of closely-related alternative models that do not involve gene-environment interaction but which may fit the data as well as Purcell's model. Because failure to consider these alternatives could lead to spurious detection of gene-environment interaction, we propose alternative models for testing gene-environment interaction in the presence of gene-environment correlation, including one based on the correlated factors model. In addition, we note mathematical errors in the calculation of effect size via variance components in Purcell's model. We propose a statistical method for deriving and interpreting variance decompositions that are true to the fitted model. PMID:18293078
Correlation energy for elementary bosons: Physics of the singularity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shiau, Shiue-Yuan, E-mail: syshiau@mail.ncku.edu.tw; Combescot, Monique; Chang, Yia-Chung, E-mail: yiachang@gate.sinica.edu.tw
2016-04-15
We propose a compact perturbative approach that reveals the physical origin of the singularity occurring in the density dependence of the correlation energy: like fermions, elementary bosons have a singular correlation energy which comes from the accumulation, through Feynman "bubble" diagrams, of the same non-zero momentum transfer excitations from the free particle ground state, that is, the Fermi sea for fermions and the Bose–Einstein condensate for bosons. This understanding paves the way toward deriving the correlation energy of composite bosons like atomic dimers and semiconductor excitons, by suggesting Shiva diagrams that have similarity with Feynman "bubble" diagrams, the previous elementary boson approaches, which hide this physics, being inappropriate to do so.
Hopf algebras of rooted forests, cocycles, and free Rota-Baxter algebras
NASA Astrophysics Data System (ADS)
Zhang, Tianjie; Gao, Xing; Guo, Li
2016-10-01
The Hopf algebra and the Rota-Baxter algebra are the two algebraic structures underlying the algebraic approach of Connes and Kreimer to renormalization of perturbative quantum field theory. In particular, the Hopf algebra of rooted trees serves as the "baby model" of Feynman graphs in their approach and can be characterized by certain universal properties involving a Hochschild 1-cocycle. Decorated rooted trees have also been applied to study Feynman graphs. We will continue the study of universal properties of various spaces of decorated rooted trees with such a 1-cocycle, leading to the concept of a cocycle Hopf algebra. We further apply the universal properties to equip a free Rota-Baxter algebra with the structure of a cocycle Hopf algebra.
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.
2007-01-01
European options on coupon bonds are studied in a quantum field theory model of forward interest rates. Swaptions are briefly reviewed. An approximation scheme for the coupon bond option price is developed based on the fact that the volatility of the forward interest rates is a small quantity. The field theory for the forward interest rates is Gaussian, but when the payoff function for the coupon bond option is included it makes the field theory nonlocal and nonlinear. A perturbation expansion using Feynman diagrams gives a closed form approximation for the price of coupon bond option. A special case of the approximate bond option is shown to yield the industry standard one-factor HJM formula with exponential volatility.
Solution of a cauchy problem for a diffusion equation in a Hilbert space by a Feynman formula
NASA Astrophysics Data System (ADS)
Remizov, I. D.
2012-07-01
The Cauchy problem for a class of diffusion equations in a Hilbert space is studied. It is proved that the Cauchy problem is well posed in the class of uniform limits of infinitely smooth bounded cylindrical functions on the Hilbert space, and the solution is presented in the form of the so-called Feynman formula, i.e., a limit of multiple integrals against a Gaussian measure as the multiplicity tends to infinity. It is also proved that the solution of the Cauchy problem depends continuously on the diffusion coefficient. A procedure reducing the approximate solution of an infinite-dimensional diffusion equation to finding a multiple integral of a real function of finitely many real variables is indicated.
Feynman propagator for spin foam quantum gravity.
Oriti, Daniele
2005-03-25
We link the notion of causality with the orientation of the spin foam 2-complex. We show that all current spin foam models are orientation independent. Using the technology of evolution kernels for quantum fields on Lie groups, we construct a generalized version of spin foam models, introducing an extra proper time variable. We prove that different ranges of integration for this variable lead to different classes of spin foam models: the usual ones, interpreted as the quantum gravity analogue of the Hadamard function of quantum field theory (QFT) or as inner products between quantum gravity states; and a new class of causal models, the quantum gravity analogue of the Feynman propagator in QFT, a nontrivial function of the orientation data, implying a notion of "timeless ordering".
Eggert, Thomas; Straube, Andreas
2016-01-01
This study investigates the inter-trial variability of saccade trajectories observed in five rhesus macaques (Macaca mulatta). For each time point during a saccade, the inter-trial variance of eye position and its covariance with eye end position were evaluated. Data were modeled by a superposition of three noise components due to 1) planning noise, 2) signal-dependent motor noise, and 3) signal-dependent premotor noise entering within an internal feedback loop. Both planning noise and signal-dependent motor noise (together called accumulating noise) predict a simple S-shaped variance increase during saccades, which was not sufficient to explain the data. Adding noise within an internal feedback loop enabled the model to mimic variance/covariance structure in each monkey, and to estimate the noise amplitudes and the feedback gain. Feedback noise had little effect on end point noise, which was dominated by accumulating noise. This analysis was further extended to saccades executed during inactivation of the caudal fastigial nucleus (cFN) on one side of the cerebellum. Saccades ipsiversive to an inactivated cFN showed more end point variance than did normal saccades. During cFN inactivation, eye position during saccades was statistically more strongly coupled to eye position at saccade end. The proposed model could fit the variance/covariance structure of ipsiversive and contraversive saccades. Inactivation effects on saccade noise are explained by a decrease of the feedback gain and an increase of planning and/or signal-dependent motor noise. The decrease of the fitted feedback gain is consistent with previous studies suggesting a role for the cerebellum in an internal feedback mechanism. Increased end point variance did not result from impaired feedback but from the increase of accumulating noise. The effects of cFN inactivation on saccade noise indicate that the effects of cFN inactivation cannot be explained entirely with the cFN’s direct connections to the saccade-related premotor centers in the brainstem. PMID:27351741
Genetic parameters of legendre polynomials for first parity lactation curves.
Pool, M H; Janss, L L; Meuwissen, T H
2000-11-01
Variance components of the covariance function coefficients in a random regression test-day model were estimated by Legendre polynomials up to fifth order for first-parity records of Dutch dairy cows using Gibbs sampling. Two Legendre polynomials of equal order were used to model the random part of the lactation curve, one for the genetic component and one for the permanent environment. Test-day records from cows registered between 1990 and 1996 and collected by regular milk recording were available. For the data set, 23,700 complete lactations were selected from 475 herds sired by 262 sires. Because the application of a random regression model is limited by computing capacity, we investigated the minimum order needed to fit the variance structure in the data sufficiently. Predictions of the genetic and permanent environmental variance structures were compared with bivariate estimates on 30-d intervals. A third-order or higher polynomial modeled the shape of the variance curves over DIM with sufficient accuracy for the genetic and permanent environment parts. The genetic correlation structure was also fitted with sufficient accuracy by a third-order polynomial, but a fourth order was needed for the permanent environmental component. Because equal orders are suggested in the literature, a fourth-order Legendre polynomial is recommended in this study. However, a rank of three for the genetic covariance matrix and of four for the permanent environment allows a simpler covariance function with a reduced number of parameters, based on the eigenvalues and eigenvectors.
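A sketch of how such a covariance function is evaluated once the coefficient matrix is estimated: days in milk (DIM) are standardized to [-1, 1], Legendre polynomials are evaluated there, and the (co)variance between any two test days is φ(t₁)′Kφ(t₂). The 3 × 3 coefficient matrix below is an arbitrary illustration (plain, unnormalized Legendre polynomials are used), not the estimated genetic matrix.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(dim, order, dim_min=5, dim_max=305):
    """Evaluate Legendre polynomials P_0..P_order at standardized DIM."""
    t = 2.0 * (np.asarray(dim, float) - dim_min) / (dim_max - dim_min) - 1.0
    return np.column_stack(
        [legendre.legval(t, np.eye(order + 1)[j]) for j in range(order + 1)]
    )

def covariance_function(dim1, dim2, K):
    """(Co)variance between two test days: phi(t1)' K phi(t2)."""
    order = K.shape[0] - 1
    p1 = legendre_basis([dim1], order)[0]
    p2 = legendre_basis([dim2], order)[0]
    return p1 @ K @ p2

K = np.array([[3.0, 0.5, 0.1],      # hypothetical coefficient matrix,
              [0.5, 1.0, 0.2],      # order 2 for brevity
              [0.1, 0.2, 0.4]])
var_day60 = covariance_function(60, 60, K)
cov_60_200 = covariance_function(60, 200, K)
```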
Bijma, Piter
2011-01-01
Genetic selection is a major force shaping life on earth. In classical genetic theory, response to selection is the product of the strength of selection and the additive genetic variance in a trait. The additive genetic variance reflects a population’s intrinsic potential to respond to selection. The ordinary additive genetic variance, however, ignores the social organization of life. With social interactions among individuals, individual trait values may depend on genes in others, a phenomenon known as indirect genetic effects. Models accounting for indirect genetic effects, however, lack a general definition of heritable variation. Here I propose a general definition of the heritable variation that determines the potential of a population to respond to selection. This generalizes the concept of heritable variance to any inheritance model and level of organization. The result shows that heritable variance determining potential response to selection is the variance among individuals in the heritable quantity that determines the population mean trait value, rather than the usual additive genetic component of phenotypic variance. It follows, therefore, that heritable variance may exceed phenotypic variance among individuals, which is impossible in classical theory. This work also provides a measure of the utilization of heritable variation for response to selection and integrates two well-known models of maternal genetic effects. The result shows that relatedness between the focal individual and the individuals affecting its fitness is a key determinant of the utilization of heritable variance for response to selection. PMID:21926298
Sex versus asex: An analysis of the role of variance conversion.
Lewis-Pye, Andrew E M; Montalbán, Antonio
2017-04-01
The question as to why most complex organisms reproduce sexually remains a very active research area in evolutionary biology. Theories dating back to Weismann have suggested that the key may lie in the creation of increased variability in offspring, causing enhanced response to selection. Under appropriate conditions, selection is known to result in the generation of negative linkage disequilibrium, with the effect of recombination then being to increase genetic variance by reducing these negative associations between alleles. It has therefore been a matter of significant interest to understand precisely those conditions resulting in negative linkage disequilibrium, and to recognise also the conditions in which the corresponding increase in genetic variation will be advantageous. Here, we prove rigorous results for the multi-locus case, detailing the build up of negative linkage disequilibrium, and describing the long term effect on population fitness for models with and without bounds on fitness contributions from individual alleles. Under the assumption of large but finite bounds on fitness contributions from alleles, the non-linear nature of the effect of recombination on a population presents serious obstacles in finding the genetic composition of populations at equilibrium, and in establishing convergence to those equilibria. We describe techniques for analysing the long term behaviour of sexual and asexual populations for such models, and use these techniques to establish conditions resulting in higher fitnesses for sexually reproducing populations. Copyright © 2017 Elsevier Inc. All rights reserved.
Effect of various putty-wash impression techniques on marginal fit of cast crowns.
Nissan, Joseph; Rosner, Ofir; Bukhari, Mohammed Amin; Ghelfan, Oded; Pilo, Raphael
2013-01-01
Marginal fit is an important clinical factor that affects restoration longevity. The accuracy of three polyvinyl siloxane putty-wash impression techniques was compared by marginal fit assessment using the nondestructive method. A stainless steel master cast containing three abutments with three metal crowns matching the three preparations was used to make 45 impressions: group A = single-step technique (putty and wash impression materials used simultaneously), group B = two-step technique with a 2-mm relief (putty as a preliminary impression to create a 2-mm wash space followed by the wash stage), and group C = two-step technique with a polyethylene spacer (plastic spacer used with the putty impression followed by the wash stage). Accuracy was assessed using a toolmaker microscope to measure and compare the marginal gaps between each crown and finish line on the duplicated stone casts. Each abutment was further measured at the mesial, buccal, and distal aspects. One-way analysis of variance was used for statistical analysis. P values and Scheffe post hoc contrasts were calculated. Significance was determined at .05. One-way analysis of variance showed significant differences among the three impression techniques in all three abutments and at all three locations (P < .001). Group B yielded dies with minimal gaps compared to groups A and C. The two-step impression technique with 2-mm relief was the most accurate regarding the crucial clinical factor of marginal fit.
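A minimal sketch of the statistical comparison above, with hypothetical marginal-gap data (µm) for the three techniques; scipy's one-way ANOVA stands in for the analysis and the Scheffé post hoc step is omitted.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical marginal gaps (micrometers), 15 casts per technique.
group_a = rng.normal(90.0, 15.0, 15)   # single-step
group_b = rng.normal(70.0, 12.0, 15)   # two-step, 2-mm relief
group_c = rng.normal(95.0, 16.0, 15)   # two-step, polyethylene spacer

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # compare against alpha = .05
```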
Physical fitness level affects perception of chronic stress in military trainees.
Tuch, Carolin; Teubel, Thomas; La Marca, Roberto; Roos, Lilian; Annen, Hubert; Wyss, Thomas
2017-12-01
This study investigated whether physical fitness affects the perception of chronic stress in military trainees while controlling for established factors influencing stress perception. The sample consisted of 273 men (20.23 ± 1.12 years, 73.56 ± 10.52 kg, 1.78 ± 0.06 m). Physical fitness was measured by a progressive endurance run (maximum oxygen uptake; VO2max), standing long jump, seated shot put, trunk muscle strength, and one-leg standing test. Perceived stress was measured using the Perceived Stress Questionnaire in Weeks 1 and 11 of basic military training (BMT). VO2max and four influencing variables (perceived stress in Week 1, neuroticism, transformational leadership style, and education level) explained 44.44% of the variance of the increase in perceived stress during 10 weeks of BMT (R² = 0.444, F = 23.334, p < .001). The explained variance of VO2max was 4.14% (R² = 0.041), with a Cohen's f² effect size of 0.045 (a small effect by Cohen's convention). The results indicate a moderating influence of good aerobic fitness on the varied level of perceived stress. We conclude that it is advisable to provide conscripts with a specific endurance training program prior to BMT for stress prevention reasons. Copyright © 2016 John Wiley & Sons, Ltd.
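For orientation, Cohen's f² is computed from explained variance; a sketch under two common conventions follows. The placeholder R² values are not claimed to reproduce the reported 0.045, whose exact baseline the abstract does not spell out.

```python
def cohens_f2_increment(r2_full, r2_reduced):
    """Cohen's f^2 for an added predictor: delta-R^2 over unexplained."""
    return (r2_full - r2_reduced) / (1.0 - r2_full)

def cohens_f2_single(r2):
    """Cohen's f^2 for a model as a whole: R^2 over unexplained."""
    return r2 / (1.0 - r2)

print(round(cohens_f2_increment(0.444, 0.403), 3))  # placeholder R^2 values
print(round(cohens_f2_single(0.041), 3))
```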
Heidaritabar, M; Wolc, A; Arango, J; Zeng, J; Settar, P; Fulton, J E; O'Sullivan, N P; Bastiaansen, J W M; Fernando, R L; Garrick, D J; Dekkers, J C M
2016-10-01
Most genomic prediction studies fit only additive effects in models to estimate genomic breeding values (GEBV). However, if dominance genetic effects are an important source of variation for complex traits, accounting for them may improve the accuracy of GEBV. We investigated the effect of fitting dominance and additive effects on the accuracy of GEBV for eight egg production and quality traits in a purebred line of brown layers using pedigree or genomic information (42K single-nucleotide polymorphism (SNP) panel). Phenotypes were corrected for the effect of hatch date. Additive and dominance genetic variances were estimated using genomic-based [genomic best linear unbiased prediction (GBLUP)-REML and BayesC] and pedigree-based (PBLUP-REML) methods. Breeding values were predicted using a model that included both additive and dominance effects and a model that included only additive effects. The reference population consisted of approximately 1800 animals hatched between 2004 and 2009, while approximately 300 young animals hatched in 2010 were used for validation. Accuracy of prediction was computed as the correlation between phenotypes and estimated breeding values of the validation animals divided by the square root of the estimate of heritability in the whole population. The proportion of dominance variance to total phenotypic variance ranged from 0.03 to 0.22 with PBLUP-REML across traits, from 0 to 0.03 with GBLUP-REML and from 0.01 to 0.05 with BayesC. Accuracies of GEBV ranged from 0.28 to 0.60 across traits. Inclusion of dominance effects did not improve the accuracy of GEBV, and differences in their accuracies between genomic-based methods were small (0.01-0.05), with GBLUP-REML yielding higher prediction accuracies than BayesC for egg production, egg colour and yolk weight, while BayesC yielded higher accuracies than GBLUP-REML for the other traits. In conclusion, fitting dominance effects did not impact accuracy of genomic prediction of breeding values in this population. © 2016 Blackwell Verlag GmbH.
Genetic Diversity in the Interference Selection Limit
Good, Benjamin H.; Walczak, Aleksandra M.; Neher, Richard A.; Desai, Michael M.
2014-01-01
Pervasive natural selection can strongly influence observed patterns of genetic variation, but these effects remain poorly understood when multiple selected variants segregate in nearby regions of the genome. Classical population genetics fails to account for interference between linked mutations, which grows increasingly severe as the density of selected polymorphisms increases. Here, we describe a simple limit that emerges when interference is common, in which the fitness effects of individual mutations play a relatively minor role. Instead, similar to models of quantitative genetics, molecular evolution is determined by the variance in fitness within the population, defined over an effectively asexual segment of the genome (a “linkage block”). We exploit this insensitivity in a new “coarse-grained” coalescent framework, which approximates the effects of many weakly selected mutations with a smaller number of strongly selected mutations that create the same variance in fitness. This approximation generates accurate and efficient predictions for silent site variability when interference is common. However, these results suggest that there is reduced power to resolve individual selection pressures when interference is sufficiently widespread, since a broad range of parameters possess nearly identical patterns of silent site variability. PMID:24675740
Chiral limit of N = 4 SYM and ABJM and integrable Feynman graphs
NASA Astrophysics Data System (ADS)
Caetano, João; Gürdoğan, Ömer; Kazakov, Vladimir
2018-03-01
We consider a special double scaling limit, recently introduced by two of the authors, combining weak coupling and large imaginary twist, for the γ-twisted N = 4 SYM theory. We also establish the analogous limit for ABJM theory. The resulting non-gauge chiral 4D and 3D theories of interacting scalars and fermions are integrable in the planar limit. In spite of the breakdown of conformality by double-trace interactions, most of the correlators for local operators of these theories are conformal, with non-trivial anomalous dimensions defined by specific, integrable Feynman diagrams. We discuss the details of this diagrammatics. We construct the doubly-scaled asymptotic Bethe ansatz (ABA) equations for multi-magnon states in these theories. Each entry of the mixing matrix of local conformal operators in the simplest of these theories — the bi-scalar model in 4D and tri-scalar model in 3D — is given by a single Feynman diagram at any given loop order. The related diagrams are in principle computable, up to a few scheme dependent constants, by integrability methods (quantum spectral curve or ABA). These constants should be fixed from direct computations of a few simplest graphs. This integrability-based method is advocated to be able to provide information about some high loop order graphs which are hardly computable by other known methods. We exemplify our approach with specific five-loop graphs.
An approach toward the numerical evaluation of multi-loop Feynman diagrams
NASA Astrophysics Data System (ADS)
Passarino, Giampiero
2001-12-01
A scheme for systematically achieving accurate numerical evaluation of multi-loop Feynman diagrams is developed. This shows the feasibility of a project aimed at producing a complete calculation for two-loop predictions in the Standard Model. As a first step an algorithm, proposed by F.V. Tkachov and based on the so-called generalized Bernstein functional relation, is applied to one-loop multi-leg diagrams with particular emphasis on the presence of infrared singularities, on the problem of tensorial reduction, and on the classification of all singularities of a given diagram. Subsequently, the extension of the algorithm to two-loop diagrams is examined. The proposed solution consists in applying the functional relation to the one-loop sub-diagram which has the largest number of internal lines. In this way the integrand can be made smooth, apart from a factor which is a polynomial in xS, the vector of Feynman parameters needed for the complementary sub-diagram with the smallest number of internal lines. Since the procedure does not introduce new singularities, one can distort the xS-integration hyper-contour into the complex hyper-plane, thus achieving numerical stability. The algorithm is then modified to deal with numerical evaluation around normal thresholds. Concise and practical formulas are assembled and presented, and numerical results and comparisons with the available literature are shown and discussed for the so-called sunset topology.
Reduze - Feynman integral reduction in C++
NASA Astrophysics Data System (ADS)
Studerus, C.
2010-07-01
Reduze is a computer program for reducing Feynman integrals to master integrals employing a Laporta algorithm. The program is written in C++ and uses classes provided by the GiNaC library to perform the simplifications of the algebraic prefactors in the system of equations. Reduze offers the possibility to run reductions in parallel. Program summary: Program title: Reduze. Catalogue identifier: AEGE_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGE_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: yes. No. of lines in distributed program, including test data, etc.: 55 433. No. of bytes in distributed program, including test data, etc.: 554 866. Distribution format: tar.gz. Programming language: C++. Computer: All. Operating system: Unix/Linux. Number of processors used: problem dependent; more than one is possible, but not arbitrarily many. RAM: depends on the complexity of the system. Classification: 4.4, 5. External routines: CLN (http://www.ginac.de/CLN/), GiNaC (http://www.ginac.de/). Nature of problem: solving large systems of linear equations with Feynman integrals as unknowns and rational polynomials as prefactors. Solution method: a Gauss/Laporta algorithm is used to solve the system of equations. Restrictions: limitations depend on the complexity of the system (number of equations, number of kinematic invariants). Running time: depends on the complexity of the system.
Measuring kinetics of complex single ion channel data using mean-variance histograms.
Patlak, J B
1993-07-01
The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state, open channel noise, and fast flickers to other states were present, as were a substantial number of subconductance states. "Standard" half-amplitude threshold analysis of these data produce open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance histogram technique provided a more credible analysis of the open, closed, and subconductance times for the patch. I also show that the method produces accurate results on simulated data in a wide variety of conditions, whereas the half-amplitude method, when applied to complex simulated data shows the same errors as were apparent in the real data. The utility and the limitations of this new method are discussed.
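The histogram construction itself is compact; the sketch below computes sliding-window means and variances over a digitized trace and bins them in 2-D. The simulated two-level "channel" (independent samples, not a Markov gating model) and all parameters are invented for illustration.

```python
import numpy as np

def mean_variance_histogram(trace, window, bins=60):
    """Sliding-window mean/variance pairs assembled into a 2-D histogram."""
    csum = np.cumsum(np.insert(trace, 0, 0.0))
    csum2 = np.cumsum(np.insert(trace**2, 0, 0.0))
    means = (csum[window:] - csum[:-window]) / window
    variances = (csum2[window:] - csum2[:-window]) / window - means**2
    hist, mean_edges, var_edges = np.histogram2d(means, variances, bins=bins)
    return hist, mean_edges, var_edges

# Simulated two-level channel: closed (0 pA) and open (-2 pA) with noise.
rng = np.random.default_rng(5)
state = (rng.random(20000) < 0.3).astype(float)  # crude open/closed labels
trace = -2.0 * state + rng.normal(0.0, 0.4, 20000)
hist, me, ve = mean_variance_histogram(trace, window=10)
```

Defined current levels show up as low-variance ridges in `hist`; repeating the construction for increasing `window` widths yields the event-count decay the abstract describes.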
Self-Efficacy and green entrepreneurship
NASA Astrophysics Data System (ADS)
Tan, K. L.; Suhaida, S.; Leong, Y. P.
2013-06-01
The objective of this study is to investigate empirically the extent to which self-efficacy contributes to the development of green entrepreneurial intention. The measurement constructs of self-efficacy were classified into market opportunities, innovative environment, initiating relationships, defining purpose, coping with challenges, and developing human resources. The study comprises 252 usable convenience samples collected through structured questionnaires. The coefficient of determination R² indicates how much of the variance in the intention toward green entrepreneurship is explained by the variance of the independent variables. It was also found that the model is fit for prediction.
Zhang, Tao; Xiang, Ping; Gu, Xiangli; Rose, Melanie
2016-06-01
The 2 × 2 achievement goal model, including the mastery-approach, mastery-avoidance, performance-approach, and performance-avoidance goal orientations, has recently been used to explain motivational outcomes in physical activity. This study attempted to examine the relationships among 2 × 2 achievement goal orientations, physical activity, and health-related quality of life (HRQOL) in college students. Participants were 325 students (130 men and 195 women; Mage = 21.4 years) enrolled in physical activity classes at a Southern university. They completed surveys validated in previous research assessing achievement goal orientations, physical activity, and HRQOL. Path analyses revealed a good fit between the model and data (root mean square error of approximation = .06; Comparative Fit Index = .99; Bentler-Bonett Nonnormed Fit Index = .98; Incremental Fit Index = .99), but the model explained only a small proportion of variance in the current study. Mastery-approach and performance-approach goal orientations had only low or no relationships with physical activity. Mastery-approach goal orientation and physical activity also had low positive relationships with HRQOL, but mastery-avoidance and performance-avoidance goal orientations had low negative relationships with HRQOL. The hypothesized mediational role of physical activity in the relationship between mastery-approach and performance-approach goal orientations and HRQOL was not supported in this study. Although the data fit the proposed model well, only a small proportion of variance was explained by the model. The relationship between physical activity and HRQOL of college students and other related correlates should be further studied.
Khatua, Pradip; Bansal, Bhavtosh; Shahar, Dan
2014-01-10
In a "thought experiment," now a classic in physics pedagogy, Feynman visualizes Young's double-slit interference experiment with electrons in magnetic field. He shows that the addition of an Aharonov-Bohm phase is equivalent to shifting the zero-field wave interference pattern by an angle expected from the Lorentz force calculation for classical particles. We have performed this experiment with one slit, instead of two, where ballistic electrons within two-dimensional electron gas diffract through a small orifice formed by a quantum point contact (QPC). As the QPC width is comparable to the electron wavelength, the observed intensity profile is further modulated by the transverse waveguide modes present at the injector QPC. Our experiments open the way to realizing diffraction-based ideas in mesoscopic physics.
Using an atom interferometer to take the Gedanken out of Feynman's Gedankenexperiment
NASA Astrophysics Data System (ADS)
Pritchard, David E.; Hammond, Troy D.; Lenef, Alan; Rubenstein, Richard A.; Smith, Edward T.; Chapman, Michael S.; Schmiedmayer, Jörg
1997-01-01
We give a description of two experiments performed in an atom interferometer at MIT. By scattering a single photon off of the atom as it passes through the interferometer, we perform a version of a classic gedankenexperiment, a demonstration of a Feynman light microscope. As path information about the atom is gained, contrast in the atom fringes (coherence) is lost. The lost coherence is then recovered by observing only atoms which scatter photons into a particular final direction. This paper reflects the main emphasis of D. E. Pritchard's talk at the RIS meeting. Information about other topics covered in that talk, as well as a review of all of the published work performed with the MIT atom/molecule interferometer, is available on the world wide web at http://coffee.mit.edu/.
Critical exponents for diluted resistor networks
NASA Astrophysics Data System (ADS)
Stenull, O.; Janssen, H. K.; Oerding, K.
1999-05-01
An approach by Stephen [Phys. Rev. B 17, 4444 (1978)] is used to investigate the critical properties of randomly diluted resistor networks near the percolation threshold by means of renormalized field theory. We reformulate an existing field theory by Harris and Lubensky [Phys. Rev. B 35, 6964 (1987)]. By a decomposition of the principal Feynman diagrams, we obtain diagrams which again can be interpreted as resistor networks. This interpretation provides an alternative way of evaluating the Feynman diagrams for random resistor networks. We calculate the resistance crossover exponent φ up to second order in ε = 6 − d, where d is the spatial dimension. Our result φ = 1 + ε/42 + 4ε²/3087 verifies a previous calculation by Lubensky and Wang, which itself was based on the Potts-model formulation of the random resistor network.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Filinov, A.V.; Golubnychiy, V.O.; Bonitz, M.
Extending our previous work [A.V. Filinov et al., J. Phys. A 36, 5957 (2003)], we present a detailed discussion of the accuracy and practical applications of finite-temperature pseudopotentials for two-component Coulomb systems. Different pseudopotentials are discussed: (i) the diagonal Kelbg potential, (ii) the off-diagonal Kelbg potential, (iii) the improved diagonal Kelbg potential, (iv) an effective potential obtained with the Feynman-Kleinert variational principle, and (v) the 'exact' quantum pair potential derived from the two-particle density matrix. For the improved diagonal Kelbg potential, a simple temperature-dependent fit is derived which accurately reproduces the 'exact' pair potential in the whole temperature range. The derived pseudopotentials are then used in path integral Monte Carlo and molecular-dynamics (MD) simulations to obtain thermodynamical properties of strongly coupled hydrogen. It is demonstrated that classical MD simulations with spin-dependent interaction potentials for the electrons allow for an accurate description of the internal energy of hydrogen in the difficult regime of partial ionization down to temperatures of about 60 000 K. Finally, we point out an interesting relationship between the quantum potentials and the effective potentials used in density-functional theory.
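For orientation, the (diagonal) Kelbg pseudopotential referenced above has, to the best of my recollection, the form

```latex
\Phi_{ab}(r) \;=\; \frac{q_a q_b}{r}\left[\,1 - e^{-r^2/\lambda_{ab}^2}
 \;+\; \sqrt{\pi}\,\frac{r}{\lambda_{ab}}
 \left(1-\operatorname{erf}\!\frac{r}{\lambda_{ab}}\right)\right],
\qquad \lambda_{ab}^2 \;=\; \frac{\hbar^2}{2\mu_{ab}k_B T},
```

with μ_ab the reduced mass of the pair; it is finite at r = 0, unlike the bare Coulomb potential. Check the cited paper for the exact conventions of the improved and off-diagonal variants.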
Vitezica, Zulma G; Varona, Luis; Legarra, Andres
2013-12-01
Genomic evaluation models can fit additive and dominant SNP effects. Under quantitative genetics theory, additive or "breeding" values of individuals are generated by substitution effects, which involve both "biological" additive and dominant effects of the markers. Dominance deviations include only a portion of the biological dominant effects of the markers. Additive variance includes variation due to the additive and dominant effects of the markers. We describe a matrix of dominant genomic relationships across individuals, D, which is similar to the G matrix used in genomic best linear unbiased prediction. This matrix can be used in a mixed-model context for genomic evaluations or to estimate dominant and additive variances in the population. From the "genotypic" value of individuals, an alternative parameterization defines additive and dominance as the parts attributable to the additive and dominant effect of the markers. This approach underestimates the additive genetic variance and overestimates the dominance variance. Transforming the variances from one model into the other is trivial if the distribution of allelic frequencies is known. We illustrate these results with mouse data (four traits, 1884 mice, and 10,946 markers) and simulated data (2100 individuals and 10,000 markers). Variance components were estimated correctly in the model, considering breeding values and dominance deviations. For the model considering genotypic values, the inclusion of dominant effects biased the estimate of additive variance. Genomic models were more accurate for the estimation of variance components than their pedigree-based counterparts.
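As a rough illustration of the two relationship matrices discussed above, the sketch below builds a VanRaden-style additive matrix G and a dominance matrix D from a 0/1/2 genotype matrix. The dominance coding and scaling follow the breeding-value/dominance-deviation parameterization described in the abstract; the allele frequencies are estimated from the toy data itself, which, like the data, is an assumption of this sketch rather than anything from the paper.

```python
import numpy as np

def genomic_matrices(M):
    """Additive (G) and dominance (D) genomic relationship matrices from a
    genotype matrix M (individuals x markers, coded 0/1/2). A sketch of the
    breeding-value/dominance-deviation parameterization; allele frequencies
    are estimated from M itself, an assumption of this illustration."""
    p = M.mean(axis=0) / 2.0            # frequency of the counted allele
    q = 1.0 - p

    # Additive incidence: centered gene content (VanRaden-style G)
    Z = M - 2.0 * p
    G = Z @ Z.T / np.sum(2.0 * p * q)

    # Dominance-deviation incidence: -2p^2, 2pq, -2q^2 for genotypes 0, 1, 2
    W = np.empty_like(Z, dtype=float)
    for j in range(M.shape[1]):
        lut = np.array([-2.0 * p[j]**2, 2.0 * p[j] * q[j], -2.0 * q[j]**2])
        W[:, j] = lut[M[:, j].astype(int)]
    D = W @ W.T / np.sum((2.0 * p * q) ** 2)
    return G, D

# Toy data: 6 individuals, 8 markers
rng = np.random.default_rng(0)
M = rng.integers(0, 3, size=(6, 8))
G, D = genomic_matrices(M)
```

Both matrices can then enter a mixed model as covariance structures of additive and dominance random effects, which is the use case the abstract describes.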
Teacher Burnout: A Comparison of Two Cultures Using Confirmatory Factor and Item Response Models
Denton, Ellen-ge; Chaplin, William F.; Wall, Melanie
2014-01-01
The present study addresses teacher burnout and in particular cultural differences and similarities in burnout. We used the Maslach Burnout Inventory Education Survey (MBI-ES) as the starting point for developing a latent model of burnout in two cultures: Jamaica W.I. teachers (N = 150) and New York City teachers (N = 150). We confirm a latent three-factor structure, using a subset of the items from the MBI-ES that adequately fit both samples. We tested different degrees of measurement invariance (model fit statistics, scale reliabilities, residual variances, item thresholds, and total variance) to describe and compare cultural differences. Results indicate some differences between the samples at the structure and item levels. We found that factor variances were slightly higher in the New York City teacher sample. Emotional Exhaustion (EE) was a more informative construct for differentiating among teachers at moderate levels of burnout, as opposed to extremely high or low levels of burnout, in both cultures. In contrast, Depersonalization in the Workplace (DW) was more informative at the more extreme levels of burnout among both teacher samples. By studying the influence of culture on the experience of burnout we can further our understanding of burnout and potentially discover factors that might prevent burnout among primary and secondary school teachers. PMID:25729572
Poissant, Jocelyn; Wilson, Alastair J; Coltman, David W
2010-01-01
The independent evolution of the sexes may often be constrained if male and female homologous traits share a similar genetic architecture. Thus, cross-sex genetic covariance is assumed to play a key role in the evolution of sexual dimorphism (SD) with consequent impacts on sexual selection, population dynamics, and speciation processes. We compiled cross-sex genetic correlations (r(MF)) estimates from 114 sources to assess the extent to which the evolution of SD is typically constrained and test several specific hypotheses. First, we tested if r(MF) differed among trait types and especially between fitness components and other traits. We also tested the theoretical prediction of a negative relationship between r(MF) and SD based on the expectation that increases in SD should be facilitated by sex-specific genetic variance. We show that r(MF) is usually large and positive but that it is typically smaller for fitness components. This demonstrates that the evolution of SD is typically genetically constrained and that sex-specific selection coefficients may often be opposite in sign due to sub-optimal levels of SD. Most importantly, we confirm that sex-specific genetic variance is an important contributor to the evolution of SD by validating the prediction of a negative correlation between r(MF) and SD.
ALOHA: Automatic libraries of helicity amplitudes for Feynman diagram computations
NASA Astrophysics Data System (ADS)
de Aquino, Priscila; Link, William; Maltoni, Fabio; Mattelaer, Olivier; Stelzer, Tim
2012-10-01
We present an application that automatically writes the HELAS (HELicity Amplitude Subroutines) library corresponding to the Feynman rules of any quantum field theory Lagrangian. The code is written in Python and takes the Universal FeynRules Output (UFO) as an input. From this input it produces the complete set of routines, wave-functions and amplitudes, that are needed for the computation of Feynman diagrams at leading as well as at higher orders. The representation is language independent and currently it can output routines in Fortran, C++, and Python. A few sample applications implemented in the MADGRAPH 5 framework are presented. Program summary Program title: ALOHA Catalogue identifier: AEMS_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMS_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: http://www.opensource.org/licenses/UoI-NCSA.php No. of lines in distributed program, including test data, etc.: 6094320 No. of bytes in distributed program, including test data, etc.: 7479819 Distribution format: tar.gz Programming language: Python 2.6 Computer: 32/64 bit Operating system: Linux/Mac/Windows RAM: 512 Mbytes Classification: 4.4, 11.6 Nature of problem: An efficient numerical evaluation of a squared matrix element can be done with the help of the helicity routines implemented in the HELAS library [1]. This static library contains a limited number of helicity functions and is therefore not always able to provide the needed routine in the presence of an arbitrary interaction. This program provides a way to automatically create the corresponding routines for any given model. Solution method: ALOHA takes the Feynman rules associated with the vertex obtained from the model information (in the UFO format [2]) and multiplies them by the different wavefunctions or propagators. As a result, the analytical expression of the helicity routines is obtained. Subsequently, this expression is automatically written in the requested language (Python, Fortran or C++). Restrictions: The allowed fields are currently spin 0, 1/2, 1 and 2, and the propagators of these particles are canonical. Running time: A few seconds for the SM and the MSSM, and up to a few minutes for models with spin 2 particles. References: [1] Murayama, H., Watanabe, I. and Hagiwara, K., HELAS: HELicity Amplitude Subroutines for Feynman diagram evaluations, KEK-91-11 (1992) http://www-lib.kek.jp/cgi-bin/img_index?199124011 [2] C. Degrande, C. Duhr, B. Fuks, D. Grellscheid, O. Mattelaer, et al., UFO - The Universal FeynRules Output, Comput. Phys. Commun. 183 (2012) 1201-1214. arXiv:1108.2040, doi:10.1016/j.cpc.2012.01.022.
Fuchsia : A tool for reducing differential equations for Feynman master integrals to epsilon form
NASA Astrophysics Data System (ADS)
Gituliar, Oleksandr; Magerya, Vitaly
2017-10-01
We present Fuchsia, an implementation of the Lee algorithm, which for a given system of ordinary differential equations with rational coefficients, ∂_x J(x, ε) = A(x, ε) J(x, ε), finds a basis transformation T(x, ε), i.e., J(x, ε) = T(x, ε) J′(x, ε), such that the system turns into the epsilon form: ∂_x J′(x, ε) = ε S(x) J′(x, ε), where S(x) is a Fuchsian matrix. A system of this form can be trivially solved in terms of polylogarithms as a Laurent series in the dimensional regulator ε. That makes the construction of the transformation T(x, ε) crucial for obtaining solutions of the initial system. In principle, Fuchsia can deal with any regular system; however, its primary task is to reduce differential equations for Feynman master integrals, whose solutions contain only regular singularities due to the properties of Feynman integrals. Program Files doi: http://dx.doi.org/10.17632/zj6zn9vfkh.1 Licensing provisions: MIT Programming language: Python 2.7 Nature of problem: Feynman master integrals may be calculated from solutions of a linear system of differential equations with rational coefficients. Such a system can be easily solved as an ε-series when its epsilon form is known. Hence, a tool which is able to find the epsilon form transformation can be used to evaluate Feynman master integrals. Solution method: The solution method is based on the Lee algorithm (Lee, 2015), which consists of three main steps: fuchsification, normalization, and factorization. During the fuchsification step a given system of differential equations is transformed into Fuchsian form with the help of the Moser method (Moser, 1959). Next, during the normalization step the system is transformed to a form where the eigenvalues of all residues are proportional to the dimensional regulator ε. Finally, the system is factorized to the epsilon form by finding an unknown transformation which satisfies a system of linear equations. Additional comments including Restrictions and Unusual features: Systems of single-variable differential equations are considered. A system needs to be reducible to Fuchsian form, and the eigenvalues of its residues must be of the form n + mε, where n is an integer. Performance depends upon the input matrix: its size, the number of singular points, and their degrees. It takes around an hour to reduce an example 74 × 74 matrix with 20 singular points on a PC with a 1.7 GHz Intel Core i5 CPU. An additional slowdown is to be expected for matrices with complex and/or irrational singular point locations, as these are particularly difficult for symbolic algebra software to handle.
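To make the gauge-transformation step concrete, here is a minimal sympy sketch, not taken from the program itself: the 1×1 toy system and the balance T are assumptions, but the transformation rule A′ = T⁻¹(AT − ∂_x T) is the one such reductions use.

```python
import sympy as sp

x, eps = sp.symbols('x epsilon')

# Toy 1x1 "system": dJ/dx = A(x, eps) J with A = (eps - 1)/x.
# The transformation T = 1/x removes the eps-independent part of the
# residue at x = 0, leaving the system in epsilon form.
A = sp.Matrix([[(eps - 1) / x]])
T = sp.Matrix([[1 / x]])

# Gauge transformation rule: A' = T^{-1} (A T - dT/dx)
A_prime = sp.simplify(T.inv() * (A * T - T.diff(x)))
print(A_prime)   # Matrix([[epsilon/x]]) -> epsilon form with S(x) = 1/x
```

The resulting system ∂_x J′ = (ε/x) J′ integrates order by order in ε to logarithms, the simplest instance of the polylogarithmic series mentioned above.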
Nichols, J.D.; Pollock, K.H.
1983-01-01
Capture-recapture models can be used to estimate parameters of interest from paleobiological data when encounter probabilities are unknown and variable over time. These models also permit estimation of sampling variances, and goodness-of-fit tests are available for assessing the fit of data to most models. The authors describe capture-recapture models which should be useful in paleobiological analyses and discuss the assumptions which underlie them. They illustrate these models with examples and discuss aspects of study design.
Fourth-order self-energy contribution to the two loop Lamb shift
NASA Astrophysics Data System (ADS)
Palur Mallampalli, Subrahmanyam
1998-11-01
The calculation of the two loop Lamb shift in hydrogenic ions involves the numerical evaluation of ten Feynman diagrams. In this thesis, four fourth-order Feynman diagrams including the pure self-energy contributions are evaluated using exact Dirac-Coulomb propagators, so that higher order binding corrections can be extracted by comparing with the known terms in the Zα expansion. The entire calculation is performed in Feynman gauge. One of the vacuum polarization diagrams is evaluated in the Uehling approximation. At low Z, it is seen to be perturbative in Zα, while new predictions for high Z are made. The calculation of the three self-energy diagrams is reorganized into four terms, which we call the PO, M, F and P terms. The PO term is separately gauge invariant while the latter three form a gauge invariant set. The PO term is shown to exhibit the most non-perturbative behavior yet encountered in QED at low Z, so much so that even at Z = 1, the complete result is of the opposite sign to that of the leading term in its Zα expansion. At high Z, we agree with an earlier calculation. The analysis of ultraviolet divergences in the two loop self-energy is complicated by the presence of subdivergences. All divergences except the self-mass are shown to cancel. The self-mass is then removed by a self-mass counterterm. Parts of the calculation are shown to contain reference state singularities that finally cancel. A numerical regulator to handle these singularities is described. The M term, an ultraviolet finite quantity, is defined through a subtraction scheme in coordinate space. Being computationally intensive, it is evaluated only at high Z, specifically Z = 83 and 92. The F term involves the evaluation of several Feynman diagrams with free electron propagators. These are computed for a range of values of Z. The P term, also ultraviolet finite, involves Dirac-Coulomb propagators that are best defined in coordinate space, as well as functions associated with the one loop self-energy that are best defined in momentum space. Possible methods of evaluating the P term are discussed.
Haanstra, Tsjitske M.; Tilbury, Claire; Kamper, Steven J.; Tordoir, Rutger L.; Vliet Vlieland, Thea P. M.; Nelissen, Rob G. H. H.; Cuijpers, Pim; de Vet, Henrica C. W.; Dekker, Joost; Knol, Dirk L.; Ostelo, Raymond W.
2015-01-01
Objectives The constructs optimism, pessimism, hope, treatment credibility and treatment expectancy are associated with outcomes of medical treatment. While these constructs are grounded in different theoretical models, they nonetheless show some conceptual overlap. The purpose of this study was to examine whether currently available measurement instruments for these constructs capture the conceptual differences between these constructs within a treatment setting. Methods Patients undergoing Total Hip and Total Knee Arthroplasty (THA and TKA) (Total N = 361; 182 THA; 179 TKA) completed the Life Orientation Test-Revised for optimism and pessimism, the Hope Scale, and the Credibility Expectancy Questionnaire for treatment credibility and treatment expectancy. Confirmatory factor analysis was used to examine whether the instruments measure distinct constructs. Four theory-driven models with one, two, four and five latent factors were evaluated using multiple fit indices and Δχ² tests, followed by some posthoc models. Results The results of the theory-driven confirmatory factor analysis showed that a five-factor model in which all constructs loaded on separate factors yielded the best and a satisfactory fit. Posthoc, a bifactor model in which (besides the five separate factors) a general factor is hypothesized, accounting for the commonality of the items, showed a significantly better fit than the five-factor model. All specific factors, except for the hope factor, were shown to explain a substantial amount of variance beyond the general factor. Conclusion Based on our primary analyses we conclude that optimism, pessimism, hope, treatment credibility and treatment expectancy are distinguishable in THA and TKA patients. Posthoc, we determined that all constructs, except hope, showed substantial specific variance, while also sharing some general variance. PMID:26214176
Wortmann, Franz J; Wortmann, Gabriele; Haake, Hans-Martin; Eisfeld, Wolf
2014-01-01
Through measurements of three different hair samples (virgin and treated) by the torsional pendulum method (22°C, 22% RH), a systematic decrease of the torsional storage modulus G' with increasing fiber diameter, i.e., polar moment of inertia, is observed. G' is therefore not a material constant for hair. This change of G' implies a systematic component of data variance, which significantly contributes to the limitations of the torsional method for cosmetic claim support. Fitting the data on the basis of a core/shell model for cortex and cuticle makes it possible to separate this systematic component of variance and to greatly enhance the discriminative power of the test. The fitting procedure also provides values for the torsional storage moduli of the morphological components, confirming that the cuticle modulus is substantially higher than that of the cortex. The results give consistent insight into the changes imparted to the morphological components by the cosmetic treatments.
Order-constrained linear optimization.
Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P
2017-11-01
Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
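A minimal sketch of the two-stage idea described above, under a simple simulated design: first search for weights that maximize Kendall's τ between the linear predictor and the response, then calibrate the metric fit by least squares conditional on that ordinal solution. This illustrates the principle only; it is not the authors' OCLO algorithm, and the data, starting values, and optimizer are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -0.5, 2.0]) + rng.standard_t(df=2, size=200)  # fat-tailed noise

def neg_tau(w):
    # Ordinal objective: rank agreement between the linear predictor and y
    return -kendalltau(X @ w, y)[0]

# Stage 1: weights maximizing the ordinal fit (OLS solution as a start;
# the objective is piecewise constant, so a derivative-free method is used)
w0 = np.linalg.lstsq(X, y, rcond=None)[0]
w = minimize(neg_tau, w0, method='Nelder-Mead').x

# Stage 2: least-squares affine calibration conditional on the ranks
s = X @ w
a, b = np.polyfit(s, y, 1)
y_hat = a * s + b
```

The two stages mirror the paper's description: the metric (least-squares) fit is optimized only within the set of solutions that already maximize the ordinal fit.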
Experience versus talent shapes the structure of the Web.
Kong, Joseph S; Sarshar, Nima; Roychowdhury, Vwani P
2008-09-16
We use sequential large-scale crawl data to empirically investigate and validate the dynamics that underlie the evolution of the structure of the web. We find that the overall structure of the web is defined by an intricate interplay between experience or entitlement of the pages (as measured by the number of inbound hyperlinks a page already has), inherent talent or fitness of the pages (as measured by the likelihood that someone visiting the page would give a hyperlink to it), and the continual high rates of birth and death of pages on the web. We find that the web is conservative in judging talent and the overall fitness distribution is exponential, showing low variability. The small variance in talent, however, is enough to lead to experience distributions with high variance: The preferential attachment mechanism amplifies these small biases and leads to heavy-tailed power-law (PL) inbound degree distributions over all pages, as well as over pages that are of the same age. The balancing act between experience and talent on the web allows newly introduced pages with novel and interesting content to grow quickly and surpass older pages. In this regard, it is much like what we observe in high-mobility and meritocratic societies: People with entitlement continue to have access to the best resources, but there is just enough screening for fitness that allows for talented winners to emerge and join the ranks of the leaders. Finally, we show that the fitness estimates have potential practical applications in ranking query results.
NASA Astrophysics Data System (ADS)
Liu, J.; Lu, W. Q.
2010-03-01
This paper presents detailed MD simulations of the properties, including thermal conductivities and viscosities, of quantum fluid helium at different state points. The molecular interactions are represented by Lennard-Jones pair potentials supplemented by quantum corrections following the Feynman-Hibbs approach, and the properties are calculated using the Green-Kubo equations. A comparison among the numerical results using the LJ and QFH potentials and the existing database shows that the LJ model is not quantitatively correct for supercritical liquid helium; the quantum effect must therefore be taken into account when quantum fluid helium is studied. The thermal conductivity is also compared as a function of temperature and pressure, and the results show that the quantum correction is an efficient tool for obtaining thermal conductivities.
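The quadratic Feynman-Hibbs correction used in this kind of study can be written down compactly: U_FH(r) = U_LJ(r) + (ħ²/24 μ k_B T)(U″ + 2U′/r), with μ = m/2 the reduced mass of the pair. A sketch, assuming illustrative helium-4 Lennard-Jones parameters rather than the paper's exact values and state points:

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J/K

def lj(r, eps, sigma):
    sr6 = (sigma / r) ** 6
    return 4 * eps * (sr6**2 - sr6)

def lj_d1(r, eps, sigma):                       # dU/dr
    sr6 = (sigma / r) ** 6
    return 4 * eps * (-12 * sr6**2 + 6 * sr6) / r

def lj_d2(r, eps, sigma):                       # d2U/dr2
    sr6 = (sigma / r) ** 6
    return 4 * eps * (156 * sr6**2 - 42 * sr6) / r**2

def feynman_hibbs(r, T, m, eps, sigma):
    """Quadratic Feynman-Hibbs effective pair potential:
    U_FH = U_LJ + (hbar^2 / 24 mu kB T) * (U'' + 2 U'/r), mu = m/2."""
    mu = m / 2.0
    corr = HBAR**2 / (24.0 * mu * KB * T)
    return lj(r, eps, sigma) + corr * (lj_d2(r, eps, sigma) + 2 * lj_d1(r, eps, sigma) / r)

# Helium-4 parameters (illustrative assumptions of this sketch)
m_he = 6.6465e-27                          # kg
eps_he, sigma_he = 10.22 * KB, 2.556e-10   # J, m
r = np.linspace(2.2e-10, 8e-10, 200)
u_fh = feynman_hibbs(r, 20.0, m_he, eps_he, sigma_he)
```

The correction softens the repulsive core at low temperature, which is the qualitative quantum effect the abstract reports for supercritical helium.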
NASA Astrophysics Data System (ADS)
Goyal, Ketan; Kawai, Ryoichi
As nanotechnology advances, understanding of the thermodynamic properties of small systems becomes increasingly important. Such systems are found throughout physics, biology, and chemistry, manifesting striking properties that are a direct result of their small dimensions, where fluctuations become predominant. The standard theory of thermodynamics for macroscopic systems is powerless for such ever-fluctuating systems. Furthermore, as small systems are inherently quantum mechanical, the influence of quantum effects such as discreteness and quantum entanglement on their thermodynamic properties is of great interest. In particular, quantum fluctuations due to the uncertainty principle may play a significant role. In this talk, we investigate the thermodynamic properties of an autonomous quantum heat engine, resembling a quantum version of the Feynman ratchet, under non-equilibrium conditions, based on the theory of open quantum systems. The heat engine consists of multiple subsystems individually coupled to different thermal environments.
Nonperturbative dynamics of scalar field theories through the Feynman-Schwinger representation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cetin Savkli; Franz Gross; John Tjon
2004-04-01
In this paper we present a summary of results obtained for scalar field theories using the Feynman-Schwinger representation (FSR) approach. Specifically, scalar QED and χ²φ theories are considered. The motivation behind the applications discussed in this paper is to use the FSR method as a rigorous tool for testing the quality of commonly used approximations in field theory. Exact calculations in a quenched theory are presented for one-, two-, and three-body bound states. The results obtained indicate that some of the commonly used approximations, such as the Bethe-Salpeter ladder summation for bound states and the rainbow summation for one-body problems, produce significantly different results from those obtained with the FSR approach. We find that more accurate results can be obtained using other, simpler, approximation schemes.
Finally making sense of the double-slit experiment.
Aharonov, Yakir; Cohen, Eliahu; Colombo, Fabrizio; Landsberger, Tomer; Sabadini, Irene; Struppa, Daniele C; Tollaksen, Jeff
2017-06-20
Feynman stated that the double-slit experiment "…has in it the heart of quantum mechanics. In reality, it contains the only mystery" and that "nobody can give you a deeper explanation of this phenomenon than I have given; that is, a description of it" [Feynman R, Leighton R, Sands M (1965) The Feynman Lectures on Physics]. We rise to the challenge with an alternative to the wave function-centered interpretations: instead of a quantum wave passing through both slits, we have a localized particle with nonlocal interactions with the other slit. Key to this explanation is dynamical nonlocality, which naturally appears in the Heisenberg picture as nonlocal equations of motion. This insight led us to develop an approach to quantum mechanics which relies on pre- and postselection, weak measurements, and deterministic and modular variables. We consider those properties of a single particle that are deterministic to be primal. The Heisenberg picture allows us to specify the most complete enumeration of such deterministic properties, in contrast to the Schrödinger wave function, which remains an ensemble property. We exercise this approach by analyzing a version of the double-slit experiment augmented with postselection, showing that only it, and not the wave function approach, can be accommodated within a time-symmetric interpretation, where interference appears even when the particle is localized. Although the Heisenberg and Schrödinger pictures are equivalent formulations, the framework presented here has led to insights, intuitions, and experiments that were missed from the old perspective.
A structural test of the Integrated Motivational-Volitional model of suicidal behaviour.
Dhingra, Katie; Boduszek, Daniel; O'Connor, Rory C
2016-05-30
Suicidal behaviours are highly complex, multi-determined phenomena. Despite this, historically research has tended to focus on bivariate associations between atheoretical demographic and/or psychiatric factors and suicidal behaviour. The aim of this study was to empirically test the Integrated Motivational-Volitional model of suicidal behaviour using structural equation modelling. Healthy adults (N=1809) completed anonymous self-report surveys. The fit of the proposed model was good, and explained 79% of variance in defeat, 83% of variance in entrapment, 61% of variance in suicidal ideation, and 27% of variance in suicide attempts. All proposed paths were significant except for those between goal re-engagement and two factors of suicide resilience (Internal Protective and External Protective) and suicidal ideation; and impulsivity and discomfort intolerance and suicide attempts. These findings represent a preliminary step towards greater clarification of the mechanisms driving suicidal behaviour, and support the utility of basing future research on the Integrated Motivational-Volitional model of suicidal behaviour. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Jacobson, Bailey; Grant, James W A; Peres-Neto, Pedro R
2015-07-01
How individuals within a population distribute themselves across resource patches of varying quality has been an important focus of ecological theory. The ideal free distribution predicts equal fitness amongst individuals in a 1 : 1 ratio with resources, whereas resource defence theory predicts different degrees of monopolization (fitness variance) as a function of temporal and spatial resource clumping and population density. One overlooked landscape characteristic is the spatial distribution of resource patches, which alters the equitability of resource accessibility and thereby the effective number of competitors. While much work has investigated the influence of morphology on competitive ability for different resource types, less is known regarding the phenotypic characteristics conferring relative ability for a single resource type, particularly when exploitative competition predominates. Here we used young-of-the-year rainbow trout (Oncorhynchus mykiss) to test whether and how the spatial distribution of resource patches and population density interact to influence the level and variance of individual growth, as well as whether functional morphology relates to competitive ability. Feeding trials were conducted within stream channels under three spatial distributions of nine resource patches (distributed, semi-clumped and clumped) at two density levels (9 and 27 individuals). Average trial growth was greater in high-density treatments, with no effect of resource distribution. Within-trial growth variance showed opposite patterns across resource distributions: variance decreased at low population density but increased at high population density as patches became increasingly clumped, as a result of changes in the levels of interference vs. exploitative competition. Within-trial growth was related to both pre- and post-trial morphology, where competitive individuals were those with traits associated with swimming capacity and efficiency: larger heads/bodies/caudal fins and less angled pectoral fins. The different degrees of within-population growth variance at the same density level found here, as a function of spatial resource distribution, provide an explanation for the inconsistencies in within-site growth variance and population regulation often noted with regard to density dependence in natural landscapes. © 2015 The Authors. Journal of Animal Ecology © 2015 British Ecological Society.
Jozwik, Kamila M.; Kriegeskorte, Nikolaus; Mur, Marieke
2016-01-01
Object similarity, in brain representations and conscious perception, must reflect a combination of the visual appearance of the objects on the one hand and the categories the objects belong to on the other. Indeed, visual object features and category membership have each been shown to contribute to the object representation in human inferior temporal (IT) cortex, as well as to object-similarity judgments. However, the explanatory power of features and categories has not been directly compared. Here, we investigate whether the IT object representation and similarity judgments are best explained by a categorical or a feature-based model. We use rich models (>100 dimensions) generated by human observers for a set of 96 real-world object images. The categorical model consists of a hierarchically nested set of category labels (such as “human”, “mammal”, and “animal”). The feature-based model includes both object parts (such as “eye”, “tail”, and “handle”) and other descriptive features (such as “circular”, “green”, and “stubbly”). We used non-negative least squares to fit the models to the brain representations (estimated from functional magnetic resonance imaging data) and to similarity judgments. Model performance was estimated on held-out images not used in fitting. Both models explained significant variance in IT and the amounts explained were not significantly different. The combined model did not explain significant additional IT variance, suggesting that it is the shared model variance (features correlated with categories, categories correlated with features) that best explains IT. The similarity judgments were almost fully explained by the categorical model, which explained significantly more variance than the feature-based model. The combined model did not explain significant additional variance in the similarity judgments. Our findings suggest that IT uses features that help to distinguish categories as stepping stones toward a semantic representation. Similarity judgments contain additional categorical variance that is not explained by visual features, reflecting a higher-level more purely semantic representation. PMID:26493748
Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation.
Yang, Ye; Christensen, Ole F; Sorensen, Daniel
2011-02-01
Over recent years, statistical support for the presence of genetic factors operating at the level of the environmental variance has come from fitting a genetically structured heterogeneous variance model to field or experimental data in various species. Misleading results may arise due to skewness of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box-Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box-Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected by the presence of asymmetry in the distribution of data. We recommend that to avoid one important source of spurious inferences, future work seeking support for a genetic component acting on environmental variation using a parametric approach based on normality assumptions confirms that these are met.
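For readers unfamiliar with the transformation step, a minimal sketch: scipy's profile-likelihood estimate of the Box-Cox parameter λ stands in for the mode of the marginal posterior used in the paper, and the data are simulated rather than the rabbit or pig litter-size records.

```python
import numpy as np
from scipy import stats

# Simulated positive, right-skewed phenotypes (an assumption of this sketch;
# the actual litter-size data are not reproduced here)
rng = np.random.default_rng(1)
y = rng.gamma(shape=8.0, scale=1.2, size=2000)

# Profile-likelihood estimate of the Box-Cox parameter; the paper instead
# uses the mode of the marginal posterior of lambda, which this approximates
y_bc, lam = stats.boxcox(y)
print(f"lambda = {lam:.3f}")

# The genetically structured heterogeneous variance model would then be
# refitted to y_bc rather than to y itself, as the abstract describes.
```

The point of the exercise is exactly the abstract's caution: evidence for genetic control of the environmental variance can weaken, strengthen, or change sign once skewness is removed by the transformation.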
Origin and Consequences of the Relationship between Protein Mean and Variance
Vallania, Francesco Luigi Massimo; Sherman, Marc; Goodwin, Zane; Mogno, Ilaria; Cohen, Barak Alon; Mitra, Robi David
2014-01-01
Cell-to-cell variance in protein levels (noise) is a ubiquitous phenomenon that can increase fitness by generating phenotypic differences within clonal populations of cells. An important challenge is to identify the specific molecular events that control noise. This task is complicated by the strong dependence of a protein's cell-to-cell variance on its mean expression level through a power-law like relationship (σ² ∝ μ^1.69). Here, we dissect the nature of this relationship using a stochastic model parameterized with experimentally measured values. This framework naturally recapitulates the power-law like relationship (σ² ∝ μ^1.6) and accurately predicts protein variance across the yeast proteome (r² = 0.935). Using this model we identified two distinct mechanisms by which protein variance can be increased. Variables that affect promoter activation, such as nucleosome positioning, increase protein variance by changing the exponent of the power-law relationship. In contrast, variables that affect processes downstream of promoter activation, such as mRNA and protein synthesis, increase protein variance in a mean-dependent manner following the power-law. We verified our findings experimentally using an inducible gene expression system in yeast. We conclude that the power-law-like relationship between noise and protein mean is due to the kinetics of promoter activation. Our results provide a framework for understanding how molecular processes shape stochastic variation across the genome. PMID:25062021
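The power-law scaling between protein mean and variance is straightforward to estimate by log-log regression; a sketch with synthetic data, in which the exponent, intercept, and scatter are all assumptions chosen to resemble the reported scaling:

```python
import numpy as np

def noise_exponent(mean, var):
    """Fit var = a * mean**b on the log-log scale and return (a, b).
    A sketch of how a sigma^2 ∝ mu^1.69-type scaling can be estimated."""
    b, log_a = np.polyfit(np.log(mean), np.log(var), 1)
    return np.exp(log_a), b

# Synthetic proteome-like data with exponent ~1.6 plus lognormal scatter
rng = np.random.default_rng(2)
mu = 10 ** rng.uniform(0, 4, size=3000)
sigma2 = 0.5 * mu**1.6 * np.exp(rng.normal(0, 0.3, size=3000))
a, b = noise_exponent(mu, sigma2)   # b should come out near 1.6
```

Departures of individual genes above or below such a fitted line are what single out the promoter-level versus downstream mechanisms the abstract distinguishes.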
Faces and fitness: attractive evolutionary relationship or ugly hypothesis?
Smoliga, James M; Zavorsky, Gerald S
2015-11-01
In recent years, various studies have attempted to understand human evolution by examining relationships between athletic performance or physical fitness and facial attractiveness. Over a wide range of five homogeneous groups (n = 327), there is an approximate 3% shared variance between facial attractiveness and athletic performance or physical fitness (95% CI = 0.5-8%, p = 0.002). Further, studies relating human performance and attractiveness often have major methodological limitations that limit their generalizability. Thus, despite statistical significance, the association between facial attractiveness and human performance has questionable biological importance. Here, we present a critique of these studies and provide recommendations to improve the quality of future research in this realm. © 2015 The Author(s).
Fong, Youyi; Yu, Xuesong
2016-01-01
Many modern serial dilution assays are based on fluorescence intensity (FI) readouts. We study optimal transformation model choice for fitting five-parameter logistic (5PL) curves to FI-based serial dilution assay data. We first develop a generalized least squares-pseudolikelihood type algorithm for fitting heteroscedastic logistic models. Next we show that the 5PL and log 5PL functions can approximate each other well. We then compare four 5PL models with different choices of log transformation and variance modeling through a Monte Carlo study and real data. Our findings are that the optimal choice depends on the intended use of the fitted curves. PMID:27642502
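A sketch of fitting a 5PL curve to heteroscedastic FI data, with a crude 1/y weighting standing in for the generalized least squares-pseudolikelihood variance model; the functional form is the standard 5PL, but the parameter values and noise model are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def fpl5(x, a, d, c, b, g):
    """Five-parameter logistic: a = lower and d = upper asymptote,
    c = inflection scale, b = slope, g = asymmetry."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

# Simulated FI readouts over a 12-point serial dilution (values are
# illustrative assumptions, not from the paper)
x = np.geomspace(1, 1e5, 12)
rng = np.random.default_rng(3)
y_true = fpl5(x, 50, 30000, 500, 1.2, 0.8)
y = y_true * (1 + 0.05 * rng.normal(size=x.size))   # CV-type noise

# Weighted fit approximating the heteroscedastic (power-of-mean) model:
# sigma_i is roughly proportional to y_i, so weight by 1/y as a crude GLS step
popt, _ = curve_fit(fpl5, x, y, p0=[y.min(), y.max(), np.median(x), 1, 1],
                    sigma=y, absolute_sigma=False, maxfev=20000)
```

Refitting the same model to log-transformed responses, as the paper's comparison does, changes which variance model is appropriate, which is exactly why the optimal transformation depends on the intended use of the curve.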
Variance analysis of forecasted streamflow maxima in a wet temperate climate
NASA Astrophysics Data System (ADS)
Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.
2018-05-01
Coupling global climate models, hydrologic models, and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima did not depend on systematic variance from the annual maxima versus peak-over-threshold method applied, although we stress that researchers must strictly adhere to the rules of extreme value theory when applying the peak-over-threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change depended on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including increases of +30(±21), +38(±34) and +51(±85)% for 2-, 20- and 100-year streamflow events in the wet temperate region studied. The variance of the maxima projections was dominated by climate model factors and extreme value analyses.
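As a concrete illustration of the extreme-value step, the sketch below fits a GEV distribution to simulated annual maxima and reads off the 2-, 20- and 100-year return levels as quantiles of the fitted distribution; the data and parameter values are assumptions, not the study's forecasts:

```python
import numpy as np
from scipy.stats import genextreme

# Simulated annual streamflow maxima (m^3/s); in the study these would come
# from the GCM-driven hydrologic model chain described above
rng = np.random.default_rng(4)
maxima = genextreme.rvs(c=-0.1, loc=300.0, scale=80.0, size=60, random_state=rng)

# Fit the GEV; the 1-in-T-year event is the (1 - 1/T) quantile
c, loc, scale = genextreme.fit(maxima)
for T in (2, 20, 100):
    level = genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
    print(f"{T}-year return level: {level:.1f} m^3/s")
```

Refitting across an ensemble of such series is what exposes the abstract's main point: the fitting step itself adds variance, and that variance grows with the return period.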
Evidence of validity of the Stress-Producing Life Events (SPLE) instrument.
Rizzini, Marta; Santos, Alcione Miranda Dos; Silva, Antônio Augusto Moura da
2018-01-01
OBJECTIVE Evaluate the construct validity of a list of eight Stressful Life Events in pregnant women. METHODS A cross-sectional study was conducted with 1,446 pregnant women in São Luís, MA, and 1,364 pregnant women in Ribeirão Preto, SP (BRISA cohort), from February 2010 to June 2011. In the exploratory factor analysis, promax oblique rotation was used, and internal consistency was calculated using composite reliability. The construct validity was determined by means of confirmatory factor analysis with mean- and variance-adjusted weighted least squares estimation. RESULTS The model with the best fit in the exploratory analysis was the one that retained three factors with a cumulative variance of 61.1%. The one-factor model did not obtain a good fit in either sample in the confirmatory analysis. The three-factor model, called Stress-Producing Life Events, presented a good fit (RMSEA < 0.05; CFI/TLI > 0.90) for both samples. CONCLUSIONS The Stress-Producing Life Events constitute a second-order construct with three dimensions related to health, personal and financial aspects, and violence. This study found evidence that confirms the construct validity of a list of stressor events, entitled the Stress-Producing Life Events Inventory.
Measuring kinetics of complex single ion channel data using mean-variance histograms.
Patlak, J B
1993-01-01
The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state, open channel noise and fast flickers to other states were present, as were a substantial number of subconductance states. "Standard" half-amplitude threshold analysis of these data produced open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance histogram technique provided a more credible analysis of the open, closed, and subconductance times for the patch. I also show that the method produces accurate results on simulated data in a wide variety of conditions, whereas the half-amplitude method, when applied to complex simulated data, shows the same errors as were apparent in the real data. The utility and the limitations of this new method are discussed. PMID:7690261
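The construction described above is straightforward to implement; a sketch with a toy two-level trace, in which the window length, noise level, and conductance levels are assumptions of the illustration:

```python
import numpy as np

def mean_variance_histogram(trace, window, bins=100):
    """Slide a window of `window` consecutive samples along a digitized
    current trace, compute (mean, variance) for each position, and bin the
    pairs into a 2-D histogram. Defined conductance levels appear as
    low-variance regions, as described in the abstract."""
    n = trace.size - window + 1
    csum = np.cumsum(np.insert(trace, 0, 0.0))
    csum2 = np.cumsum(np.insert(trace**2, 0, 0.0))
    mean = (csum[window:] - csum[:-window]) / window
    var = (csum2[window:] - csum2[:-window]) / window - mean**2
    H, m_edges, v_edges = np.histogram2d(mean, var, bins=bins)
    return H, m_edges, v_edges

# Toy two-level "channel" trace with Gaussian noise
rng = np.random.default_rng(5)
levels = rng.choice([0.0, 1.0], size=20000, p=[0.7, 0.3])
trace = levels + 0.15 * rng.normal(size=levels.size)
H, me, ve = mean_variance_histogram(trace, window=10)
```

Repeating the construction over a range of window widths and counting events in each low-variance region, as the paper does, recovers dwell-time information without half-amplitude thresholding.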
A Novel Statistical Analysis and Interpretation of Flow Cytometry Data
2013-07-05
ing parameters of the mathematical model are determined by using least squares to fit the data in Figures 1 and 2 (see Section 4). The second method is ... variance (for all k and j). Then the AIC, which is the expected value of the relative Kullback-Leibler distance for a given model [16], is Kn = n log ( J ... emphasized that the fit of the model is quite good for both donors and cell types. As such, we proceed to analyse the dynamic responsiveness of the
Does the Assessment of Recovery Capital scale reflect a single or multiple domains?
Arndt, Stephan; Sahker, Ethan; Hedden, Suzy
2017-01-01
The goal of this study was to determine whether the 50-item Assessment of Recovery Capital scale represents a single general measure or whether multiple domains might be psychometrically useful for research or clinical applications. Data are from a cross-sectional de-identified existing program evaluation information data set with 1,138 clients entering substance use disorder treatment. Principal components and iterated factor analysis were used on the domain scores. Multiple group factor analysis provided a quasi-confirmatory factor analysis. The solution accounted for 75.24% of the total variance, suggesting that 10 factors provide a reasonably good fit. However, Tucker's congruence coefficients between the factor structure and defining weights (0.41-0.52) suggested a poor fit to the hypothesized 10-domain structure. Principal components of the 10-domain scores yielded one factor whose eigenvalue was greater than one (5.93), accounting for 75.8% of the common variance. A few domains had perceptible but small unique variance components suggesting that a few of the domains may warrant enrichment. Our findings suggest that there is one general factor, with a caveat. Using the 10 measures inflates the chance for Type I errors. Using one general measure avoids this issue, is simple to interpret, and could reduce the number of items. However, those seeking to maximally predict later recovery success may need to use the full instrument and all 10 domains.
Wyss, Thomas; Boesch, Maria; Roos, Lilian; Tschopp, Céline; Frei, Klaus M; Annen, Hubert; La Marca, Roberto
2016-12-01
Good physical fitness seems to help the individual to buffer the potential harmful impact of psychosocial stress on somatic and mental health. The aim of the present study is to investigate the role of physical fitness levels on the autonomic nervous system (ANS; i.e. heart rate and salivary alpha amylase) responses to acute psychosocial stress, while controlling for established factors influencing individual stress reactions. The Trier Social Stress Test for Groups (TSST-G) was executed with 302 male recruits during their first week of Swiss Army basic training. Heart rate was measured continuously, and salivary alpha amylase was measured twice, before and after the stress intervention. In the same week, all volunteers participated in a physical fitness test and they responded to questionnaires on lifestyle factors and personal traits. A multiple linear regression analysis was conducted to determine ANS responses to acute psychosocial stress from physical fitness test performances, controlling for personal traits, behavioural factors, and socioeconomic data. Multiple linear regression revealed three variables predicting 15 % of the variance in heart rate response (area under the individual heart rate response curve during TSST-G) and four variables predicting 12 % of the variance in salivary alpha amylase response (salivary alpha amylase level immediately after the TSST-G) to acute psychosocial stress. A strong performance at the progressive endurance run (high maximal oxygen consumption) was a significant predictor of ANS response in both models: low area under the heart rate response curve during TSST-G as well as low salivary alpha amylase level after TSST-G. Further, high muscle power, non-smoking, high extraversion, and low agreeableness were predictors of a favourable ANS response in either one of the two dependent variables. Good physical fitness, especially good aerobic endurance capacity, is an important protective factor against health-threatening reactions to acute psychosocial stress.
Mota, L F M; Martins, P G M A; Littiere, T O; Abreu, L R A; Silva, M A; Bonafé, C M
2018-04-01
The objective was to estimate (co)variance functions using random regression models (RRM) with Legendre polynomials or B-spline functions, and multi-trait models, aimed at evaluating genetic parameters of growth traits in meat-type quail. A database containing the complete pedigree information of 7000 meat-type quail was utilized. The models included the fixed effects of contemporary group and generation. Direct additive genetic and permanent environmental effects, considered random, were modeled using B-spline functions, considering quadratic and cubic polynomials for each individual segment with 2 to 4 segments, or using Legendre polynomials for age with orders of fit ranging from 2 to 4. Residual variances were grouped in four age classes. The model with quadratic B-spline adjustment, using four segments for direct additive genetic and permanent environmental effects, was the most appropriate and parsimonious to describe the covariance structure of the data. The RRM using Legendre polynomials underestimated the residual variance. Lower heritability estimates were observed for multi-trait models in comparison with RRM for the evaluated ages. In general, the genetic correlations between measures of BW from hatching to 35 days of age decreased as the range between the evaluated ages increased. The genetic trend for BW was positive and significant along the selection generations. The genetic response to selection for BW at the evaluated ages presented greater values for RRM compared with multi-trait models. In summary, RRM using B-spline functions with four residual variance classes and four segments were the best fit for genetic evaluation of growth traits in meat-type quail. In conclusion, RRM should be considered in the genetic evaluation of breeding programs.
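To illustrate how a random regression model turns a small coefficient covariance matrix into age-specific (co)variances, here is a sketch using Legendre polynomials on standardized age, G(t, t′) = φ(t) K φ(t′)′; the coefficient matrix K below is a hypothetical placeholder, not an estimate from the quail data:

```python
import numpy as np
from numpy.polynomial import legendre

def rr_covariance(ages, K, order):
    """Covariance function of a random regression model: G = Phi K Phi',
    where column j of Phi is the Legendre polynomial P_j evaluated at ages
    standardized to [-1, 1]."""
    a_min, a_max = ages.min(), ages.max()
    t = 2.0 * (ages - a_min) / (a_max - a_min) - 1.0
    Phi = np.column_stack([legendre.legval(t, np.eye(order)[j])
                           for j in range(order)])
    return Phi @ K @ Phi.T

# Hypothetical 3x3 coefficient (co)variance matrix (illustration only)
K = np.array([[2.0, 0.5, 0.1],
              [0.5, 1.0, 0.2],
              [0.1, 0.2, 0.3]])
ages = np.array([0, 7, 14, 21, 28, 35], dtype=float)  # days, hatch to 35 d
G = rr_covariance(ages, K, order=3)   # genetic (co)variances between ages
```

A B-spline RRM works the same way, with the Legendre design columns replaced by spline basis functions over the chosen segments, which is the comparison the study carries out.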
Olivoto, T; Nardino, M; Carvalho, I R; Follmann, D N; Ferrari, M; Szareski, V J; de Pelegrin, A J; de Souza, V Q
2017-03-22
Methodologies using restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) in combination with sequential path analysis in maize are still limited in the literature. Therefore, the aims of this study were: i) to use REML/BLUP-based procedures to estimate variance components, genetic parameters, and genotypic values of simple maize hybrids, and ii) to fit stepwise regressions considering genotypic values to form a path diagram with multi-order predictors and minimum multicollinearity that explains the relationships of cause and effect among grain yield-related traits. Fifteen commercial simple maize hybrids were evaluated in multi-environment trials in a randomized complete block design with four replications. The environmental variance (78.80%) and genotype-by-environment variance (20.83%) accounted for more than 99% of the phenotypic variance of grain yield, which hampers direct selection for this trait by breeders. The sequential path analysis model allowed the selection of traits with high explanatory power and minimum multicollinearity, resulting in models with elevated fit (R² > 0.9 and ε < 0.3). The number of kernels per ear (NKE) and thousand-kernel weight (TKW) are the traits with the largest direct effects on grain yield (r = 0.66 and 0.73, respectively). The high accuracy of selection (0.86 and 0.89) associated with the high heritability of the mean (0.732 and 0.794) for NKE and TKW, respectively, indicated good reliability and prospects of success in the indirect selection of hybrids with high yield potential through these traits. The negative direct effect of NKE on TKW (r = -0.856), however, must be considered. The joint use of mixed models and sequential path analysis is effective in the evaluation of maize-breeding trials.
The following SAS macros can be used to create a multivariate usual intake distribution for multiple dietary components that are consumed nearly every day or episodically. A SAS macro for performing balanced repeated replication (BRR) variance estimation is also included.
The Ghost of Electricity: A History of Electron Theory from 1897 to 1987.
ERIC Educational Resources Information Center
Adams, S. F.
1988-01-01
Discusses the history of electron theory from 1897 to 1987. Includes the works of some physicists, such as Thomson, Lorentz, De Broglie, Bohr, Pauli, Dirac, Feynman, Wheeler, Weinberg, and Salam. (YP)
Energies of Screened Coulomb Potentials.
ERIC Educational Resources Information Center
Lai, C. S.
1979-01-01
This article shows that, by applying the Hellmann-Feynman theorem alone to screened Coulomb potentials, the first four coefficients in the energy series in powers of the perturbation parameter can be obtained from the unperturbed Coulomb system. (Author/HM)
NASA Astrophysics Data System (ADS)
Rosen, Charles; Siegel, Edward Carl-Ludwig; Feynman, Richard; Wunderman, Irwin; Smith, Adolph; Marinov, Vesco; Goldman, Jacob; Brine, Sergey; Poge, Larry; Schmidt, Erich; Young, Frederic; Goates-Bulmer, William-Steven; Lewis-Tsurakov-Altshuler, Thomas-Valerie-Genot; Ibm/Exxon Collaboration; Google/Uw Collaboration; Microsoft/Amazon Collaboration; Oracle/Sun Collaboration; Ostp/Dod/Dia/Nsa/W.-F./Boa/Ubs/Ub Collaboration
2013-03-01
Belew[Finding Out About, Cambridge(2000)] and separately full-decade pre-Page/Brin/Google FIRST Siegel-Rosen(Machine-Intelligence/Atherton)-Feynman-Smith-Marinov(Guzik Enterprises/Exxon-Enterprises/A.-I./Santa Clara)-Wunderman(H.-P.) [IBM Conf. on Computers and Mathematics, Stanford(1986); APS Mtgs.(1980s): Palo Alto/Santa Clara/San Francisco/...(1980s) MRS Spring-Mtgs.(1980s): Palo Alto/San Jose/San Francisco/...(1980-1992) FIRST quantum-computing via Bose-Einstein quantum-statistics(BEQS) Bose-Einstein CONDENSATION (BEC) in artificial-intelligence(A-I) artificial neural-networks(A-N-N) and biological neural-networks(B-N-N) and Siegel[J. Noncrystalline-Solids 40, 453(1980); Symp. on Fractals..., MRS Fall-Mtg., Boston(1989)-5-papers; Symp. on Scaling..., (1990); Symp. on Transport in Geometric-Constraint (1990)
From Feynman rules to conserved quantum numbers, I
NASA Astrophysics Data System (ADS)
Nogueira, P.
2017-05-01
In the context of Quantum Field Theory (QFT) there is often the need to find sets of graph-like diagrams (the so-called Feynman diagrams) for a given physical model. A negative answer to the related question 'Are there any diagrams with this set of external fields?' may settle certain physical questions at once. Here the latter problem is formulated in terms of a system of linear diophantine equations derived from the Lagrangian density, from which necessary conditions for the existence of the required diagrams may be obtained. Those conditions are equalities that look like either linear diophantine equations or linear modular (i.e. congruence) equations, and may be found by means of fairly simple algorithms that involve integer computations. The diophantine equations so obtained represent (particle) number conservation rules, and are related to the conserved (additive) quantum numbers that may be assigned to the fields of the model.
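The flavor of the method can be shown on a toy QED-like model: conserved additive quantum numbers are integer vectors annihilated by every vertex constraint (and by charge conjugation), and a diagram with external content x can exist only if every such vector is orthogonal to x. The sketch below is illustrative and is not the paper's algorithm.

```python
# Field order: (photon, electron, positron), all-incoming convention.
import sympy as sp

constraints = sp.Matrix([
    [1, 1, 1],   # e+ e- gamma vertex: assigned charges must sum to zero
    [2, 0, 0],   # photon is self-conjugate: 2*q_gamma = 0
    [0, 1, 1],   # e-/e+ are conjugates: q_e- + q_e+ = 0
])
rules = constraints.nullspace()        # here: the span of (0, 1, -1)

def diagrams_possible(external):
    """external: counts of external fields; True if no conservation rule
    forbids a diagram (a necessary, not sufficient, condition)."""
    x = sp.Matrix(external)
    return all((q.T * x)[0] == 0 for q in rules)

print(diagrams_possible([2, 1, 1]))    # gamma gamma e+ e-  -> True
print(diagrams_possible([1, 1, 0]))    # photon + lone e-   -> False
```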
Tchouar, N; Ould-Kaddour, F; Levesque, D
2004-10-15
The properties of liquid methane, liquid neon, and gaseous helium are calculated at low temperatures over a large range of pressures from classical molecular-dynamics simulations. The molecular interactions are represented by Lennard-Jones pair potentials supplemented by quantum corrections following the Feynman-Hibbs approach. The equations of state and the diffusion and shear viscosity coefficients are determined for neon at 45 K, helium at 80 K, and methane at 110 K. A comparison is made with existing experimental data and, for thermodynamic quantities, with results computed from quantum numerical simulations where available. The theoretical variation of the viscosity coefficient with pressure is in good agreement with the experimental data when the quantum corrections are taken into account, which considerably reduces the 60% discrepancy between simulations and experiments observed in the absence of these corrections.
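A minimal sketch of the quadratic Feynman-Hibbs effective pair potential applied to a Lennard-Jones interaction follows; the neon-like parameters are rough illustrative values, not those used in the paper.

```python
# Quadratic Feynman-Hibbs correction:
#   V_FH(r) = V(r) + hbar^2 / (24 * mu * kB * T) * (V''(r) + 2 V'(r)/r),
# with mu the reduced mass of the pair.
import numpy as np

HBAR = 1.054571817e-34   # J s
KB   = 1.380649e-23      # J/K

def lj(r, eps, sig):
    sr6 = (sig / r) ** 6
    v   = 4 * eps * (sr6 ** 2 - sr6)
    dv  = 4 * eps * (-12 * sr6 ** 2 + 6 * sr6) / r        # V'(r)
    d2v = 4 * eps * (156 * sr6 ** 2 - 42 * sr6) / r ** 2  # V''(r)
    return v, dv, d2v

def feynman_hibbs(r, eps, sig, mu, T):
    v, dv, d2v = lj(r, eps, sig)
    return v + HBAR ** 2 / (24 * mu * KB * T) * (d2v + 2 * dv / r)

# Neon-like pair at 45 K (illustrative parameters)
r = np.linspace(2.5e-10, 8e-10, 200)          # m
mu = 0.5 * 20.18 * 1.6605e-27                 # reduced mass of a Ne pair, kg
v_q = feynman_hibbs(r, eps=36.8 * KB, sig=2.79e-10, mu=mu, T=45.0)
print(v_q.min() / KB, "K")                    # corrected well depth
```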
Sv-map between type I and heterotic sigma models
NASA Astrophysics Data System (ADS)
Fan, Wei; Fotopoulos, A.; Stieberger, S.; Taylor, T. R.
2018-05-01
The scattering amplitudes of gauge bosons in heterotic and open superstring theories are related by the single-valued projection which yields heterotic amplitudes by selecting a subset of multiple zeta value coefficients in the α′ (string tension parameter) expansion of open string amplitudes. In the present work, we argue that this relation holds also at the level of low-energy expansions (or individual Feynman diagrams) of the respective effective actions, by investigating the beta functions of two-dimensional sigma models describing world-sheets of open and heterotic strings. We analyze the sigma model Feynman diagrams generating identical effective action terms in both theories and show that the heterotic coefficients are given by the single-valued projection of the open ones. The single-valued projection appears as a result of summing over all radial orderings of heterotic vertices on the complex plane representing string world-sheet.
Interference with electrons: from thought to real experiments
NASA Astrophysics Data System (ADS)
Matteucci, Giorgio
2013-11-01
The two-slit interference experiment is usually adopted to discuss the superposition principle applied to radiation and to show the peculiar wave behaviour of material particles. Diffraction and interference of electrons have been demonstrated using, as interferometric devices, a hole, a slit, a double hole, two slits, an electrostatic biprism, etc. A number of books, short movies and lectures on the web try to popularize the mysterious behaviour of electrons on the basis of Feynman's thought experiment, which consists of a Young two-hole interferometer equipped with a detector to reveal single electrons. A short review is presented of (i) the pioneering attempts to demonstrate that interference patterns can be built up from single electrons passing through an interferometer and (ii) recent experiments that can be considered realizations of the thought experiments used by Einstein and Bohr, and subsequently by Feynman, to discuss key features of quantum physics.
Higher-order gravitational lensing reconstruction using Feynman diagrams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkins, Elizabeth E.; Manohar, Aneesh V.; Yadav, Amit P.S.
2014-09-01
We develop a method for calculating the correlation structure of the Cosmic Microwave Background (CMB) using Feynman diagrams, when the CMB has been modified by gravitational lensing, Faraday rotation, patchy reionization, or other distorting effects. This method is used to calculate the bias of the Hu-Okamoto quadratic estimator in reconstructing the lensing power spectrum up to O(φ⁴) in the lensing potential φ. We consider both the diagonal noise TT TT, EB EB, etc. and, for the first time, the off-diagonal noise TT TE, TB EB, etc. The previously noted large O(φ⁴) term in the second-order noise is identified to come from a particular class of diagrams. It can be significantly reduced by a reorganization of the φ expansion. These improved estimators have almost no bias for the off-diagonal case involving only one B component of the CMB, such as EE EB.
Quantization of gauge fields, graph polynomials and graph homology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kreimer, Dirk, E-mail: kreimer@physik.hu-berlin.de; Sars, Matthias; Suijlekom, Walter D. van
2013-09-15
We review quantization of gauge fields using algebraic properties of 3-regular graphs. We derive the Feynman integrand at n loops for a non-abelian gauge theory quantized in a covariant gauge from scalar integrands for connected 3-regular graphs, obtained from the two Symanzik polynomials. The transition to the full gauge theory amplitude is obtained by the use of a third, new, graph polynomial, the corolla polynomial. This implies effectively a covariant quantization without ghosts, where all the relevant signs of the ghost sector are incorporated in a double complex furnished by the corolla polynomial (we call it cycle homology) and by graph homology. Highlights: • We derive gauge theory Feynman integrands from scalar field theory with 3-valent vertices. • We clarify the role of graph homology and cycle homology. • We use parametric renormalization and the new corolla polynomial.
NASA Astrophysics Data System (ADS)
Ma, Chao; Ma, Qinghua; Yao, Haixiang; Hou, Tiancheng
2018-03-01
In this paper, we propose to use the Fractional Stable Process (FSP) for option pricing. The FSP is one of the few candidates to directly model a number of desired empirical properties of asset price risk-neutral dynamics. However, pricing the vanilla European option under FSP is difficult and problematic. Building upon Feynman-path-integral-inspired techniques, we present a novel computational model for option pricing, the Fractional Stable Process Path Integral (FSPPI) model, under a general fractional stable distribution that tackles this problem. Numerical and empirical experiments show that the proposed pricing model corrects the Black-Scholes pricing error (overpricing long-term options, underpricing short-term options; overpricing out-of-the-money options, underpricing in-the-money options) without any additional structures such as stochastic volatility or a jump process.
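For orientation, the sketch below prices a vanilla European call by Monte Carlo over discretized log-return paths. It is the Gaussian (Black-Scholes) baseline, not the FSPPI model itself; the paper's modification effectively replaces the Gaussian increments with fractional-stable ones.

```python
import numpy as np

def mc_call_price(s0, k, r, sigma, t, n_steps=64, n_paths=200_000, seed=0):
    """Discretized-path ("sum over paths") Monte Carlo price of a call."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    z = rng.standard_normal((n_paths, n_steps))       # Gaussian increments
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1)
    s_t = s0 * np.exp(log_paths[:, -1])               # terminal prices
    payoff = np.maximum(s_t - k, 0.0)
    return np.exp(-r * t) * payoff.mean()             # discounted expectation

print(mc_call_price(100.0, 100.0, 0.05, 0.2, 1.0))    # ~10.45 (BS value)
```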
The electromigration force in metallic bulk
NASA Astrophysics Data System (ADS)
Lodder, A.; Dekker, J. P.
1998-01-01
The voltage induced driving force on a migrating atom in a metallic system is discussed in the perspective of the Hellmann-Feynman force concept, local screening concepts and the linear-response approach. Since the force operator is well defined in quantum mechanics it appears to be only confusing to refer to the Hellmann-Feynman theorem in the context of electromigration. Local screening concepts are shown to be mainly of historical value. The physics involved is completely represented in ab initio local density treatments of dilute alloys and the implementation does not require additional precautions about screening, being typical for jellium treatments. The linear-response approach is shown to be a reliable guide in deciding about the two contributions to the driving force, the direct force and the wind force. Results are given for the wind valence for electromigration in a number of FCC and BCC metals, calculated using an ab initio KKR-Green's function description of a dilute alloy.
Speech perception and quality of life of open-fit hearing aid users
GARCIA, Tatiana Manfrini; JACOB, Regina Tangerino de Souza; MONDELLI, Maria Fernanda Capoani Garcia
2016-01-01
Objective: To relate the speech perception performance of individuals with high-frequency hearing loss to their quality of life before and after the fitting of an open-fit hearing aid (HA). Methods: The WHOQOL-BREF was administered before the fitting and after 90 days of HA use. The Hearing in Noise Test (HINT) was conducted in two phases: (1) at the time of fitting, without an HA (situation A) and with an HA (situation B); (2) with an HA 90 days after fitting (situation C). Study Sample: Thirty subjects with high-frequency sensorineural hearing loss. Results: An analysis of variance with Tukey's test comparing the three HINT situations in quiet and noisy environments showed an improvement after the HA fitting. The WHOQOL-BREF results showed an improvement in quality of life after the HA fitting (paired t-test). Pearson's correlation analysis indicated a significant relationship between speech recognition in noisy environments and the social relations domain after the HA fitting. Conclusions: Auditory stimulation improved both speech perception and the quality of life of these individuals. PMID:27383708
CMB-S4 and the hemispherical variance anomaly
NASA Astrophysics Data System (ADS)
O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.
2017-09-01
Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke more unlikely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited; however, full northern coverage is still preferable.
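The statistic itself is straightforward bookkeeping. A toy version on a latitude-longitude grid is sketched below; real analyses use HEALPix maps in Ecliptic coordinates, so the grid, weighting, and mock data here are purely illustrative.

```python
import numpy as np

def hemispherical_variances(temp_map, lat):
    """temp_map: (nlat, nlon) temperature map; lat: latitudes in degrees.
    Returns area-weighted variances in the northern/southern hemispheres."""
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(temp_map)
    def wvar(mask):
        m, ww = temp_map[mask, :], w[mask, :]
        mean = np.average(m, weights=ww)
        return np.average((m - mean) ** 2, weights=ww)
    return wvar(lat > 0), wvar(lat < 0)

lat = np.linspace(-89.5, 89.5, 180)
mock = 100.0 * np.random.default_rng(1).standard_normal((180, 360))  # μK
print(hemispherical_variances(mock, lat))
```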
Experience versus talent shapes the structure of the Web
Kong, Joseph S.; Sarshar, Nima; Roychowdhury, Vwani P.
2008-01-01
We use sequential large-scale crawl data to empirically investigate and validate the dynamics that underlie the evolution of the structure of the web. We find that the overall structure of the web is defined by an intricate interplay between experience or entitlement of the pages (as measured by the number of inbound hyperlinks a page already has), inherent talent or fitness of the pages (as measured by the likelihood that someone visiting the page would give a hyperlink to it), and the continual high rates of birth and death of pages on the web. We find that the web is conservative in judging talent and the overall fitness distribution is exponential, showing low variability. The small variance in talent, however, is enough to lead to experience distributions with high variance: The preferential attachment mechanism amplifies these small biases and leads to heavy-tailed power-law (PL) inbound degree distributions over all pages, as well as over pages that are of the same age. The balancing act between experience and talent on the web allows newly introduced pages with novel and interesting content to grow quickly and surpass older pages. In this regard, it is much like what we observe in high-mobility and meritocratic societies: People with entitlement continue to have access to the best resources, but there is just enough screening for fitness that allows for talented winners to emerge and join the ranks of the leaders. Finally, we show that the fitness estimates have potential practical applications in ranking query results. PMID:18779560
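The qualitative interplay described here is easy to reproduce: light-tailed (exponential) fitness combined with preferential attachment yields heavy-tailed in-degrees. The sketch below is a generic Bianconi-Barabási-style simulation, not the paper's crawl analysis, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n_pages, m_links = 20_000, 3
fitness = rng.exponential(1.0, n_pages)   # low-variability "talent"
indeg = np.zeros(n_pages)

for new in range(m_links, n_pages):
    # attachment probability ~ (experience + 1) * talent
    attract = (indeg[:new] + 1.0) * fitness[:new]
    targets = rng.choice(new, size=m_links, replace=False,
                         p=attract / attract.sum())
    indeg[targets] += 1

# Heavy-tailed in-degree despite light-tailed fitness:
print(np.percentile(indeg, [50, 90, 99, 99.9]))
```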
Association of Quality Physical Education Teaching with Students’ Physical Fitness
Chen, Weiyun; Mason, Steve; Hypnar, Andrew; Hammond-Bennett, Austin
2016-01-01
This study examined the extent to which four essential dimensions of quality physical education teaching (QPET) were associated with healthy levels of physical fitness in elementary school students. Participants were nine elementary PE teachers and 1,201 fourth- and fifth-grade students enrolled in nine elementary schools. The students' physical fitness was assessed using four FITNESSGRAM tests. The PE teachers' levels of QPET were assessed using the Assessing Quality Teaching Rubrics (AQTR), which consisted of four essential dimensions: Task Design, Task Presentation, Class Management, and Instructional Guidance. Codes were confirmed through inter-rater reliability (82.4% and 84.5%). Data were analyzed through descriptive statistics, multiple R-squared regression models, and independent-sample t-tests. The four essential teaching dimensions of QPET were significantly associated with the students' cardiovascular endurance, muscular strength and endurance, and flexibility. However, they accounted for a relatively low percentage of the total variance in the PACER test, followed by the Curl-up test, and explained very low portions of the total variance in the Push-up and Trunk Lift tests. This study indicated that students who had experienced high levels of QPET were more physically fit than their peers in the PACER and Curl-up tests, but not in the Push-up and Trunk Lift tests. In addition, the significant contribution of the four essential teaching dimensions to physical fitness components was gender-specific. It was concluded that the four teaching dimensions of QPET were significantly associated with students' health-enhancing physical fitness. Key points: Although each of Task Design, Task Presentation, Class Management, and Instructional Guidance has its own unique and critical teaching components, the essential teaching dimensions are intertwined and immersed in teaching practice. All four essential teaching dimensions contributed significantly to students' health-enhancing physical fitness. Implementation of QPET in a lesson plays a more significant role in improving girls' cardiovascular endurance, and contributed significantly to improving boys' abdominal, upper-body, and back extensor muscular strength and endurance, as well as flexibility. PMID:27274673
The relationship between sustained attention and aerobic fitness in a group of young adults.
Ciria, Luis F; Perakakis, Pandelis; Luque-Casado, Antonio; Morato, Cristina; Sanabria, Daniel
2017-01-01
A growing set of studies has shown a positive relationship between aerobic fitness and a broad array of cognitive functions. However, few studies have focused on sustained attention, which has been considered a fundamental cognitive process that underlies most everyday activities. The purpose of this study was to investigate the role of aerobic fitness as a key factor in sustained attention capacities in young adults. Forty-four young adults (18-23 years) were divided into two groups as a function of the level of aerobic fitness (high-fit and low-fit). Participants completed the Psychomotor Vigilance Task (PVT) and an oddball task where they had to detect infrequent targets presented among frequent non-targets. The analysis of variance (ANOVA) showed faster responses for the high-fit group than for the low-fit group in the PVT, replicating previous accounts. In the oddball task, the high-fit group maintained their accuracy (ACC) rate of target detection over time, while the low-fit group suffered a significant decline of response ACC throughout the task. Importantly, the results show that the greater sustained attention capacity of high-fit young adults is not specific to a reaction time (RT) sustained attention task like the PVT, but it is also evident in an ACC oddball task. In sum, the present findings point to the important role of aerobic fitness on sustained attention capacities in young adults.
Spin foam models for quantum gravity
NASA Astrophysics Data System (ADS)
Perez, Alejandro
The definition of a quantum theory of gravity is explored following Feynman's path-integral approach. The aim is to construct a well-defined version of the Wheeler-Misner-Hawking "sum over four geometries" formulation of quantum general relativity (GR). This is done by means of exploiting the similarities between the formulation of GR in terms of tetrad-connection variables (Palatini formulation) and a simpler theory called BF theory. One can go from BF theory to GR by imposing certain constraints on the BF-theory configurations. BF theory contains only global degrees of freedom (topological theory) and it can be exactly quantized à la Feynman by introducing a discretization of the manifold. Using the path integral for BF theory we define a path integration for GR by imposing the BF-to-GR constraints on the BF measure. The infinite degrees of freedom of gravity are restored in the process, and the restriction to a single discretization introduces a cut-off in the summed-over configurations. In order to capture all the degrees of freedom a sum over discretizations is implemented. Both the implementation of the BF-to-GR constraints and the sum over discretizations are obtained by means of the introduction of an auxiliary field theory (AFT). 4-geometries in the path integral for GR are given by the Feynman diagrams of the AFT, which is in this sense dual to GR. Feynman diagrams correspond to 2-complexes labeled by unitary irreducible representations of the internal gauge group (corresponding to tetrad rotations in the connection formulation of GR). A model for 4-dimensional Euclidean quantum gravity (QG) is defined which corresponds to a different normalization of the Barrett-Crane model. The model is perturbatively finite; divergences appearing in the Barrett-Crane model are cured by the new normalization. We extend our techniques to the Lorentzian sector, where we define two models for four-dimensional QG. The first one contains only time-like representations and is shown to be perturbatively finite. The second model contains both time-like and space-like representations. The spectrum of geometrical operators coincides with the prediction of the canonical approach of loop QG. At the moment, the convergence properties of the model are less understood and remain for future investigation.
Predicting job satisfaction: a new perspective on person-environment fit.
Hardin, Erin E; Donaldson, James R
2014-10-01
There may be 2 ways to look at person-environment (P-E) fit: the extent to which the environment matches the person (which, in the case of person-job [P-J] fit, we term ideal-job actualization) and the extent to which the person matches the environment (which we term actual-job regard; cf. Hardin & Larsen, 2014). Adults employed full time in the United States (n = 251; 49.8% women) completed an online survey that included measures assessing these 2 perspectives on P-J fit, along with measures of job and life satisfaction. Ideal-job actualization and actual-job regard were empirically and conceptually distinct, each accounting for unique variance in overall job satisfaction, even after controlling for overall life satisfaction and remuneration. Looking at fit from these 2 frames of reference may give a more complete perspective that accounts for critical outcomes, like satisfaction, as well as suggest novel approaches to career counseling. PsycINFO Database Record (c) 2014 APA, all rights reserved.
How did the swiss cheese plant get its holes?
Muir, Christopher D
2013-02-01
Adult leaf fenestration in "Swiss cheese" plants (Monstera Adans.) is an unusual leaf shape trait lacking a convincing evolutionary explanation. Monstera are secondary hemiepiphytes that inhabit the understory of tropical rainforests, where photosynthesis from sunflecks often makes up a large proportion of daily carbon assimilation. Here I present a simple model of leaf-level photosynthesis and whole-plant canopy dynamics in a stochastic light environment. The model demonstrates that leaf fenestration can reduce the variance in plant growth and thereby increase geometric mean fitness. This growth-variance hypothesis also suggests explanations for conspicuous ontogenetic changes in leaf morphology (heteroblasty) in Monstera, as well as the absence of leaf fenestration in co-occurring juvenile tree species. The model provides a testable hypothesis of the adaptive significance of a unique leaf shape and illustrates how variance in growth rate could be an important factor shaping plant morphology and physiology.
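The crux of the growth-variance hypothesis is that, for equal arithmetic mean growth, lower variance yields a higher geometric mean (long-run) fitness. A numeric illustration with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 100_000
# Equal mean annual growth factor (1.05), different variance; think
# "fenestrated" vs "entire" leaves under sunfleck-driven light supply.
low_var  = np.clip(1.05 + 0.10 * rng.standard_normal(n_years), 0.01, None)
high_var = np.clip(1.05 + 0.30 * rng.standard_normal(n_years), 0.01, None)

gm = lambda g: np.exp(np.mean(np.log(g)))   # geometric mean growth factor
print(gm(low_var), gm(high_var))            # low variance wins in the long run
```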
Multilevel models for estimating incremental net benefits in multinational studies.
Grieve, Richard; Nixon, Richard; Thompson, Simon G; Cairns, John
2007-08-01
Multilevel models (MLMs) have been recommended for estimating incremental net benefits (INBs) in multicentre cost-effectiveness analysis (CEA). However, these models have assumed that the INBs are exchangeable and that there is a common variance across all centres. This paper examines the plausibility of these assumptions by comparing various MLMs for estimating the mean INB in a multinational CEA. The results showed that the MLMs that assumed the INBs were exchangeable and had a common variance led to incorrect inferences. The MLMs that included covariates to allow for systematic differences across the centres, and estimated different variances in each centre, made more plausible assumptions, fitted the data better and led to more appropriate inferences. We conclude that the validity of assumptions underlying MLMs used in CEA need to be critically evaluated before reliable conclusions can be drawn. Copyright 2006 John Wiley & Sons, Ltd.
Testing Small Variance Priors Using Prior-Posterior Predictive p Values.
Hoijtink, Herbert; van de Schoot, Rens
2017-04-03
Muthén and Asparouhov (2012) propose to evaluate model fit in structural equation models based on approximate (using small variance priors) instead of exact equality of (combinations of) parameters to zero. This is an important development that adequately addresses Cohen's (1994) The Earth is Round (p < .05), which stresses that point null-hypotheses are so precise that small and irrelevant differences from the null-hypothesis may lead to their rejection. It is tempting to evaluate small variance priors using readily available approaches like the posterior predictive p value and the DIC. However, as will be shown, both are not suited for the evaluation of models based on small variance priors. In this article, a well behaving alternative, the prior-posterior predictive p value, will be introduced. It will be shown that it is consistent, the distributions under the null and alternative hypotheses will be elaborated, and it will be applied to testing whether the difference between 2 means and the size of a correlation are relevantly different from zero. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Aad, G.; Abbott, B.; Abdallah, J.; ...
2015-02-10
We measure the transverse polarization of Λ and Λ¯ hyperons produced in proton-proton collisions at a center-of mass energy of 7 TeV is measured. The analysis uses 760 μb ₋1 of minimum bias data collected by the ATLAS detector at the LHC in the year 2010. The measured transverse polarization averaged over Feynman x F from 5 × 10 ₋5 to 0.01 and transverse momentum p T from 0.8 to 15 GeV is ₋0.010 ± 0.005(stat) ± 0.004(syst) for Λ and 0.002 ± 0.006(stat) ± 0.004(syst) for Λ¯ . It is also measured as a function of x F andmore » p T, but we observed no significant dependence on these variables. Prior to this measurement, the polarization was measured at fixed-target experiments with center-of-mass energies up to about 40 GeV. The ATLAS results are compatible with the extrapolation of a fit from previous measurements to the x F range covered by this measurement.« less
Right-sizing statistical models for longitudinal data.
Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M
2015-12-01
Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study. (c) 2015 APA, all rights reserved).
Psychometric analysis of the Brisbane Practice Environment Measure (B-PEM).
Flint, Anndrea; Farrugia, Charles; Courtney, Mary; Webster, Joan
2010-03-01
To undertake rigorous psychometric testing of the newly developed contemporary work environment measure (the Brisbane Practice Environment Measure [B-PEM]) using exploratory factor analysis and confirmatory factor analysis. Content validity of the 33-item measure was established by a panel of experts. Initial testing involved 195 nursing staff using principal component factor analysis with varimax rotation (orthogonal) and Cronbach's alpha coefficients. Confirmatory factor analysis was conducted using data from a further 983 nursing staff. Principal component factor analysis yielded a four-factor solution with eigenvalues greater than 1 that explained 52.53% of the variance. These factors were then verified using confirmatory factor analysis. Goodness-of-fit indices showed an acceptable fit overall with the full model, explaining 21% to 73% of the variance. Deletion of items took place throughout the evolution of the instrument, resulting in a 26-item, four-factor measure called the Brisbane Practice Environment Measure-Tested. The B-PEM has undergone rigorous psychometric testing, providing evidence of internal consistency and goodness-of-fit indices within acceptable ranges. The measure can be utilised as a subscale or total score reflective of a contemporary nursing work environment. An up-to-date instrument to measure practice environment may be useful for nursing leaders to monitor the workplace and to assist in identifying areas for improvement, facilitating greater job satisfaction and retention.
Zullig, Keith J; Collins, Rani; Ghani, Nadia; Patton, Jon M; Scott Huebner, E; Ajamie, Jean
2014-02-01
The School Climate Measure (SCM) was developed and validated in 2010 in response to a dearth of psychometrically sound school climate instruments. This study sought to further validate the SCM on a large, diverse sample of Arizona public school adolescents (N = 20,953). Four SCM domains (positive student-teacher relationships, academic support, order and discipline, and physical environment) were available for the analysis. Confirmatory factor analysis and structural equation modeling were established to construct validity, and criterion-related validity was assessed via selected Youth Risk Behavior Survey (YRBS) school safety items and self-reported grade (GPA) point average. Analyses confirmed the 4 SCM school climate domains explained approximately 63% of the variance (factor loading range .45-.92). Structural equation models fit the data well χ(2) = 14,325 (df = 293, p < .001), comparative fit index (CFI) = .951, Tuker-Lewis index (TLI) = .952, root mean square error of approximation (RMSEA) = .05). The goodness-of-fit index was .940. Coefficient alphas ranged from .82 to .93. Analyses of variance with post hoc comparisons suggested the SCM domains related in hypothesized directions with the school safety items and GPA. Additional evidence supports the validity and reliability of the SCM. Measures, such as the SCM, can facilitate data-driven decisions and may be incorporated into evidenced-based processes designed to improve student outcomes. © 2014, American School Health Association.
Physical activity among adults with obesity: testing the Health Action Process Approach.
Parschau, Linda; Barz, Milena; Richert, Jana; Knoll, Nina; Lippke, Sonia; Schwarzer, Ralf
2014-02-01
This study tested the applicability of the Health Action Process Approach (HAPA) in a sample of obese adults in the context of physical activity. Physical activity was assessed along with motivational and volitional variables specified in the HAPA (motivational self-efficacy, outcome expectancies, risk perception, intention, maintenance self-efficacy, action planning, coping planning, recovery self-efficacy, social support) in a sample of 484 obese men and women (body mass index ≥ 30 kg/m2). Applying structural equation modeling, the fit of the HAPA model was satisfactory-χ²(191) = 569.93, p < .05, χ²/df = 2.98, comparative fit index = .91, normed-fit index = .87, and root mean square error of approximation = .06 (90% CI = .06, .07)-explaining 30% of the variance in intention and 18% of the variance in physical activity. Motivational self-efficacy, outcome expectancies, and social support were related to intention. An association between maintenance self-efficacy and coping planning was found. Recovery self-efficacy and social support were associated with physical activity. No relationships were found between risk perception and intention and between planning and physical activity. The assumptions derived from the HAPA were partly confirmed and the HAPA may, therefore, constitute a theoretical backdrop for intervention designs to promote physical activity in adults with obesity. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Richard, Antoine; Dionne, Mélanie; Wang, Jinliang; Bernatchez, Louis
2013-01-01
In this study, we documented the breeding system of a wild population of Atlantic salmon (Salmo salar L.) by genetically sampling every returning adult and assessed the determinants of individual fitness. We then quantified the impacts of catch and release (C&R) on mating and reproductive success. Both sexes showed high variance in individual reproductive success, and the estimated standardized variance was higher for males (2.86) than for females (0.73). We found a weak positive relationship between body size and fitness and observed that fitness was positively correlated with the number of mates, especially in males. Mature male parr sired 44% of the analysed offspring. The impact of C&R on the number of offspring was size dependent, as the reproductive success of larger fish was more impaired than smaller ones. Also, there was an interactive negative effect of water temperature and air exposure time on reproductive success of C&R salmon. This study improves our understanding of the complex reproductive biology of the Atlantic salmon and is the first to investigate the impact of C&R on reproductive success. Our study expands the management toolbox of appropriate C&R practices that promote conservation of salmon populations and limit negative impacts on mating and reproductive success. © 2012 Blackwell Publishing Ltd.
Applying Statistics in the Undergraduate Chemistry Laboratory: Experiments with Food Dyes.
ERIC Educational Resources Information Center
Thomasson, Kathryn; Lofthus-Merschman, Sheila; Humbert, Michelle; Kulevsky, Norman
1998-01-01
Describes several experiments to teach different aspects of the statistical analysis of data using household substances and a simple analysis technique. Each experiment can be performed in three hours. Students learn about treatment of spurious data, application of a pooled variance, linear least-squares fitting, and simultaneous analysis of dyes…
Factor Analysis of the Aberrant Behavior Checklist in Individuals with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Brinkley, Jason; Nations, Laura; Abramson, Ruth K.; Hall, Alicia; Wright, Harry H.; Gabriels, Robin; Gilbert, John R.; Pericak-Vance, Margaret A. O.; Cuccaro, Michael L.
2007-01-01
Exploratory factor analysis (varimax and promax rotations) of the aberrant behavior checklist-community version (ABC) in 275 individuals with Autism spectrum disorder (ASD) identified four- and five-factor solutions which accounted for greater than 70% of the variance. Confirmatory factor analysis (Lisrel 8.7) revealed indices of moderate fit for…
Factor Covariance Analysis in Subgroups.
ERIC Educational Resources Information Center
Pennell, Roger
The problem considered is that of an investigator sampling two or more correlation matrices and desiring to fit a model where a factor pattern matrix is assumed to be identical across samples and we need to estimate only the factor covariance matrix and the unique variance for each sample. A flexible, least squares solution is worked out and…
Xu, Li; Jiang, Yong; Qiu, Rong
2018-01-01
In the present study, the co-pyrolysis behavior of rape straw, waste tire, and their various blends was investigated. TG-FTIR indicated that co-pyrolysis is characterized by a four-step reaction, and H₂O, CH, OH, CO₂ and CO groups were the main products evolved during the process. Additionally, using BBD-based experimental results, best-fit multiple regression models with high predicted R² values (94.10% for mass loss and 95.37% for reaction heat), which correlated the explanatory variables with the responses, were presented. The derived models were analyzed by ANOVA at a 95% confidence interval; the F-test, lack-of-fit test, and normal probability plots of the residuals implied that the models described the experimental data well. Finally, the model uncertainties as well as the interactive effects of these parameters were studied, and the total-, first- and second-order sensitivity indices of the operating factors were derived using Sobol' variance decomposition. To the authors' knowledge, this is the first time global parameter sensitivity analysis has been performed in the (co-)pyrolysis literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
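For readers wanting to reproduce the sensitivity step, a generic Sobol' analysis with the SALib package (API as of the SALib 1.x series) is sketched below; the response function is a stand-in polynomial, and the factor names and bounds are invented, not the paper's.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["blend_ratio", "heating_rate", "temperature"],
    "bounds": [[0.0, 1.0], [5.0, 40.0], [400.0, 900.0]],
}

def response(x):                       # placeholder response-surface model
    b, h, t = x[:, 0], x[:, 1], x[:, 2]
    return 10 + 5 * b + 0.1 * h + 0.02 * t + 3 * b * t / 900 + 2 * b ** 2

X = saltelli.sample(problem, 1024)     # N * (2D + 2) sample rows
Si = sobol.analyze(problem, response(X))
print(Si["S1"], Si["ST"])              # first-order and total-order indices
```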
Some applications of categorical data analysis to epidemiological studies.
Grizzle, J E; Koch, G G
1979-01-01
Several examples of categorized data from epidemiological studies are analyzed to illustrate that analyses more informative than tests of independence can be performed by fitting models. All of the analyses fit into a unified conceptual framework and can be carried out by weighted least squares. The methods presented show how to calculate point estimates of parameters, asymptotic variances, and asymptotically valid χ² tests. The examples presented are: analysis of relative risks estimated from several 2 x 2 tables, analysis of selected features of life tables, construction of synthetic life tables from cross-sectional studies, and analysis of dose-response curves. PMID:540590
Development of a winter wheat adjustable crop calendar model
NASA Technical Reports Server (NTRS)
Baker, J. R. (Principal Investigator)
1978-01-01
The author has identified the following significant results. After parameter estimation, tests were conducted with variances from the fits, and on independent data. From these tests, it was generally concluded that exponential functions have little advantage over polynomials. Precipitation was not found to significantly affect the fits. The Robertson's triquadratic form, in general use for spring wheat, was found to show promise for winter wheat, but special techniques and care were required for its use. In most instances, equations with nonlinear effects were found to yield erratic results when utilized with daily environmental values as independent variables.
A general reconstruction of the recent expansion history of the universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vitenti, S.D.P.; Penna-Lima, M., E-mail: dias@iap.fr, E-mail: pennal@apc.in2p3.fr
Distance measurements are currently the most powerful tool to study the expansion history of the universe without specifying its matter content or any theory of gravitation. Assuming only an isotropic, homogeneous and flat universe, in this work we introduce a model-independent method to reconstruct directly the deceleration function via a piecewise function. Including a penalty factor, we are able to vary continuously the complexity of the deceleration function from a linear case to an arbitrary (n+1)-knot spline interpolation. We carry out a Monte Carlo (MC) analysis to determine the best penalty factor, evaluating the bias-variance trade-off given the uncertainties of the SDSS-II and SNLS combined supernova sample (JLA) and compilations of baryon acoustic oscillation (BAO) and H(z) data. The bias-variance analysis is done for three fiducial models with different features in the deceleration curve. We perform the MC analysis generating mock catalogs and computing their best fit. For each fiducial model, we test different reconstructions using, in each case, more than 10⁴ catalogs, about 5 × 10⁵ in total. This investigation proved to be essential in determining the best reconstruction to study these data. We show that, evaluating a single fiducial model, the conclusions about the bias-variance ratio are misleading. We determine the reconstruction method in which the bias represents at most 10% of the total uncertainty. In all statistical analyses, we fit the coefficients of the deceleration function along with four nuisance parameters of the supernova astrophysical model. For the full sample, we also fit H₀ and the sound horizon r_s(z_d) at the drag redshift. The bias-variance trade-off analysis shows that, apart from the deceleration function, all other estimators are unbiased. Finally, we apply the Ensemble Sampler Markov Chain Monte Carlo (ESMCMC) method to explore the posterior of the deceleration function up to redshift 1.3 (using only JLA) and 2.3 (JLA+BAO+H(z)). We obtain that the standard cosmological model agrees within the 3σ level with the reconstructed results in the whole studied redshift intervals. Since our method is calibrated to minimize the bias, the error bars of the reconstructed functions are a good approximation for the total uncertainty.
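The penalty-controlled complexity can be sketched as penalized least squares on spline knot values. The toy below assumes direct mock measurements of q(z) instead of the paper's full distance likelihood, so it only illustrates the bias-variance knob λ.

```python
import numpy as np

def design_matrix(z, knots):
    """Linear-interpolation weights mapping knot values of q to data z."""
    A = np.zeros((len(z), len(knots)))
    idx = np.clip(np.searchsorted(knots, z) - 1, 0, len(knots) - 2)
    w = (z - knots[idx]) / (knots[idx + 1] - knots[idx])
    A[np.arange(len(z)), idx] = 1 - w
    A[np.arange(len(z)), idx + 1] = w
    return A

def penalized_fit(z, y, yerr, knots, lam):
    """Weighted least squares with a second-difference roughness penalty."""
    A = design_matrix(z, knots) / yerr[:, None]
    b = y / yerr
    D = np.diff(np.eye(len(knots)), n=2, axis=0)   # roughness operator
    return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ b)
```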
NASA Astrophysics Data System (ADS)
Rexer, Moritz; Hirt, Christian
2015-09-01
Classical degree variance models (such as Kaula's rule or the Tscherning-Rapp model) often rely on low-resolution gravity data and so are subject to extrapolation when used to describe the decay of the gravity field at short spatial scales. This paper presents a new degree variance model based on the recently published GGMplus near-global land areas 220 m resolution gravity maps (Geophys Res Lett 40(16):4279-4283, 2013). We investigate and use a 2D-DFT (discrete Fourier transform) approach to transform GGMplus gravity grids into degree variances. The method is described in detail and its approximation errors are studied using closed-loop experiments. Focus is placed on tiling, azimuth averaging, and windowing effects in the 2D-DFT method and on analytical fitting of degree variances. Approximation errors of the 2D-DFT procedure on the (spherical harmonic) degree variance are found to be at the 10-20 % level. The importance of the reference surface (sphere, ellipsoid or topography) of the gravity data for correct interpretation of degree variance spectra is highlighted. The effect of the underlying mass arrangement (spherical or ellipsoidal approximation) on the degree variances is found to be crucial at short spatial scales. A rule-of-thumb for transformation of spectra between spherical and ellipsoidal approximation is derived. Application of the 2D-DFT on GGMplus gravity maps yields a new degree variance model to degree 90,000. The model is supported by GRACE, GOCE, EGM2008 and forward-modelled gravity at 3 billion land points over all land areas within the SRTM data coverage and provides gravity signal variances at the surface of the topography. The model yields omission errors of 9 mGal for gravity (1.5 cm for geoid effects) at scales of 10 km, 4 mGal (1 mm) at 2-km scales, and 2 mGal (0.2 mm) at 1-km scales.
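A toy version of the 2D-DFT route, omitting the tiling, azimuth-averaging, and windowing refinements studied in the paper; the flat-Earth mapping n ≈ 2πRk to spherical-harmonic degree is the usual rule of thumb and an assumption here.

```python
import numpy as np

def isotropic_spectrum(grid, dx, R=6371e3):
    """grid: (N, N) gravity anomalies; dx: grid spacing in meters.
    Returns approximate SH degrees and azimuthally averaged power."""
    N = grid.shape[0]
    F = np.fft.fftshift(np.fft.fft2(grid - grid.mean()))
    power = np.abs(F) ** 2 / N ** 4            # normalized 2D power
    k = np.fft.fftshift(np.fft.fftfreq(N, d=dx))
    kx, ky = np.meshgrid(k, k)
    kr = np.hypot(kx, ky)                      # radial wavenumber [1/m]
    bins = np.linspace(0.0, kr.max(), N // 2)
    which = np.digitize(kr.ravel(), bins)
    spec = np.array([power.ravel()[which == i].mean()
                     if np.any(which == i) else np.nan
                     for i in range(1, len(bins))])
    degree = 2 * np.pi * R * 0.5 * (bins[1:] + bins[:-1])  # n ~ 2*pi*R*k
    return degree, spec
```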
NASA Astrophysics Data System (ADS)
Malanson, G. P.; DeRose, R. J.; Bekker, M. F.
2016-12-01
The consequences of increasing climatic variance while including variability among individuals and populations are explored for range margins of species with a spatially explicit simulation. The model has a single environmental gradient and a single species then extended to two species. Species response to the environment is a Gaussian function with a peak of 1.0 at their peak fitness on the gradient. The variance in the environment is taken from the total variance in the tree ring series of 399 individuals of Pinus edulis in FIA plots in the western USA. The variability is increased by a multiplier of the standard deviation for various doubling times. The variance of individuals in the simulation is drawn from these same series. Inheritance of individual variability is based on the geographic locations of the individuals. The variance for P. edulis is recomputed as time-dependent conditional standard deviations using the GARCH procedure. Establishment and mortality are simulated in a Monte Carlo process with individual variance. Variance for P. edulis does not show a consistent pattern of heteroscedasticity. An obvious result is that increasing variance has deleterious effects on species persistence because extreme events that result in extinctions cannot be balanced by positive anomalies, but even less extreme negative events cannot be balanced by positive anomalies because of biological and spatial constraints. In the two species model the superior competitor is more affected by increasing climatic variance because its response function is steeper at the point of intersection with the other species and so the uncompensated effects of negative anomalies are greater for it. These theoretical results can guide the anticipated need to mitigate the effects of increasing climatic variability on P. edulis range margins. The trailing edge, here subject to increasing drought stress with increasing temperatures, will be more affected by negative anomalies.
Efficient scheme for parametric fitting of data in arbitrary dimensions.
Pang, Ning-Ning; Tzeng, Wen-Jer; Kao, Hisen-Ching
2008-07-01
We propose an efficient scheme for parametric fitting expressed in terms of the Legendre polynomials. For continuous systems, our scheme is exact and the derived explicit expression is very helpful for further analytical studies. For discrete systems, our scheme is almost as accurate as the method of singular value decomposition. Through a few numerical examples, we show that our algorithm costs much less CPU time and memory space than the method of singular value decomposition. Thus, our algorithm is very suitable for a large amount of data fitting. In addition, the proposed scheme can also be used to extract the global structure of fluctuating systems. We then derive the exact relation between the correlation function and the detrended variance function of fluctuating systems in arbitrary dimensions and give a general scaling analysis.
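For the discrete case, NumPy already provides least-squares fitting in the Legendre basis; a minimal example on synthetic data (the degree and noise level are arbitrary):

```python
import numpy as np
from numpy.polynomial import legendre as L

x = np.linspace(-1.0, 1.0, 401)        # the Legendre polynomials' domain
rng = np.random.default_rng(0)
y = np.sin(3 * x) + 0.05 * rng.standard_normal(x.size)

coef = L.legfit(x, y, deg=9)           # Legendre coefficients c_0..c_9
y_fit = L.legval(x, coef)
print(np.sqrt(np.mean((y_fit - np.sin(3 * x)) ** 2)))   # recovery error
```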
A nonparametric smoothing method for assessing GEE models with longitudinal binary data.
Lin, Kuo-Chin; Chen, Yi-Ju; Shyr, Yu
2008-09-30
Studies involving longitudinal binary responses are widely encountered in health and biomedical sciences research and are frequently analyzed by the generalized estimating equations (GEE) method. This article proposes an alternative goodness-of-fit test based on the nonparametric smoothing approach for assessing the adequacy of GEE-fitted models, which can be regarded as an extension of the goodness-of-fit test of le Cessie and van Houwelingen (Biometrics 1991; 47:1267-1282). The expectation and approximate variance of the proposed test statistic are derived. The asymptotic distribution of the proposed test statistic in terms of a scaled chi-squared distribution and the power performance of the proposed test are discussed via simulation studies. The testing procedure is demonstrated with two real data sets. Copyright (c) 2008 John Wiley & Sons, Ltd.
Modular operads and the quantum open-closed homotopy algebra
NASA Astrophysics Data System (ADS)
Doubek, Martin; Jurčo, Branislav; Münster, Korbinian
2015-12-01
We verify that certain algebras appearing in string field theory are algebras over the Feynman transform of modular operads, which we describe explicitly. An equivalent description in terms of solutions of generalized BV master equations is explained from the operadic point of view.
NASA Astrophysics Data System (ADS)
Moster, Benjamin P.; Somerville, Rachel S.; Newman, Jeffrey A.; Rix, Hans-Walter
2011-04-01
Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by "cosmic variance." This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10¹¹ M☉ is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10¹⁰ M☉, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate-mass galaxies, cosmic variance is less serious.
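In the linear regime the recipe reduces to a single multiplication; the numbers plugged in below are illustrative stand-ins, not values from the paper's tables or tool.

```python
# sigma_gal = bias * sigma_dm (linear regime)
sigma_dm = 0.06   # dark matter cosmic variance for some geometry, z, dz
bias = 4.5        # illustrative bias of massive galaxies at z ~ 2
print(f"relative cosmic variance: {bias * sigma_dm:.0%}")   # ~27%
```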
Brügemann, K; Gernand, E; von Borstel, U U; König, S
2011-08-01
Data used in the present study included 1,095,980 first-lactation test-day records for protein yield of 154,880 Holstein cows housed on 196 large-scale dairy farms in Germany. Data were recorded between 2002 and 2009 and merged with meteorological data from public weather stations. The maximum distance between each farm and its corresponding weather station was 50 km. Hourly temperature-humidity indexes (THI) were calculated using the mean of hourly measurements of dry bulb temperature and relative humidity. On the phenotypic scale, an increase in THI was generally associated with a decrease in daily protein yield. For genetic analyses, a random regression model was applied using time-dependent (d in milk, DIM) and THI-dependent covariates. Additive genetic and permanent environmental effects were fitted with this random regression model and Legendre polynomials of order 3 for DIM and THI. In addition, the fixed curve was modeled with Legendre polynomials of order 3. Heterogeneous residuals were fitted by dividing DIM into 5 classes, and by dividing THI into 4 classes, resulting in 20 different classes. Additive genetic variances for daily protein yield decreased with increasing degrees of heat stress and were lowest at the beginning of lactation and at extreme THI. Due to higher additive genetic variances, slightly higher permanent environment variances, and similar residual variances, heritabilities were highest for low THI in combination with DIM at the end of lactation. Genetic correlations among individual values for THI were generally >0.90. These trends from the complex random regression model were verified by applying relatively simple bivariate animal models for protein yield measured in 2 THI environments; that is, defining a THI value of 60 as a threshold. These high correlations indicate the absence of any substantial genotype × environment interaction for protein yield. However, heritabilities and additive genetic variances from the random regression model tended to be slightly higher in the THI range corresponding to cows' comfort zone. Selecting such superior environments for progeny testing can contribute to an accurate genetic differentiation among selection candidates. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Bignardi, A B; El Faro, L; Cardoso, V L; Machado, P F; Albuquerque, L G
2009-09-01
The objective of the present study was to estimate milk yield genetic parameters applying random regression models and parametric correlation functions combined with a variance function to model animal permanent environmental effects. A total of 152,145 test-day milk yields from 7,317 first lactations of Holstein cows belonging to herds located in the southeastern region of Brazil were analyzed. Test-day milk yields were divided into 44 weekly classes of days in milk. Contemporary groups were defined by herd-test-day comprising a total of 2,539 classes. The model included direct additive genetic, permanent environmental, and residual random effects. The following fixed effects were considered: contemporary group, age of cow at calving (linear and quadratic regressions), and the population average lactation curve modeled by fourth-order orthogonal Legendre polynomial. Additive genetic effects were modeled by random regression on orthogonal Legendre polynomials of days in milk, whereas permanent environmental effects were estimated using a stationary or nonstationary parametric correlation function combined with a variance function of different orders. The structure of residual variances was modeled using a step function containing 6 variance classes. The genetic parameter estimates obtained with the model using a stationary correlation function associated with a variance function to model permanent environmental effects were similar to those obtained with models employing orthogonal Legendre polynomials for the same effect. A model using a sixth-order polynomial for additive effects and a stationary parametric correlation function associated with a seventh-order variance function to model permanent environmental effects would be sufficient for data fitting.
Baird, Rachel; Maxwell, Scott E
2016-06-01
Time-varying predictors in multilevel models are a useful tool for longitudinal research, whether they are the research variable of interest or they are controlling for variance to allow greater power for other variables. However, standard recommendations to fix the effect of time-varying predictors may make an assumption that is unlikely to hold in reality and may influence results. A simulation study illustrates that treating the time-varying predictor as fixed may allow analyses to converge, but the analyses have poor coverage of the true fixed effect when the time-varying predictor has a random effect in reality. A second simulation study shows that treating the time-varying predictor as random may have poor convergence, except when allowing negative variance estimates. Although negative variance estimates are uninterpretable, results of the simulation show that estimates of the fixed effect of the time-varying predictor are as accurate for these cases as for cases with positive variance estimates, and that treating the time-varying predictor as random and allowing negative variance estimates performs well whether the time-varying predictor is fixed or random in reality. Because of the difficulty of interpreting negative variance estimates, 2 procedures are suggested for selection between fixed-effect and random-effect models: comparing between fixed-effect and constrained random-effect models with a likelihood ratio test or fitting a fixed-effect model when an unconstrained random-effect model produces negative variance estimates. The performance of these 2 procedures is compared. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geslot, Benoit; Pepino, Alexandra; Blaise, Patrick
A pile noise measurement campaign has been conducted by the CEA in the VENUS-F reactor (SCK-CEN, Mol, Belgium) in April 2011 in the reference critical configuration of the GUINEVERE experimental program. The experimental setup made it possible to estimate the core kinetic parameters: the prompt neutron decay constant, the delayed neutron fraction and the generation time. A precise assessment of these constants is of prime importance. In particular, the effective delayed neutron fraction is used to normalize and compare calculated reactivities of different subcritical configurations, obtained by modifying either the core layout or the control rods position, with experimental ones deduced from the analysis of measurements. This paper presents results obtained with a CEA-developed time stamping acquisition system. Data were analyzed using Rossi-α and Feynman-α methods. Results were normalized to reactor power using a calibrated fission chamber with a deposit of Np-237. Calculated factors were necessary to the analysis: the Diven factor was computed by the ENEA (Italy) and the power calibration factor by the CNRS/IN2P3/LPC Caen. Results deduced with both methods are consistent with respect to calculated quantities. Recommended values are given by the Rossi-α estimator, which was found to be the most robust. The neutron generation time was found equal to 0.438 ± 0.009 μs and the effective delayed neutron fraction is 765 ± 8 pcm. Discrepancies with the calculated value (722 pcm, calculation from ENEA) are satisfactory: -5.6% for the Rossi-α estimate and -2.7% for the Feynman-α estimate. (authors)
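The Feynman-α step amounts to fitting the gate-width dependence of the excess variance-to-mean. A sketch on mock data, assuming the standard one-group prompt-decay form Y(T) = Y∞[1 − (1 − e^(−αT))/(αT)]:

```python
import numpy as np
from scipy.optimize import curve_fit

def feynman_y(T, y_inf, alpha):
    """One-group point-kinetics Feynman-Y curve."""
    return y_inf * (1.0 - (1.0 - np.exp(-alpha * T)) / (alpha * T))

T = np.logspace(-5, -2, 30)                 # gate widths [s]
rng = np.random.default_rng(3)
Y = feynman_y(T, 0.8, 2.0e3) * (1 + 0.02 * rng.standard_normal(T.size))

popt, pcov = curve_fit(feynman_y, T, Y, p0=[1.0, 1.0e3])
print(popt)   # ~[0.8, 2000]: Y_inf and the prompt decay constant alpha
```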
Salecker-Wigner-Peres clock, Feynman paths, and a tunneling time that should not exist
NASA Astrophysics Data System (ADS)
Sokolovski, D.
2017-08-01
The Salecker-Wigner-Peres (SWP) clock is often used to determine the duration a quantum particle is supposed to spend in a specified region of space Ω. By construction, the result is a real positive number, and the method seems to avoid the difficulty of introducing complex time parameters, which arises in the Feynman paths approach. However, it tells little about the particle's motion. We investigate this matter further, and show that the SWP clock, like any other Larmor clock, correlates the rotation of its angular momentum with the durations τ which the Feynman paths spend in Ω, thereby destroying interference between different durations. An inaccurate, weakly coupled clock leaves the interference almost intact, and the need to resolve the resulting "which way?" problem is one of the main difficulties at the center of the "tunneling time" controversy. In the absence of a probability distribution for the values of τ, the SWP results are expressed in terms of moduli of the "complex times," given by the weighted sums of the corresponding probability amplitudes. It is shown that overinterpretation of these results, by treating the SWP times as physical time intervals, leads to paradoxes and should be avoided. We also analyze various settings of the SWP clock, different calibration procedures, and the relation between the SWP results and the quantum dwell time. The cases of stationary tunneling and tunnel ionization are considered in some detail. Although our detailed analysis addresses only one particular definition of the duration of a tunneling process, it also points towards the impossibility of uniting various time parameters, which may occur in quantum theory, within the concept of a single tunneling time.
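Schematically, and consistent with the description above, the SWP reading is the modulus of a complex time formed as an amplitude-weighted sum over Feynman-path durations; the notation here is ours, not the paper's:

    \bar{\tau}_\Omega \;=\; \frac{\sum_{\text{paths}} \tau[\text{path}]\, A[\text{path}]}{\sum_{\text{paths}} A[\text{path}]}

Because the amplitudes A[path] are complex, no probability distribution over the real durations τ is implied.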
The dynamics of adapting, unregulated populations and a modified fundamental theorem.
O'Dwyer, James P
2013-01-06
A population in a novel environment will accumulate adaptive mutations over time, and the dynamics of this process depend on the underlying fitness landscape: the fitness of and mutational distance between possible genotypes in the population. Despite its fundamental importance for understanding the evolution of a population, inferring this landscape from empirical data has been problematic. We develop a theoretical framework to describe the adaptation of a stochastic, asexual, unregulated, polymorphic population undergoing beneficial, neutral and deleterious mutations on a correlated fitness landscape. We generate quantitative predictions for the change in the mean fitness and within-population variance in fitness over time, and find a simple, analytical relationship between the distribution of fitness effects arising from a single mutation, and the change in mean population fitness over time: a variant of Fisher's 'fundamental theorem' which explicitly depends on the form of the landscape. Our framework can therefore be thought of in three ways: (i) as a set of theoretical predictions for adaptation in an exponentially growing phase, with applications in pathogen populations, tumours or other unregulated populations; (ii) as an analytically tractable problem to potentially guide theoretical analysis of regulated populations; and (iii) as a basis for developing empirical methods to infer general features of a fitness landscape.
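For context, the classical statement being modified is Fisher's fundamental theorem; in its simplest continuous-time form for an asexual population with Malthusian (log) fitness m, all of whose variance is heritable,

    \frac{d\bar{m}}{dt} \;=\; \operatorname{Var}(m)

The variant derived in this paper adds explicit dependence on the distribution of fitness effects of new mutations, i.e. on the form of the landscape.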
Zhang, Xu-Sheng; Hill, William G
2002-01-01
In quantitative genetics, there are two basic "conflicting" observations: abundant polygenic variation and strong stabilizing selection that should rapidly deplete that variation. This conflict, although having attracted much theoretical attention, still stands open. Two classes of model have been proposed: real stabilizing selection directly on the metric trait under study and apparent stabilizing selection caused solely by the deleterious pleiotropic side effects of mutations on fitness. Here these models are combined and the total stabilizing selection observed is assumed to derive simultaneously through these two different mechanisms. Mutations have effects on a metric trait and on fitness, and both effects vary continuously. The genetic variance (V_G) and the observed strength of total stabilizing selection (V_{s,t}) are analyzed with a rare-alleles model. Both kinds of selection reduce V_G but their roles in depleting it are not independent: The magnitude of pleiotropic selection depends on real stabilizing selection and such dependence is subject to the shape of the distributions of mutational effects. The genetic variation maintained thus depends on the kurtosis as well as the variance of mutational effects: All else being equal, V_G increases with increasing leptokurtosis of mutational effects on fitness, while for a given distribution of mutational effects on fitness, V_G decreases with increasing leptokurtosis of mutational effects on the trait. V_G and V_{s,t} are determined primarily by real stabilizing selection while pleiotropic effects, which can be large, have only a limited impact. This finding provides some promise that a high heritability can be explained under strong total stabilizing selection for what are regarded as typical values of mutation and selection parameters. PMID:12242254
A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models
ERIC Educational Resources Information Center
Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.
2013-01-01
Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…
The Coupling of Gravity to Spin and Electromagnetism
NASA Astrophysics Data System (ADS)
Finster, Felix; Smoller, Joel; Yau, Shing-Tung
The coupled Einstein-Dirac-Maxwell equations are considered for a static, spherically symmetric system of two fermions in a singlet spinor state. Stable soliton-like solutions are shown to exist, and we discuss the regularizing effect of gravity from a Feynman diagram point of view.
NASA Astrophysics Data System (ADS)
Rabemananajara, Tanjona R.; Horowitz, W. A.
2017-09-01
To make predictions for particle physics processes, one must compute the cross section of the specific process, as this is what is measured in a modern collider experiment such as the Large Hadron Collider (LHC) at CERN. It has proven extremely difficult to compute scattering amplitudes using conventional Feynman-diagram methods. Calculations with Feynman diagrams realize a perturbative expansion, and one has to set up all topologically distinct diagrams for a given process up to a given order in the coupling of the theory. This quickly makes the calculation of scattering amplitudes unwieldy. Fortunately, the calculations can be simplified by considering the helicity amplitudes for the Maximally Helicity Violating (MHV) configurations. This can be extended to the formalism of on-shell recursion, which derives, in a much simpler way, the expression of a higher-order scattering amplitude from lower-order ones.
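For concreteness, the simplification alluded to is exemplified by the Parke-Taylor formula: the color-ordered tree-level amplitude for n gluons in an MHV configuration, with negative helicities on legs i and j, collapses to a single term (shown up to coupling and normalization conventions, which vary between references):

    A_n^{\mathrm{MHV}} \;\propto\; \frac{\langle i\,j\rangle^{4}}{\langle 1\,2\rangle \langle 2\,3\rangle \cdots \langle n\,1\rangle}

On-shell (BCFW-type) recursion then builds higher-point amplitudes from such lower-point building blocks without ever enumerating Feynman diagrams.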
NASA Astrophysics Data System (ADS)
Barnea, A. Ronny; Cheshnovsky, Ori; Even, Uzi
2018-02-01
Interference experiments have been paramount in our understanding of quantum mechanics and are frequently the basis of testing the superposition principle in the framework of quantum theory. In recent years, several studies have challenged the nature of wave-function interference from the perspective of Born's rule—namely, the manifestation of so-called high-order interference terms in a superposition generated by diffraction of the wave functions. Here we present an experimental test of multipath interference in the diffraction of metastable helium atoms, with large-number counting statistics, comparable to photon-based experiments. We use a variation of the original triple-slit experiment and accurate single-event counting techniques to provide a new experimental bound of 2.9 × 10^-5 on the statistical deviation from the commonly approximated null third-order interference term in Born's rule for matter waves. Our value is on the order of the maximal contribution predicted for multipath trajectories by Feynman path integrals.
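The quantity bounded here is, up to normalization, the Sorkin third-order interference term built from detection probabilities with different slit combinations open; Born's rule predicts that it vanishes identically:

    \varepsilon \;=\; P_{ABC} - P_{AB} - P_{AC} - P_{BC} + P_{A} + P_{B} + P_{C} \;=\; 0

(labelling the three slits A, B, C, as is conventional in the triple-slit literature).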
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jing-Yuan, E-mail: chjy@uchicago.edu; Stanford Institute for Theoretical Physics, Stanford University, CA 94305; Son, Dam Thanh, E-mail: dtson@uchicago.edu
We develop an extension of the Landau Fermi liquid theory to systems of interacting fermions with non-trivial Berry curvature. We propose a kinetic equation and a constitutive relation for the electromagnetic current that together encode the linear response of such systems to external electromagnetic perturbations, to leading and next-to-leading orders in the expansion over the frequency and wave number of the perturbations. We analyze the Feynman diagrams in a large class of interacting quantum field theories and show that, after summing up all orders in perturbation theory, the current–current correlator exactly matches with the result obtained from the kinetic theory. Highlights: • We extend Landau's kinetic theory of Fermi liquid to incorporate Berry phase. • Berry phase effects in Fermi liquid take exactly the same form as in Fermi gas. • There is a new "emergent electric dipole" contribution to the anomalous Hall effect. • Our kinetic theory is matched to field theory to all orders in Feynman diagrams.
Interactions as intertwiners in 4D QFT
NASA Astrophysics Data System (ADS)
de Mello Koch, Robert; Ramgoolam, Sanjaye
2016-03-01
In a recent paper we showed that the correlators of free scalar field theory in four dimensions can be constructed from a two dimensional topological field theory based on so(4,2) equivariant maps (intertwiners). The free field result, along with recent results of Frenkel and Libine on equivariance properties of Feynman integrals, are developed further in this paper. We show that the coefficient of the log term in the 1-loop 4-point conformal integral is a projector in the tensor product of so(4,2) representations. We also show that the 1-loop 4-point integral can be written as a sum of four terms, each associated with the quantum equation of motion for one of the four external legs. The quantum equation of motion is shown to be related to equivariant maps involving indecomposable representations of so(4,2), a phenomenon which illuminates multiplet recombination. The harmonic expansion method for Feynman integrals is a powerful tool for arriving at these results. The generalization to other interactions and higher loops is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coffey, Mark W.
2008-04-15
Perturbative quantum field theory for the Ising model at the three-loop level yields a tetrahedral Feynman diagram C(a,b) with masses a and b and four other lines with unit mass. The completely symmetric tetrahedron C^Tet ≡ C(1,1) has been of interest from many points of view, with several representations and conjectures having been given in the literature. We prove a conjectured exponentially fast convergent sum for C(1,1), as well as a previously empirical relation for C(1,1) as a remarkable difference of Clausen function values. Our presentation includes propositions extending the theory of the dilogarithm Li_2 and Clausen Cl_2 functions, as well as their relation to other special functions of mathematical physics. The results strengthen connections between Feynman diagram integrals, volumes in hyperbolic space, number theory, and special functions and numbers, specifically including dilogarithms, Clausen function values, and harmonic numbers.
Stochastic, real-space, imaginary-time evaluation of third-order Feynman-Goldstone diagrams
NASA Astrophysics Data System (ADS)
Willow, Soohaeng Yoo; Hirata, So
2014-01-01
A new, alternative set of interpretation rules of Feynman-Goldstone diagrams for many-body perturbation theory is proposed, which translates diagrams into algebraic expressions suitable for direct Monte Carlo integrations. A vertex of a diagram is associated with a Coulomb interaction (rather than a two-electron integral) and an edge with the trace of a Green's function in real space and imaginary time. With these, 12 diagrams of third-order many-body perturbation (MP3) theory are converted into 20-dimensional integrals, which are then evaluated by a Monte Carlo method. It uses redundant walkers for convergence acceleration and a weight function for importance sampling in conjunction with the Metropolis algorithm. The resulting Monte Carlo MP3 method has low-rank polynomial size dependence of the operation cost, a negligible memory cost, and a naturally parallel computational kernel, while reproducing the correct correlation energies of small molecules within a few mEh after 10^6 Monte Carlo steps.
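A stripped-down sketch of the sampling scheme, Metropolis walkers with a weight function for importance sampling, follows in Python. The toy integrand, the Gaussian weight, and all parameter values are placeholders; the actual MP3 integrand is the 20-dimensional expression described above, and the real method also normalizes the weight.

    import numpy as np

    rng = np.random.default_rng(0)

    def integrand(x):
        # toy stand-in for a 20-dimensional diagram integrand
        return np.exp(-np.dot(x, x)) * (1.0 + x.sum())

    def weight(x):
        # importance-sampling weight; similar in shape to the integrand
        return np.exp(-np.dot(x, x))

    dim, n_steps, step = 20, 50_000, 0.3
    x = np.zeros(dim)
    total = 0.0
    for _ in range(n_steps):
        prop = x + step * rng.normal(size=dim)          # random-walk proposal
        if rng.random() < min(1.0, weight(prop) / weight(x)):
            x = prop                                    # Metropolis accept/reject
        total += integrand(x) / weight(x)
    # the sample mean of f/w estimates the integral up to the norm of weight()
    print(total / n_steps)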
NASA Astrophysics Data System (ADS)
Accioly, Antonio; Helayël-Neto, José; Barone, F. E.; Herdy, Wallace
2015-02-01
A straightforward prescription for computing the D-dimensional potential energy of gravitational models, which is strongly based on the Feynman path integral, is built up. Using this method, the static potential energy for the interaction of two masses is found in the context of D-dimensional higher-derivative gravity models, and its behavior is analyzed afterwards in both ultraviolet and infrared regimes. As a consequence, two new gravity systems in which the potential energy is finite at the origin, respectively, in D = 5 and D = 6, are found. Since the aforementioned prescription is equivalent to that based on the marriage between quantum mechanics (to leading order, i.e., in the first Born approximation) and the nonrelativistic limit of quantum field theory, and bearing in mind that the latter relies basically on the calculation of the nonrelativistic Feynman amplitude (M_NR), a trivial expression for computing M_NR is obtained from our prescription as an added bonus.
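The underlying relation is the first Born approximation, in which the static potential is the spatial Fourier transform of the nonrelativistic amplitude; schematically, in D spacetime dimensions and leaving normalization conventions open,

    V(\mathbf{r}) \;\propto\; \int \frac{d^{\,D-1}q}{(2\pi)^{D-1}}\; e^{i\,\mathbf{q}\cdot\mathbf{r}}\; \mathcal{M}_{\mathrm{NR}}(\mathbf{q})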
NASA Astrophysics Data System (ADS)
Rahim, K. J.; Cumming, B. F.; Hallett, D. J.; Thomson, D. J.
2007-12-01
An accurate assessment of historical local Holocene data is important in making future climate predictions. Holocene climate is often obtained through proxy measures such as diatoms or pollen using radiocarbon dating. Wiggle Match Dating (WMD) uses an iterative least squares approach to tune a core with a large number of 14C dates to the 14C calibration curve. This poster presents a new method of tuning a time series when only a modest number of 14C dates are available. The method uses multitaper spectral estimation, and it specifically makes use of a multitaper spectral coherence tuning technique. Holocene climate reconstructions are often based on a simple depth-time fit such as a linear interpolation, splines, or low-order polynomials. Many of these models make use of only a small number of 14C dates, each of which is a point estimate with a significant variance. This technique attempts to tune the 14C dates to a reference series, such as tree rings, varves, or the radiocarbon calibration curve. The amount of 14C in the atmosphere is not constant, and a significant source of variance is solar activity. A decrease in solar activity coincides with an increase in cosmogenic isotope production, and an increase in cosmogenic isotope production coincides with a decrease in temperature. The method presented uses multitaper coherence estimates and adjusts the phase of the time series to line up significant line components with those of the reference series, in an attempt to obtain a better depth-time fit than the original model. Given recent concerns and demonstrations of the variation in estimated dates from radiocarbon labs, methods to confirm and tune the depth-time fit can aid climate reconstructions by improving and serving to confirm the accuracy of the underlying depth-time fit. Climate reconstructions can then be made on the improved depth-time fit. This poster presents a run-through of this process using Chauvin Lake in the Canadian prairies and Mt. Barr Cirque Lake located in British Columbia as examples.
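A minimal Python sketch of the multitaper coherence estimate between a proxy series and a reference series is given below. The DPSS parameters, synthetic series, and simple taper-averaged estimator are illustrative assumptions, and the phase-adjustment tuning step of the poster is not reproduced.

    import numpy as np
    from scipy.signal.windows import dpss

    def multitaper_coherence(x, y, nw=4.0, k=7):
        # DPSS tapers; eigenspectra averaged across tapers
        tapers = dpss(len(x), nw, Kmax=k)
        X = np.fft.rfft(tapers * x, axis=1)
        Y = np.fft.rfft(tapers * y, axis=1)
        sxx = np.mean(np.abs(X) ** 2, axis=0)
        syy = np.mean(np.abs(Y) ** 2, axis=0)
        sxy = np.mean(X * np.conj(Y), axis=0)
        return np.abs(sxy) ** 2 / (sxx * syy)

    # hypothetical series standing in for a proxy record and a reference curve
    rng = np.random.default_rng(0)
    t = np.arange(512.0)
    ref = np.sin(2 * np.pi * t / 64.0) + rng.normal(scale=0.5, size=t.size)
    proxy = np.roll(ref, 5) + rng.normal(scale=0.5, size=t.size)
    coh = multitaper_coherence(proxy, ref)
    print(coh.max())                    # high coherence at the shared line component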
NASA Astrophysics Data System (ADS)
Moyer, P. A.; Boettcher, M. S.; McGuire, J. J.; Collins, J. A.
2015-12-01
On Gofar transform fault on the East Pacific Rise (EPR), Mw ~6.0 earthquakes occur every ~5 years and repeatedly rupture the same asperity (rupture patch), while the intervening fault segments (rupture barriers to the largest events) only produce small earthquakes. In 2008, an ocean bottom seismometer (OBS) deployment successfully captured the end of a seismic cycle, including an extensive foreshock sequence localized within a 10 km rupture barrier, the Mw 6.0 mainshock and its aftershocks that occurred in a ~10 km rupture patch, and an earthquake swarm located in a second rupture barrier. Here we investigate whether the inferred variations in frictional behavior along strike affect the rupture processes of 3.0 < M < 4.5 earthquakes by determining source parameters for 100 earthquakes recorded during the OBS deployment. Using waveforms with a 50 Hz sample rate from OBS accelerometers, we calculate stress drop using an omega-squared source model, where the weighted average corner frequency is derived from an empirical Green's function (EGF) method. We obtain seismic moment by fitting the omega-squared source model to the low frequency amplitude of individual spectra and account for attenuation using Q obtained from a velocity model through the foreshock zone. To ensure well-constrained corner frequencies, we require that the Brune [1970] model provides a statistically better fit to each spectral ratio than a linear model and that the variance is low between the data and model. To further ensure that the fit to the corner frequency is not influenced by resonance of the OBSs, we require a low variance close to the modeled corner frequency. Error bars on corner frequency were obtained through a grid search method where variance is within 10% of the best-fit value. Without imposing restrictive selection criteria, slight variations in corner frequencies from rupture patches and rupture barriers are not discernable. Using well-constrained source parameters, we find an average stress drop of 5.7 MPa in the aftershock zone compared to values of 2.4 and 2.9 MPa in the foreshock and swarm zones respectively. The higher stress drops in the rupture patch compared to the rupture barriers reflect systematic differences in along strike fault zone properties on Gofar transform fault.
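The corner-frequency fitting step can be sketched as a nonlinear least-squares fit of the omega-squared (Brune) spectral shape to an amplitude spectrum, as below in Python. The synthetic spectrum and starting values are placeholders; the EGF spectral-ratio construction and the variance-based acceptance criteria of the study are not reproduced.

    import numpy as np
    from scipy.optimize import curve_fit

    def brune(f, omega0, fc):
        # omega-squared source model: flat below fc, f^-2 falloff above
        return omega0 / (1.0 + (f / fc) ** 2)

    # hypothetical displacement amplitude spectrum with multiplicative noise
    rng = np.random.default_rng(0)
    f = np.linspace(0.5, 25.0, 200)
    obs = brune(f, 1.0, 8.0) * rng.lognormal(sigma=0.1, size=f.size)

    popt, pcov = curve_fit(brune, f, obs, p0=(obs[0], 5.0))
    print("corner frequency fc = %.2f Hz" % popt[1])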
Reduced arterial stiffness in very fit boys and girls.
Weberruß, Heidi; Pirzer, Raphael; Schulz, Thorsten; Böhm, Birgit; Dalla Pozza, Robert; Netz, Heinrich; Oberhoffer, Renate
2017-01-01
Low cardiorespiratory fitness is associated with higher cardiovascular risk, whereas high levels of cardiorespiratory fitness protect the cardiovascular system. Carotid intima-media thickness and arterial distensibility are well-established parameters to identify subclinical cardiovascular disease. Therefore, this study investigated the influence of cardiorespiratory fitness and muscular strength on carotid intima-media thickness and arterial distensibility in 697 children and adolescents (376 girls), aged 7-17 years. Cardiorespiratory fitness and strength were measured with the test battery FITNESSGRAM; carotid intima-media thickness, arterial compliance, elastic modulus, stiffness index β, and pulse wave velocity β were assessed by B- and M-mode ultrasound at the common carotid artery. In bivariate correlation, cardiorespiratory fitness was significantly associated with all cardiovascular parameters and was an independent predictor in multivariate regression analysis. No significant associations were obtained for muscular strength. In a one-way variance analysis, very fit boys and girls (58 boys and 74 girls > 80th percentile for cardiorespiratory fitness) had significantly decreased stiffness parameters (expressed in standard deviation scores) compared with low fit subjects (71 boys and 77 girls < 20th percentile for cardiorespiratory fitness): elastic modulus -0.16±1.02 versus 0.19±1.17, p=0.009; stiffness index β -0.15±1.08 versus 0.16±1.1, p=0.03; and pulse wave velocity β -0.19±1.02 versus 0.19±1.14, p=0.005. Cardiorespiratory fitness was associated with healthier arteries in children and adolescents. Comparison of very fit with unfit subjects revealed better distensibility parameters in very fit boys and girls.
Relative Age Effect in Physical Fitness Among Elementary and Junior High School Students.
Nakata, Hiroki; Akido, Miki; Naruse, Kumi; Fujiwara, Motoko
2017-10-01
The present study investigated characteristics of the relative age effect (RAE) among a general sample of Japanese elementary and junior high school students. Japan applies a unique annual age-grouping by birthdates between April 1 and March 31 of the following year for sport and education. Anthropometric and physical fitness data were obtained from 3,610 Japanese students, including height, weight, the 50-m sprint, standing long jump, grip strength, bent-leg sit-ups, sit and reach, side steps, 20-m shuttle run, and ball throw. We examined RAE-related differences in these data using a one-way analysis of variance by comparing students with birthdates in the first (April-September) versus second (October-March of the following year) semesters. We observed a significant RAE for boys aged 7 to 15 years on both anthropometric and fitness data, but a significant RAE for girls was only evident for physical fitness tests among elementary school and not junior high school students. Thus, a significant RAE in anthropometry and physical fitness was evident in a general sample of school children, and there were RAE gender differences among adolescents.
Posterior Predictive Bayesian Phylogenetic Model Selection
Lewis, Paul O.; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn
2014-01-01
We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand–Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. [Bayesian; conditional predictive ordinate; CPO; L-measure; LPML; model selection; phylogenetics; posterior predictive.] PMID:24193892
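For readers unfamiliar with the quantities, the conditional predictive ordinate for site i is the leave-one-out predictive density, LPML sums its logarithms, and a standard estimator from S posterior samples θ^(s) (the form we take to be meant here) is the harmonic mean of the site likelihoods:

    \mathrm{CPO}_i = f\!\left(y_i \mid \mathbf{y}_{(-i)}\right), \qquad
    \widehat{\mathrm{CPO}}_i = \left[\frac{1}{S}\sum_{s=1}^{S} \frac{1}{f\!\left(y_i \mid \theta^{(s)}\right)}\right]^{-1}, \qquad
    \mathrm{LPML} = \sum_i \log \mathrm{CPO}_i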
Effective degrees of freedom: a flawed metaphor
Janson, Lucas; Fithian, William; Hastie, Trevor J.
2015-01-01
To most applied statisticians, a fitting procedure's degrees of freedom is synonymous with its model complexity, or its capacity for overfitting to data. In particular, it is often used to parameterize the bias-variance tradeoff in model selection. We argue that, on the contrary, model complexity and degrees of freedom may correspond very poorly. We exhibit and theoretically explore various fitting procedures for which degrees of freedom is not monotonic in the model complexity parameter, and can exceed the total dimension of the ambient space even in very simple settings. We show that the degrees of freedom for any non-convex projection method can be unbounded. PMID:26977114
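The definition at issue is the standard covariance-based one due to Efron, restated here for context: for a procedure producing fitted values ŷ from data y with noise variance σ²,

    \mathrm{df} \;=\; \frac{1}{\sigma^{2}} \sum_{i=1}^{n} \operatorname{Cov}\!\left(\hat{y}_i, \, y_i\right)

and the paper's point is that this quantity need not track the complexity parameter monotonically.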
Validity of VO2max in predicting blood volume: implications for the effect of fitness on aging
NASA Technical Reports Server (NTRS)
Convertino, V. A.; Ludwig, D. A.
2000-01-01
A multiple regression model was constructed to investigate the premise that blood volume (BV) could be predicted using several anthropometric variables, age, and maximal oxygen uptake (VO2max). To test this hypothesis, age, calculated body surface area (height/weight composite), percent body fat (hydrostatic weight), and VO2max were regressed onto BV using data obtained from 66 normal healthy men. Results from the evaluation of the full model indicated that the most parsimonious result was obtained when age and VO2max were regressed on BV expressed per kilogram body weight. The full model accounted for 52% of the total variance in BV per kilogram body weight. Both age and VO2max were related to BV in the positive direction. Percent body fat contributed <1% to the explained variance in BV when expressed in absolute BV (ml) or as BV per kilogram body weight. When the model was cross validated on 41 new subjects and BV per kilogram body weight was reexpressed as raw BV, the results indicated that the statistical model would be stable under cross validation (e.g., predictive applications) with an accuracy of +/- 1,200 ml at 95% confidence. Our results support the hypothesis that BV is an increasing function of aerobic fitness and to a lesser extent the age of the subject. The results may have implications as to a mechanism by which aerobic fitness and activity may be protective against reduced BV associated with aging.
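In equation form, the most parsimonious model described above is the linear regression (the coefficient and error notation is ours):

    \mathrm{BV/kg} \;=\; \beta_0 + \beta_1\,\mathrm{age} + \beta_2\,\mathrm{VO_{2max}} + \varepsilon, \qquad R^{2} \approx 0.52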
How multiple mating by females affects sexual selection
Shuster, Stephen M.; Briggs, William R.; Dennis, Patricia A.
2013-01-01
Multiple mating by females is widely thought to encourage post-mating sexual selection and enhance female fitness. We show that whether polyandrous mating has these effects depends on two conditions. Condition 1 is the pattern of sperm utilization by females; specifically, whether, among females, male mating number, m (i.e. the number of times a male mates with one or more females) covaries with male offspring number, o. Polyandrous mating enhances sexual selection only when males who are successful at multiple mating also sire most or all of each of their mates' offspring, i.e. only when Cov♂(m,o) is positive. Condition 2 is the pattern of female reproductive life-history; specifically, whether female mating number, m, covaries with female offspring number, o. Only semelparity does not erode sexual selection, whereas iteroparity (i.e. when Cov♀(m,o) is positive) always increases the variance in offspring numbers among females, which always decreases the intensity of sexual selection on males. To document the covariance between mating number and offspring number for each sex, it is necessary to assign progeny to all parents, as well as identify mating and non-mating individuals. To document significant fitness gains by females through iteroparity, it is necessary to determine the relative magnitudes of male as well as female contributions to the total variance in relative fitness. We show how such data can be collected, how often they are collected, and we explain the circumstances in which selection favouring multiple mating by females can be strong or weak. PMID:23339237
Lescroart, Mark D.; Stansbury, Dustin E.; Gallant, Jack L.
2015-01-01
Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue. PMID:26594164
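The voxel-wise modeling logic, fitting a linear encoding model per voxel and scoring it by the variance predicted on withheld data, can be sketched in a few lines of Python. The ridge penalty and all dimensions are illustrative assumptions (the abstract specifies only linear regression); 1000 training and 386 held-out images echo the 1386 images mentioned above.

    import numpy as np

    rng = np.random.default_rng(0)
    n_train, n_test, n_feat, n_vox = 1000, 386, 50, 200
    X = rng.normal(size=(n_train + n_test, n_feat))     # feature matrix per image
    W_true = rng.normal(size=(n_feat, n_vox))
    Y = X @ W_true + rng.normal(scale=5.0, size=(n_train + n_test, n_vox))

    Xtr, Xte, Ytr, Yte = X[:n_train], X[n_train:], Y[:n_train], Y[n_train:]

    lam = 10.0                                          # ridge penalty (assumption)
    W = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_feat), Xtr.T @ Ytr)
    pred = Xte @ W
    # variance predicted per voxel on the withheld set
    r2 = 1.0 - ((Yte - pred) ** 2).sum(0) / ((Yte - Yte.mean(0)) ** 2).sum(0)
    print(r2.mean())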
Kim, Sun Jung; Yoo, Il Young
2016-03-01
The purpose of this study was to explain the health promotion behavior of Chinese international students in Korea using a structural equation model including acculturation factors. A survey using self-administered questionnaires was employed. Data were collected from 272 Chinese students who had resided in Korea for longer than 6 months. The data were analyzed using structural equation modeling. The p value of the final model is .31. The fit indices of the final model, such as the goodness of fit index, adjusted goodness of fit index, normed fit index, non-normed fit index, and comparative fit index, were above .95. The root mean square residual and root mean square error of approximation also met their criteria. Self-esteem, perceived health status, acculturative stress, and acculturation level had direct effects on the health promotion behavior of the participants, and the model explained 30.0% of the variance. Chinese students in Korea with higher self-esteem, perceived health status, and acculturation level, and lower acculturative stress, reported more health promotion behavior. The findings can be applied to develop health promotion strategies for this population.
Is my study system good enough? A case study for identifying maternal effects.
Holand, Anna Marie; Steinsland, Ingelin
2016-06-01
In this paper, we demonstrate how simulation studies can be used to answer questions about identifiability and the consequences of omitting effects from a model. The methodology is presented through a case study in which the identifiability of genetic and/or individual (environmental) maternal effects is explored. Our study system is a wild house sparrow (Passer domesticus) population with known pedigree. We fit pedigree-based (generalized) linear mixed models (animal models), with and without additive genetic and individual maternal effects, and use the deviance information criterion (DIC) for choosing between these models. Pedigree and R code for the simulations are available. For this study system, the simulation studies show that only large maternal effects can be identified: the genetic maternal effect (and similarly for the individual maternal effect) has to be at least half of the total genetic variance to be identified. The consequences of omitting a maternal effect when it is present are explored. Our results indicate that the total (genetic and individual) variance is accounted for. When an individual (environmental) maternal effect is omitted from the model, this only influences the estimated (direct) individual (environmental) variance. When a genetic maternal effect is omitted from the model, both the (direct) genetic and (direct) individual variance estimates are overestimated.
El control de las concentraciones empresariales en el sector electrico
NASA Astrophysics Data System (ADS)
Montoya Pardo, Milton Fernando
Tectonica activa y geodinamica en el norte de centroamerica
NASA Astrophysics Data System (ADS)
Alvarez Gomez, Jose Antonio
Estabilidad de ciertas ondas solitarias sometidas a perturbaciones estocasticas
NASA Astrophysics Data System (ADS)
Rodriguez Plaza, Maria Jesus
The pursuit of locality in quantum mechanics
NASA Astrophysics Data System (ADS)
Hodkin, Malcolm
The rampant success of quantum theory is the result of applications of the 'new' quantum mechanics of Schrodinger and Heisenberg (1926-7), the Feynman-Schwinger-Tomonaga Quantum Electro-dynamics (1946-51), the electro-weak theory of Salam, Weinberg, and Glashow (1967-9), and Quantum Chromodynamics (1973-); in fact, this success of 'the' quantum theory has depended on a continuous stream of brilliant and quite disparate mathematical formulations. In this carefully concealed ferment there lie plenty of unresolved difficulties, simply because in churning out fabulously accurate calculational tools there has been no sensible explanation of all that is going on. It is even argued that such an understanding is nothing to do with physics. A long-standing and famous illustration of this is the paradoxical thought-experiment of Einstein, Podolsky and Rosen (1935). Fundamental to all quantum theories, and also their paradoxes, is the location of sub-microscopic objects; or, rather, that the specification of such a location is fraught with mathematical inconsistency. This project encompasses a detailed, critical survey of the tangled history of Position within quantum theories. The first step is to show that, contrary to appearances, canonical quantum mechanics has only a vague notion of locality. After analysing a number of previous attempts at a 'relativistic quantum mechanics', two lines of thought are considered in detail. The first is the work of Wan and students, which is shown to be no real improvement on the usual 'nonrelativistic' theory. The second is based on an idea of Dirac's - using backwards-in-time light-cones as the hypersurface in space-time. There remain considerable difficulties in the way of producing a consistent scheme here. To keep things nicely stirred up, the author then proposes his own approach - an adaptation of Feynman's QED propagators. This new approach is distinguished from Feynman's since the propagator or Green's function is not obtained by Feynman's rule. The type of equation solved is also different: instead of an initial-value problem, a solution that obeys a time-symmetric causality criterion is found for an inhomogeneous partial differential equation with homogeneous boundary conditions. To make the consideration of locality more precise, some results of Fourier transform theory are presented in a form that is directly applicable. Somewhat away from the main thrust of the thesis, there is also an attempt to explain the manner in which quantum effects disappear as the number of particles increases in such things as experimental realisations of the EPR and de Broglie thought experiments.
NASA Astrophysics Data System (ADS)
Malpica Velasco, Jose Antonio
Analisis espectroscopico de estrellas variables Delta Scuti
NASA Astrophysics Data System (ADS)
Solano Marquez, Enrique
Inversion gravimetrica 3D por tecnicas de evolucion: Aplicacion a la Isla de Fuerteventura
NASA Astrophysics Data System (ADS)
Gonzalez Montesinos, Fuensanta
Evolution tectonothermale du massif Hercynien des Rehamna (zone centre-mesetienne, Maroc)
NASA Astrophysics Data System (ADS)
Aghzer, Abdel Mouhsine
NASA Astrophysics Data System (ADS)
Bejar Pizarro, Marta
NASA Astrophysics Data System (ADS)
Fillali, Laila
The rampant success of quantum theory is the result of applications of the 'new' quantum mechanics of Schrodinger and Heisenberg (1926-7), the Feynman-Schwinger-Tomonaga Quantum Electro-dynamics (1946-51), the electro-weak theory of Salaam, Weinberg, and Glashow (1967-9), and Quantum Chromodynamics (1973-); in fact, this success of 'the' quantum theory has depended on a continuous stream of brilliant and quite disparate mathematical formulations. In this carefully concealed ferment there lie plenty of unresolved difficulties, simply because in churning out fabulously accurate calculational tools there has been no sensible explanation of all that is going on. It is even argued that such an understanding is nothing to do with physics. A long-standing and famous illustration of this is the paradoxical thought-experiment of Einstein, Podolsky and Rosen (1935). Fundamental to all quantum theories, and also their paradoxes, is the location of sub-microscopic objects; or, rather, that the specification of such a location is fraught with mathematical inconsistency. This project encompasses a detailed, critical survey of the tangled history of Position within quantum theories. The first step is to show that, contrary to appearances, canonical quantum mechanics has only a vague notion of locality. After analysing a number of previous attempts at a 'relativistic quantum mechanics', two lines of thought are considered in detail. The first is the work of Wan and students, which is shown to be no real improvement on the iisu.al 'nonrelativistic' theory. The second is based on an idea of Dirac's - using backwards-in-time light-cones as the hypersurface in space-time. There remain considerable difficulties in the way of producing a consistent scheme here. To keep things nicely stirred up, the author then proposes his own approach - an adaptation of Feynman's QED propagators. This new approach is distinguished from Feynman's since the propagator or Green's function is not obtained by Feynman's rule. The type of equation solved is also different: instead of an initial-value problem, a solution that obeys a time-symmetric causality criterion is found for an inhomogeneous partial differential equation with homogeneous boundary conditions. To make the consideration of locality more precise, some results of Fourier transform theory are presented in a form that is directly applicable. Somewhat away from the main thrust of the thesis, there is also an attempt to explain, the manner in which quantum effects disappear as the number of particles increases in such things as experimental realisations of the EPR and de Broglie thought experiments.
[How to fit and interpret multilevel models using SPSS].
Pardo, Antonio; Ruiz, Miguel A; San Martín, Rafael
2007-05-01
Hierarchic or multilevel models are used to analyse data when cases belong to known groups and sample units are selected both from the individual level and from the group level. In this work, the multilevel models most commonly discussed in the statistical literature are described, explaining how to fit these models using the SPSS program (any version as of the 11th) and how to interpret the outcomes of the analysis. Five particular models are described, fitted, and interpreted: (1) one-way analysis of variance with random effects, (2) regression analysis with means-as-outcomes, (3) one-way analysis of covariance with random effects, (4) regression analysis with random coefficients, and (5) regression analysis with means- and slopes-as-outcomes. All models are explained, trying to make them understandable to researchers in health and behaviour sciences.
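For readers working outside SPSS, model (1), the one-way analysis of variance with random effects, can be sketched in a few lines of Python. This is a minimal illustration, not the article's procedure; the simulated data, column names, and variance values are all invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
groups = np.repeat(np.arange(20), 15)                     # 20 groups, 15 cases each
score = 50 + rng.normal(0, 3, 20)[groups] + rng.normal(0, 5, groups.size)
df = pd.DataFrame({"score": score, "group": groups})

model = smf.mixedlm("score ~ 1", df, groups=df["group"])  # random intercept per group
result = model.fit(reml=True)
print(result.summary())   # 'Group Var' row gives the between-group variance component
```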
Nishith, Pallavi; Resick, Patricia A.; Griffin, Michael G.
2010-01-01
Curve estimation techniques were used to identify the pattern of therapeutic change in female rape victims with posttraumatic stress disorder (PTSD). Within-session data on the Posttraumatic Stress Disorder Symptom Scale were obtained, in alternate therapy sessions, on 171 women. The final sample of treatment completers included 54 prolonged exposure (PE) and 54 cognitive-processing therapy (CPT) completers. For both PE and CPT, a quadratic function provided the best fit for the total PTSD, reexperiencing, and arousal scores. However, a difference in the line of best fit was observed for the avoidance symptoms. Although a quadratic function still provided a better fit for the PE avoidance, a linear function was more parsimonious in explaining the CPT avoidance variance. Implications of the findings are discussed. PMID:12182271
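The curve-estimation step can be illustrated with a hedged Python sketch (not the authors' software or data): fit a linear and a quadratic polynomial to within-treatment symptom scores and compare them by R². The session scores below are invented.

```python
import numpy as np

session = np.arange(1, 7)                                # alternate therapy sessions
pss = np.array([30.0, 27.5, 22.0, 17.0, 14.5, 13.8])     # made-up symptom scores

for deg, label in [(1, "linear"), (2, "quadratic")]:
    coef = np.polyfit(session, pss, deg)                 # least-squares polynomial fit
    resid = pss - np.polyval(coef, session)
    ss_res = float(resid @ resid)
    r2 = 1 - ss_res / float(((pss - pss.mean()) ** 2).sum())
    print(f"{label}: R^2 = {r2:.3f}")
```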
Fit Analysis of Different Framework Fabrication Techniques for Implant-Supported Partial Prostheses.
Spazzin, Aloísio Oro; Bacchi, Atais; Trevisani, Alexandre; Farina, Ana Paula; Dos Santos, Mateus Bertolini
2016-01-01
This study evaluated the vertical misfit of implant-supported frameworks made using different techniques to obtain passive fit. Thirty three-unit fixed partial dentures were fabricated in cobalt-chromium alloy (n = 10) using three fabrication methods: one-piece casting, framework cemented on prepared abutments, and laser welding. The vertical misfit between the frameworks and the abutments was evaluated with an optical microscope using the single-screw test. Data were analyzed using one-way analysis of variance and Tukey test (α = .05). The one-piece casted frameworks presented significantly higher vertical misfit values than those found for framework cemented on prepared abutments and laser welding techniques (P < .001 and P < .003, respectively). Laser welding and framework cemented on prepared abutments are effective techniques to improve the adaptation of three-unit implant-supported prostheses. These techniques presented similar fit.
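The statistical pipeline described here (one-way ANOVA followed by Tukey's range test at alpha = .05) is straightforward to reproduce. The sketch below uses made-up misfit values, not the study's measurements.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative vertical-misfit values (micrometres), n = 10 per technique.
casting  = np.array([95, 102, 88, 110, 99, 105, 92, 101, 97, 108], float)
cemented = np.array([48, 55, 51, 46, 60, 53, 49, 57, 52, 50], float)
welded   = np.array([50, 47, 58, 54, 49, 56, 52, 45, 59, 51], float)

print(f_oneway(casting, cemented, welded))              # overall F test
values = np.concatenate([casting, cemented, welded])
groups = ["casting"] * 10 + ["cemented"] * 10 + ["welded"] * 10
print(pairwise_tukeyhsd(values, groups, alpha=0.05))    # pairwise comparisons
```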
NASA Technical Reports Server (NTRS)
Amling, G. E.; Holms, A. G.
1973-01-01
A computer program is described that performs a statistical multiple-decision procedure called chain pooling. It uses a number of mean squares assigned to error variance, with the assignment conditioned on the relative magnitudes of the mean squares. Model selection is done according to user-specified levels of type 1 or type 2 error probabilities.
ERIC Educational Resources Information Center
Ottley, Jennifer Riggie; Ferron, John M.; Hanline, Mary Frances
2016-01-01
The purpose of this study was to explain the variability in data collected from a single-case design study and to identify predictors of communicative outcomes for children with developmental delays or disabilities (n = 4). Using SAS® University Edition, we fit multilevel models with time nested within children. Children's level of baseline…
ERIC Educational Resources Information Center
Jackson, Dan
2013-01-01
Statistical inference is problematic in the common situation in meta-analysis where the random effects model is fitted to just a handful of studies. In particular, the asymptotic theory of maximum likelihood provides a poor approximation, and Bayesian methods are sensitive to the prior specification. Hence, less efficient, but easily computed and…
A Case for Transforming the Criterion of a Predictive Validity Study
ERIC Educational Resources Information Center
Patterson, Brian F.; Kobrin, Jennifer L.
2011-01-01
This study presents a case for applying a transformation (Box and Cox, 1964) of the criterion used in predictive validity studies. The goals of the transformation were to better meet the assumptions of the linear regression model and to reduce the residual variance of fitted (i.e., predicted) values. Using data for the 2008 cohort of first-time,…
The Janus face of Darwinian competition
Hintze, Arend; Phillips, Nathaniel; Hertwig, Ralph
2015-01-01
Without competition, organisms would not evolve any meaningful physical or cognitive abilities. Competition can thus be understood as the driving force behind Darwinian evolution. But does this imply that more competitive environments necessarily evolve organisms with more sophisticated cognitive abilities than do less competitive environments? Or is there a tipping point at which competition does more harm than good? We examine the evolution of decision strategies among virtual agents performing a repetitive sampling task in three distinct environments. The environments differ in the degree to which the actions of a competitor can affect the fitness of the sampling agent, and in the variance of the sample. Under weak competition, agents evolve decision strategies that sample often and make accurate decisions, which not only improve their own fitness, but are good for the entire population. Under extreme competition, however, the dark side of the Janus face of Darwinian competition emerges: Agents are forced to sacrifice accuracy for speed and are prevented from sampling as often as higher variance in the environment would require. Modest competition is therefore a good driver for the evolution of cognitive abilities and of the population as a whole, whereas too much competition is devastating. PMID:26354182
Italian version of the Task and Ego Orientation in physical education questionnaire.
Bortoli, Laura; Robazza, Claudio
2005-12-01
The 1992 Task and Ego Orientation in Sport Questionnaire developed by Duda and Nicholls was modified by Walling and Duda in 1995 to assess task and ego orientation in physical education. The modified version was translated into Italian and administered to 1,547 students, 786 girls and 761 boys ages 14 to 19 years, to examine the factor structure. To evaluate the goodness of fit of the expected two-factor solution as in the original questionnaire, confirmatory factor analysis was conducted on four samples of boys and girls of two classes of age (14-16 and 17-19 years). Across sex and age, chi-squared/df ratios were less than 5.0, fit indices (GFI, NNFI, and CFI) not less than .90, and root mean square error of approximation (RMSEA) below .10. Thus, the two-factor solution of the questionnaire was supported. In the total sample, the two scales showed good internal consistency, with Cronbach alpha values of .92 for the Ego factor and .83 for the Task factor. The Ego factor accounted for 34.1% of variance and the Task factor accounted for 21.0% of variance.
Partitioning degrees of freedom in hierarchical and other richly-parameterized models.
Cui, Yue; Hodges, James S; Kong, Xiaoxiao; Carlin, Bradley P
2010-02-01
Hodges & Sargent (2001) developed a measure of a hierarchical model's complexity, degrees of freedom (DF), that is consistent with definitions for scatterplot smoothers, interpretable in terms of simple models, and that enables control of a fit's complexity by means of a prior distribution on complexity. DF describes the complexity of the whole fitted model, but in general it is unclear how to allocate DF to individual effects. We give a new definition of DF for arbitrary normal-error linear hierarchical models, consistent with Hodges & Sargent's, that naturally partitions the n observations into DF for individual effects and for error. The new conception of an effect's DF is the ratio of the effect's modeled variance matrix to the total variance matrix. This gives a way to describe the sizes of different parts of a model (e.g., spatial clustering vs. heterogeneity), to place DF-based priors on smoothing parameters, and to describe how a smoothed effect competes with other effects. It also avoids difficulties with the most common definition of DF for residuals. We conclude by comparing DF to the effective number of parameters pD of Spiegelhalter et al. (2002). Technical appendices and a dataset are available online as supplemental materials.
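The partition can be illustrated numerically. Reading the "ratio of variance matrices" as the trace tr(V_effect V_total^-1) is our assumption based on the abstract's wording; with V_total equal to the sum of the effect and error variance matrices, the pieces then sum to n by construction, which matches the partition property described above.

```python
import numpy as np

# Assumed reading: DF_j = tr(V_j @ inv(V_total)), with
# V_total = sum_j V_j + sigma2 * I, so the DFs sum to n.
n = 50
rng = np.random.default_rng(0)
Z = rng.standard_normal((n, 8))
V_effect = 2.0 * Z @ Z.T / 8            # modeled variance of one random effect
sigma2 = 1.0
V_total = V_effect + sigma2 * np.eye(n)

Vinv = np.linalg.inv(V_total)
df_effect = np.trace(V_effect @ Vinv)
df_error = np.trace(sigma2 * Vinv)
print(df_effect, df_error, df_effect + df_error)   # last value equals n
```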
How neglect and punitiveness influence emotion knowledge.
Sullivan, Margaret Wolan; Carmody, Dennis P; Lewis, Michael
2010-06-01
To explore whether punitive parenting styles contribute to early-acquired emotion knowledge deficits observable in neglected children, we observed 42 preschool children's emotion knowledge, expression recognition time, and IQ. The children's mothers completed the Parent-Child Conflict Tactics Scales to assess the recent use of three types of discipline strategies (nonviolent, physically punitive, and psychological aggression), as well as neglectful parenting. Fifteen of the children were identified as neglected by Child Protective Services (CPS) reports; 27 children had no record of CPS involvement and served as the comparison group. There were no differences between the neglect and comparison groups in the demographic factors of gender, age, home language, minority status, or public assistance, nor on IQ. Hierarchical multiple regression modeling showed that neglect significantly predicted emotion knowledge. The addition of IQ contributed a significant amount of additional variance to the model and maintained the fit. Adding parental punitiveness in the final stage contributed little additional variance and did not significantly improve the fit. Thus, deficits in children's emotion knowledge may be due primarily to lower IQ or neglect. IQ was unrelated to speed of emotion recognition. Punitiveness did not directly contribute to emotion knowledge deficits but appeared in exploratory analysis to be related to speed of emotion recognition.
Mamen, Asgeir; Fredriksen, Per Morten
2018-05-01
As children's fitness continues to decline, frequent and systematic monitoring of fitness is important. Easy-to-use and low-cost methods with acceptable accuracy are essential in screening situations. This study aimed to investigate how the measurements of body mass index (BMI), waist circumference (WC) and waist-to-height ratio (WHtR) relate to selected measurements of fitness in children. A total of 1731 children from grades 1 to 6 were selected who had a complete set of height, body mass, running performance, handgrip strength and muscle mass measurements. A composite fitness score was established from the sum of sex- and age-specific z-scores for the variables running performance, handgrip strength and muscle mass. This fitness z-score was compared to z-scores and quartiles of BMI, WC and WHtR using analysis of variance, linear regression and receiver operator characteristic analysis. The regression analysis showed that z-scores for BMI, WC and WHtR all were linearly related to the composite fitness score, with WHtR having the highest R² at 0.80. The correct classification of fit and unfit was relatively high for all three measurements. WHtR had the best prediction of fitness of the three, with an area under the curve of 0.92 (p < 0.001). BMI, WC and WHtR were all found to be feasible measurements, but WHtR had a higher precision in its classification into fit and unfit in this population.
Veale, Jaimie F
2016-04-01
Recalled childhood gender role/identity is a construct that is related to sexual orientation, abuse, and psychological health. The purpose of this study was to assess the factorial validity of a short version of Zucker et al.'s (2006) "Recalled Childhood Gender Identity/Gender Role Questionnaire" using confirmatory factor analysis and to test the stability of the factor structure across groups (measurement invariance). Six items of the questionnaire were completed online by 1929 participants from a variety of gender identity and sexual orientation groups. Models of the six items loading onto one factor had poor fit for the data. Items were removed for having a large proportion of error variance. Among birth-assigned females, a five-item model had good fit for the data, but there was evidence for differences in the scale's factor structure across gender identity, age, level of education, and country groups. Among birth-assigned males, the resulting four-item model did not account for all of the relationship between variables, and modeling for this resulted in a model that was almost saturated. This model also had evidence of measurement variance across gender identity and sexual orientation groups. The models had good reliability and factor score determinacy. These findings suggest that results of previous studies that have assessed recalled childhood gender role/identity may have been susceptible to construct bias due to measurement variance across these groups. Future studies should assess measurement invariance between groups they are comparing, and if it is not found the issue can be addressed by removing variant indicators and/or applying a partial invariance model.
NASA Astrophysics Data System (ADS)
Kuai, Zi-Xiang; Liu, Wan-Yu; Zhu, Yue-Min
2017-11-01
The aim of this work was to investigate the effect of multiple perfusion components on the pseudo-diffusion coefficient D* in the bi-exponential intravoxel incoherent motion (IVIM) model. Simulations were first performed to examine how the presence of multiple perfusion components influences D*. The real data of livers (n = 31), spleens (n = 31) and kidneys (n = 31) of 31 volunteers were then acquired using DWI for an in vivo study, and the number of perfusion components in these tissues was determined together with their perfusion fraction and D*, using an adaptive multi-exponential IVIM model. Finally, the bi-exponential model was applied to the real data and the mean, standard deviation and coefficient of variation of D* as well as the fitting residual were calculated over the 31 volunteers for each of the three tissues and compared between them. The results of both the simulations and the in vivo study showed that, for the bi-exponential IVIM model, both the variance of D* and the fitting residual tended to increase when the number of perfusion components was increased or when the difference between perfusion components became large. In addition, it was found that the kidney presented the fewest perfusion components among the three tissues. The present study demonstrated that multi-component perfusion is a main factor that causes high variance of D*, and the bi-exponential model should be used only when the tissues under investigation have few perfusion components, for example the kidney.
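A hedged sketch of the bi-exponential IVIM fit referred to above, assuming the standard two-compartment signal model S(b)/S0 = f·exp(-b·D*) + (1-f)·exp(-b·D); the b-values, parameter values, and noise level are invented, not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, Dstar, D):
    # f: perfusion fraction; Dstar: pseudo-diffusion; D: tissue diffusion
    return f * np.exp(-b * Dstar) + (1 - f) * np.exp(-b * D)

b = np.array([0, 10, 20, 40, 80, 160, 320, 640, 800], dtype=float)  # s/mm^2
rng = np.random.default_rng(1)
signal = ivim(b, 0.25, 0.05, 0.0012) + 0.005 * rng.standard_normal(b.size)

popt, pcov = curve_fit(ivim, b, signal, p0=(0.2, 0.02, 0.001),
                       bounds=([0, 0, 0], [1, 1, 0.01]))
print("f = %.3f  D* = %.4f  D = %.5f" % tuple(popt))
```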
An alternative to the breeder's and Lande's equations.
Houchmandzadeh, Bahram
2014-01-10
The breeder's equation is a cornerstone of quantitative genetics, widely used in evolutionary modeling. Denoting the mean phenotype in the parental, selected-parent, and progeny populations by E(Z0), E(ZW), and E(Z1), this equation relates the response to selection R = E(Z1) - E(Z0) to the selection differential S = E(ZW) - E(Z0) through a simple proportionality relation R = h²S, where the heritability coefficient h² is a simple function of the genotype and environment variances. The validity of this relation relies strongly on the normal (Gaussian) distribution of the parent genotype, which is an unobservable quantity and cannot be ascertained. In contrast, we show here that if the fitness (or selection) function is Gaussian with mean μ, an alternative, exact linear equation of the form R' = j²S' can be derived, regardless of the parental genotype distribution. Here R' = E(Z1) - μ and S' = E(ZW) - μ stand for the mean phenotypic lag with respect to the mean of the fitness function in the offspring and selected populations. The proportionality coefficient j² is a simple function of the selection-function and environment variances, but does not contain the genotype variance. To demonstrate this, we derive the exact functional relation between the mean phenotype in the selected and the offspring populations and deduce all cases that lead to a linear relation between them. These results generalize naturally to the concept of the G matrix and the multivariate Lande's equation Δz̄ = GP⁻¹S. The linearity coefficients of the alternative equation are not changed by Gaussian selection.
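Written out in the abstract's own notation (using only the quantities defined above; μ is the mean of the Gaussian fitness function), the two proportionality relations being compared are:

```latex
\begin{align}
  R  &= E(Z_1) - E(Z_0) = h^2 S,  & S  &= E(Z_W) - E(Z_0), \\
  R' &= E(Z_1) - \mu    = j^2 S', & S' &= E(Z_W) - \mu,
\end{align}
% h^2 involves the (unobservable) genotype variance, whereas j^2
% involves only the selection-function and environment variances.
```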
Burns, Melissa K; Andeway, Kathleen; Eppenstein, Paula; Ruroede, Kathleen
2014-06-01
This study was designed to establish balance parameters for the Nintendo® (Redmond, WA) "Wii Fit™" Balance Board system with three common games, in a sample of healthy adults, and to evaluate the balance measurement reproducibility with separation by age. This was a prospective cohort study analysed with multivariate analysis of variance. Seventy-five participants who satisfied all inclusion criteria and completed an informed consent were enrolled. Participants were grouped into age ranges: 21-35 years (n=24), 36-50 years (n=24), and 51-65 years (n=27). Each participant completed the following games three consecutive times, in a randomized order, during one session: "Balance Bubble" (BB) for distance and duration, "Tight Rope" (TR) for distance and duration, and "Center of Balance" (COB) on the left and right sides. COB distributed weight was fairly symmetrical across all subjects and trials; therefore, no influence on or interaction with other "Wii Fit" measurements was assumed. Homogeneity of variance statistics indicated the assumption of distribution normality of the dependent variables (rates) was tenable. The multivariate analysis of variance included dependent variables BB and TR rates (distance divided by duration to complete) with age group and trials as the independent variables. The BB rate was statistically significant (F=4.725, P<0.005), but not the TR rate. The youngest group's BB rate was significantly larger than those of the other two groups. "Wii Fit" can discriminate among age groups across trials. The results show promise as a viable tool to measure balance and distance across time (speed) and center of balance distribution.
Tay, Cheryl Sihui; Sterzing, Thorsten; Lim, Chen Yen; Ding, Rui; Kong, Pui Wah
2017-05-01
This study examined (a) the strength of four individual footwear perception factors to influence the overall preference of running shoes and (b) whether these perception factors satisfied the nonmulticollinear assumption in a regression model. Running footwear must fulfill multiple functional criteria to satisfy its potential users. Footwear perception factors, such as fit and cushioning, are commonly used to guide shoe design and development, but it is unclear whether running-footwear users are able to differentiate one factor from another. One hundred casual runners assessed four running shoes on a 15-cm visual analogue scale for four footwear perception factors (fit, cushioning, arch support, and stability) as well as for overall preference during a treadmill running protocol. Diagnostic tests showed an absence of multicollinearity between factors, where values for tolerance ranged from .36 to .72, corresponding to variance inflation factors of 2.8 to 1.4. The multiple regression model of these four footwear perception variables accounted for 77.7% to 81.6% of variance in overall preference, with each factor explaining a unique part of the total variance. Casual runners were able to rate each footwear perception factor separately, thus assigning each factor a true potential to improve overall preference for the users. The results also support the use of a multiple regression model of footwear perception factors to predict overall running shoe preference. Regression modeling is a useful tool for running-shoe manufacturers to more precisely evaluate how individual factors contribute to the subjective assessment of running footwear.
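The collinearity diagnostics reported above (tolerance and its reciprocal, the variance inflation factor) can be sketched as follows; the simulated ratings and column names are placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)
base = rng.normal(size=400)                 # shared component induces mild correlation
df = pd.DataFrame({
    "fit":          base + rng.normal(0, 1.0, 400),
    "cushioning":   base + rng.normal(0, 1.2, 400),
    "arch_support": rng.normal(size=400),
    "stability":    base + rng.normal(0, 1.5, 400),
})
X = sm.add_constant(df)
for i, name in enumerate(X.columns):
    if name != "const":
        vif = variance_inflation_factor(X.values, i)
        print(f"{name}: VIF = {vif:.2f}, tolerance = {1 / vif:.2f}")
```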
NASA Technical Reports Server (NTRS)
Clancy, R. T.; Lee, S. W.; Gladstone, G. R.; McMillan, W. W.; Rousch, T.
1995-01-01
We propose key modifications to the Toon et al. (1977) model of the particle size distribution and composition of Mars atmospheric dust, based on a variety of spacecraft and wavelength observations of the dust. A much broader (r_eff variance ~ 0.8 μm), smaller particle size (r_mode ~ 0.02 μm) distribution coupled with a "palagonite-like" composition is argued to fit the complete ultraviolet-to-30-μm absorption properties of the dust better than the montmorillonite-basalt (r_eff variance = 0.4 μm, r_mode = 0.40 μm) dust model of Toon et al. Mariner 9 infrared interferometer spectrometer (IRIS) spectra of high atmospheric dust opacities during the 1971-1972 Mars global dust storm are analyzed in terms of the Toon et al. dust model, and a Hawaiian palagonite sample with two different size distribution models incorporating smaller dust particle sizes. Viking Infrared Thermal Mapper (IRTM) emission-phase-function (EPF) observations at 9 μm are analyzed to retrieve 9-μm dust opacities coincident with solar band dust opacities obtained from the same EPF sequences. These EPF dust opacities provide an independent measurement of the visible/9-μm extinction opacity ratio (≥ 2) for Mars atmospheric dust, which is consistent with a previous measurement by Martin (1986). Model values for the visible/9-μm opacity ratio and the ultraviolet and visible single-scattering albedos are calculated for the palagonite model with the smaller particle size distributions and compared to the same properties for the Toon et al. model of dust. The montmorillonite model of the dust is found to fit the detailed shape of the dust 9-μm absorption well. However, it predicts structured, deep absorptions at 20 μm which are not observed and requires a separate ultraviolet-visible absorbing component to match the observed behavior of the dust in this wavelength region. The modeled palagonite does not match the 8- to 9-μm absorption presented by the dust in the IRIS spectra, probably due to its low SiO2 content (31%). However, it does provide consistent levels of ultraviolet/visible absorption, 9- to 12-μm absorption, and a lack of structured absorption at 20 μm. The ratios of dust extinction opacities at visible, 9-μm, and 30-μm wavelengths are strongly affected by the dust particle size distribution. The Toon et al. dust size distribution (r_mode = 0.40 μm, r_eff variance = 0.4 μm, r_cwmu = 2.7 μm) predicts the correct ratio of the 9- to 30-μm opacity, but underpredicts the visible/9-μm opacity ratio considerably (1 versus ≥ 2). A similar particle distribution width with smaller particle sizes (r_mode = 0.17 μm, r_eff variance = 0.4 μm, r_cwmu = 1.2 μm) will fit the observed visible/9-μm opacity ratio, but overpredicts the observed 9-μm/30-μm opacity ratio. A smaller and much broader particle size distribution (r_mode = 0.02 μm, r_eff variance = 0.8 μm, r_cwmu = 1.8 μm) can fit both dust opacity ratios. Overall, the nanocrystalline structure of palagonite coupled with a smaller, broader distribution of dust particle sizes provides a more consistent fit than the Toon et al. model of the dust to the IRIS spectra, the observed visible/9-μm dust opacity ratio, the Phobos occultation measurements of dust particle sizes, and the weakness of surface near-IR absorptions expected for clay minerals.
Nanotechnology: The Incredible Invisible World
ERIC Educational Resources Information Center
Roberts, Amanda S.
2011-01-01
The concept of nanotechnology was first introduced in 1959 by Richard Feynman at a meeting of the American Physical Society. Nanotechnology opens the door to an exciting new science/technology/engineering field. The possibilities for the uses of this technology should inspire the imagination to think big. Many are already pursuing such feats…
Laboratory for Computer Science Progress Report 18, July 1980-June 1981,
1983-04-01
group in collaboration with Rolf Landauer of IBM Research. Some of the most conspicuous participants: Dyson, Feynman, Wheeler, Landauer, Keyes, Bennett... Sheldon A., Data Model Equivalence, December 1978, AD A062-753; TM-119, Shamir, Adi and Richard E. Zippel, On the Security of the Merkle-Hellman
Perturbative test of exact vacuum expectation values of local fields in affine Toda theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Changrim; Baseilhac, P.; Kim, Chanju
Vacuum expectation values of local fields for all recently proposed dual pairs of nonsimply laced affine Toda field theories are checked against perturbative analysis. The computations, based on Feynman diagram expansion, are performed up to the two-loop level. We obtain good agreement.
Pilot Study for OCT Guided Design and Fit of a Prosthetic Device for Treatment of Corneal Disease.
Le, Hong-Gam T; Tang, Maolong; Ridges, Ryan; Huang, David; Jacobs, Deborah S
2012-01-01
Purpose. To assess optical coherence tomography (OCT) for guiding design and fit of a prosthetic device for corneal disease. Methods. A prototype time domain OCT scanner was used to image the anterior segment of patients fitted with large diameter (18.5-20 mm) prosthetic devices for corneal disease. OCT images were processed and analyzed to characterize corneal diameter, corneal sagittal height, scleral sagittal height, scleral toricity, and alignment of device. Within-subject variance of OCT-measured parameters was evaluated. OCT-measured parameters were compared with device parameters for each eye fitted. OCT image correspondence with ocular alignment and clinical fit was assessed. Results. Six eyes in 5 patients were studied. OCT measurement of corneal diameter (coefficient of variation, CV = 0.76%), cornea sagittal height (CV = 2.06%), and scleral sagittal height (CV = 3.39%) is highly repeatable within each subject. OCT image-derived measurements reveal strong correlation between corneal sagittal height and device corneal height (r = 0.975) and modest correlation between scleral and on-eye device toricity (r = 0.581). Qualitative assessment of a fitted device on OCT montages reveals correspondence with slit lamp images and clinical assessment of fit. Conclusions. OCT imaging of the anterior segment is suitable for custom design and fit of large diameter (18.5-20 mm) prosthetic devices used in the treatment of corneal disease.
Predicting K0Λ photoproduction observables by using the multipole approach
NASA Astrophysics Data System (ADS)
Mart, T.; Rusli, A.
2017-12-01
We present an isobar model for kaon photoproduction on the proton, γp → K⁺Λ, that can nicely reproduce the available experimental data from threshold up to W = 2.0 GeV. The background amplitude of the model is constructed from a covariant Feynman diagrammatic method, whereas the resonance one is formulated by using the multipole approach. All unknown parameters in both background and resonance amplitudes are extracted by adjusting the calculated observables to experimental data. With the help of SU(3) isospin symmetry and some information obtained from the Particle Data Group we estimate the cross section and polarization observables for the neutral kaon photoproduction on the neutron, γn → K⁰Λ. The result indicates no sharp peak in the K⁰Λ total cross section. The predicted differential cross section exhibits resonance structures only at cos θ = -1. To obtain sizable observables the present work recommends measurement of the K⁰Λ cross section with W ≳ 1.70 GeV, whereas for the recoiled Λ polarization measurement with W ≈ 1.65-1.90 GeV would be advised, since the predictions of existing models show a large variance at this kinematics. The predicted electric and magnetic multipoles are found to be mostly different from those obtained in previous works. For W = 1.75 and 1.95 GeV it is found that most of the single and double polarization observables demonstrate large asymmetries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talamo, Alberto; Gohar, Yousry
2016-06-01
This report describes different methodologies to calculate the effective neutron multiplication factor of subcritical assemblies by processing the neutron detector signals using MATLAB scripts. The subcritical assembly can be driven either by a spontaneous fission neutron source (e.g. californium) or by a neutron source generated from the interactions of accelerated particles with target materials. In the latter case, when the particle accelerator operates in a pulsed mode, the signals are typically stored into two files. One file contains the times when neutron reactions occur and the other contains the times when the neutron pulses start. In both files, the time is given by an integer representing the number of time bins since the start of the counting. These signal files are used to construct the neutron count distribution from a single neutron pulse. The built-in functions of MATLAB are used to calculate the effective neutron multiplication factor through the application of the prompt decay fitting or the area method to the neutron count distribution. If the subcritical assembly is driven by a spontaneous fission neutron source, then the effective multiplication factor can be evaluated either using the prompt neutron decay constant obtained from Rossi or Feynman distributions or the Modified Source Multiplication (MSM) method.
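As a rough illustration of the Feynman-distribution processing mentioned above (our own minimal Python sketch, not the report's MATLAB scripts), the excess variance-to-mean ratio can be computed from a list of detection times using consecutive non-overlapping gates:

```python
import numpy as np

def feynman_y(times, gate):
    """Excess variance-to-mean ratio Y = Var/Mean - 1 of the counts
    collected in consecutive, non-overlapping gates of width `gate`."""
    t = np.sort(np.asarray(times, dtype=float))
    n_gates = int((t[-1] - t[0]) / gate)
    edges = t[0] + gate * np.arange(n_gates + 1)
    counts, _ = np.histogram(t, bins=edges)
    return counts.var(ddof=1) / counts.mean() - 1.0

# A purely Poisson pulse train should give Y close to 0 at any gate width.
rng = np.random.default_rng(4)
poisson_times = np.cumsum(rng.exponential(1e-4, 200_000))  # ~10 kcps for ~20 s
print(feynman_y(poisson_times, gate=1e-3))                 # close to zero
```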
Pimenta, Manuel Antonio; Frasca, Luis Carlos; Lopes, Ricardo; Rivaldo, Elken
2015-08-01
Prosthetic crown fit to the walls of the tooth preparation may vary depending on the material used for crown fabrication. The purpose of this study was to compare the marginal and internal fit of crown copings fabricated from 3 different materials. The selected materials were zirconia (ZirkonZahn system, group Y-TZP), lithium disilicate (IPS e.max Press system, group LSZ), and nickel-chromium alloy (lost-wax casting, group NiCr). Five specimens of each material were seated on standard dies. An x-ray microtomography (micro-CT) device was used to obtain volumetric reconstructions of each specimen. Points for fit measurement were located in Adobe Photoshop, and measurements were obtained in the CTAn SkyScan software environment. Marginal fit was measured at 4 points and internal fit at 9 points in each coping. Mean measurements from the 3 groups were compared by analysis of variance (ANOVA) at the 5% significance level, and between-group differences were assessed with the Tukey range test. The nickel-chromium alloy exhibited the best marginal fit overall, comparable with zirconia and significantly different from lithium disilicate. Lithium disilicate exhibited the lowest mean values for internal fit, similar to zirconia and significantly different from the nickel-chromium alloy. The marginal and internal fit parameters of the 3 tested materials were within a clinically acceptable range.
The Social Cognitive Model of Job Satisfaction among Teachers: Testing and Validation
ERIC Educational Resources Information Center
Badri, Masood A.; Mohaidat, Jihad; Ferrandino, Vincent; El Mourad, Tarek
2013-01-01
The study empirically tests an integrative model of work satisfaction in a sample of 5,022 teachers in Abu Dhabi in the United Arab Emirates. The study provided more support for the Lent and Brown (2006) model. Results revealed that this model was a strong fit for the data and accounted for 82% of the variance in work…
Agent-based Approaches to Dynamic Team Simulation
2008-09-01
architects as artistic or of clerks as conventional. Steiner (1972) proposed a functional taxonomy recently adopted by Barrick, Stewart, Neubert ... Paul (1998) again found agreeableness to account for 8 percent of the variance in measures of fit to an organization. There appears to be better
Combining and Analyzing the Tanker and Aircrew Scheduling Heuristics
2003-03-01
Table 19. Breusch-Pagan Test Results for Number of Crews Required; Table 20. Summary of Fit... residuals. This is subjectively tested by plotting the predicted response values against the residuals and affirmed by the Breusch-Pagan test, which is an... visible in the data, suggesting that constant variance is satisfied. The objective test to verify this is the Breusch-Pagan test, which calculates a p
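The Breusch-Pagan check described in this entry is a one-liner in most statistics packages; a minimal Python sketch on simulated homoscedastic data follows (all names and values are ours, not the report's).

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(1)
x = rng.uniform(1, 10, 200)
y = 2.0 + 0.5 * x + rng.normal(0, 1, 200)   # errors with constant variance
X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()

lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(ols.resid, X)
print(f"Breusch-Pagan LM p-value: {lm_pvalue:.3f}")  # large p: constant variance tenable
```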
Content Specificity of Expectancy Beliefs and Task Values in Elementary Physical Education
Chen, Ang; Martin, Robert; Ennis, Catherine D.; Sun, Haichun
2015-01-01
The curriculum may superimpose a content-specific context that mediates motivation (Bong, 2001). This study examined content specificity of the expectancy-value motivation in elementary school physical education. Students’ expectancy beliefs and perceived task values from a cardiorespiratory fitness unit, a muscular fitness unit, and a traditional skill/game unit were analyzed using constant comparison coding procedures, multivariate analysis of variance, χ², and correlation analyses. There was no difference in the intrinsic interest value among the three content conditions. Expectancy belief, attainment, and utility values were significantly higher for the cardiorespiratory fitness curriculum. Correlations differentiated among the expectancy-value components of the content conditions, providing further evidence of content specificity in the expectancy-value motivation process. The findings suggest that expectancy beliefs and task values should be incorporated in the theoretical platform for curriculum development based on the learning outcomes that can be specified with enhanced motivation effect. PMID:18664044
Sewall Wright's equation Δq = (q(1-q) ∂w̄/∂q)/2w̄.
Edwards, A W
2000-02-01
An equation of Sewall Wright's expresses the change in the frequency of an allele under selection at a multiallelic locus as a function of the gradient of the mean fitness "surface" in the direction in which the relative proportions of the other alleles do not change. An attempt to derive this equation using conventional vector calculus shows that this description leads to a different equation and that the purported gradient in Wright's equation is not a gradient of the mean fitness surface except in the diallelic case, where the two equations are the same. It is further shown that if Fisher's angular transformation is applied to the diallelic case the genic variance is exactly equal to one-eighth of the square of the gradient of the mean fitness with respect to the transformed gene frequency.
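For concreteness, Wright's equation and the diallelic identity described above can be written out; taking Fisher's angular transformation as θ = arcsin√q is our assumption about the notation, stated here only to make the identity checkable.

```latex
% Wright's equation, with \bar{w} the mean fitness:
\begin{equation}
  \Delta q = \frac{q(1-q)}{2\bar{w}} \frac{\partial \bar{w}}{\partial q}.
\end{equation}
% In the diallelic case, with \theta = \arcsin\sqrt{q}
% (so that dq/d\theta = 2\sqrt{q(1-q)}), the genic variance satisfies
\begin{equation}
  V_g = \tfrac{1}{8} \left( \frac{d\bar{w}}{d\theta} \right)^{2}.
\end{equation}
```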
AIDS-related health behavior: coping, protection motivation, and previous behavior.
Van der Velde, F W; Van der Pligt, J
1991-10-01
The purpose of this study was to examine Rogers' protection motivation theory and aspects of Janis and Mann's conflict theory in the context of AIDS-related health behavior. Subjects were 84 heterosexual men and women and 147 homosexual men with multiple sexual partners; LISREL's path-analysis techniques were used to evaluate the goodness of fit of the structural equation models. Protection motivation theory did fit the data but had considerably more explanatory power for heterosexual than for homosexual subjects (49 vs. 22%, respectively). When coping styles were added, different patterns of findings were found among both groups. Adding variables such as social norms and previous behavior increased the explained variance to 73% for heterosexual subjects and to 44% for homosexual subjects. It was concluded that although protection motivation theory did fit the data fairly adequately, expanding the theory with other variables--especially those related to previous behavior--could improve our understanding of AIDS-related health behavior.
Development and validation of the Simulation Learning Effectiveness Scale for nursing students.
Pai, Hsiang-Chu
2016-11-01
To develop and validate the Simulation Learning Effectiveness Scale, which is based on Bandura's social cognitive theory. A simulation programme is a significant teaching strategy for nursing students. Nevertheless, there are few evidence-based instruments that validate the effectiveness of simulation learning in Taiwan. This is a quantitative descriptive design. In Study 1, a nonprobability convenience sample of 151 student nurses completed the Simulation Learning Effectiveness Scale. Exploratory factor analysis was used to examine the factor structure of the instrument. In Study 2, which involved 365 student nurses, confirmatory factor analysis and structural equation modelling were used to analyse the construct validity of the Simulation Learning Effectiveness Scale. In Study 1, exploratory factor analysis yielded three components: self-regulation, self-efficacy and self-motivation. The three factors explained 29·09, 27·74 and 19·32% of the variance, respectively. The final 12-item instrument with the three factors explained 76·15% of variance. Cronbach's alpha was 0·94. In Study 2, confirmatory factor analysis identified a second-order factor termed Simulation Learning Effectiveness Scale. Goodness-of-fit indices showed an acceptable fit overall with the full model (χ²/df(51) = 3·54, comparative fit index = 0·96, Tucker-Lewis index = 0·95 and standardised root-mean-square residual = 0·035). In addition, teacher's competence was found to encourage learning, and self-reflection and insight were significantly and positively associated with the Simulation Learning Effectiveness Scale. Teacher's competence in encouraging learning also was significantly and positively associated with self-reflection and insight. Overall, these variables explained 21·9% of the variance in the student's learning effectiveness. The Simulation Learning Effectiveness Scale is a reliable and valid means to assess simulation learning effectiveness for nursing students. It can be used to examine nursing students' learning effectiveness and serve as a basis to improve students' learning efficiency through simulation programmes. Future implementation research that focuses on the relationship between learning effectiveness and nursing competence in nursing students is recommended.
Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation
NASA Astrophysics Data System (ADS)
Demir, Uygar; Toker, Cenk; Çenet, Duygu
2016-07-01
Statistical analysis of the ionosphere, specifically the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained by looking at the statistical parameters. In the literature, there are publications which try to fit the histogram of TEC estimates to some well-known pdfs such as Gaussian, Exponential, etc. However, constraining a histogram to fit a function with a fixed shape will increase estimation error, and all the information extracted from such a pdf will continue to contain this error. In such techniques, it is highly likely to observe some artificial characteristics in the estimated pdf which are not present in the original data. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific form on the TEC. As a result, better pdf estimates that almost perfectly fit the observed TEC values can be obtained as compared to the techniques mentioned above. KDE is particularly good at representing tail probabilities and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values. The technique is applied to the ionosphere over Turkey, where the TEC values are estimated from GNSS measurements from the TNPGN-Active (Turkish National Permanent GNSS Network) network. This study is supported by TUBITAK 115E915 and Joint TUBITAK 114E092 and AS CR14/001 projects.
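A minimal sketch of the KDE step (scipy's Gaussian KDE on a stand-in sample; the gamma-distributed values are purely illustrative, not real TEC data):

```python
import numpy as np
from scipy.stats import gaussian_kde, kurtosis

tec = np.random.default_rng(3).gamma(shape=9.0, scale=2.0, size=5000)  # stand-in sample
kde = gaussian_kde(tec)                       # bandwidth via Scott's rule by default
grid = np.linspace(tec.min(), tec.max(), 512)
pdf = kde(grid)                               # smooth estimate, no imposed functional form

print("mean     =", tec.mean())
print("variance =", tec.var(ddof=1))
print("kurtosis =", kurtosis(tec))            # excess (Fisher) kurtosis
```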
Veerkamp, Roel F; Bouwman, Aniek C; Schrooten, Chris; Calus, Mario P L
2016-12-01
Whole-genome sequence data is expected to capture genetic variation more completely than common genotyping panels. Our objective was to compare the proportion of variance explained and the accuracy of genomic prediction by using imputed sequence data or preselected SNPs from a genome-wide association study (GWAS) with imputed whole-genome sequence data. Phenotypes were available for 5503 Holstein-Friesian bulls. Genotypes were imputed up to whole-genome sequence (13,789,029 segregating DNA variants) by using run 4 of the 1000 bull genomes project. The program GCTA was used to perform GWAS for protein yield (PY), somatic cell score (SCS) and interval from first to last insemination (IFL). From the GWAS, subsets of variants were selected and genomic relationship matrices (GRM) were used to estimate the variance explained in 2087 validation animals and to evaluate the genomic prediction ability. Finally, two GRM were fitted together in several models to evaluate the effect of selected variants that were in competition with all the other variants. The GRM based on full sequence data explained only marginally more genetic variation than that based on common SNP panels: for PY, SCS and IFL, genomic heritability improved from 0.81 to 0.83, 0.83 to 0.87 and 0.69 to 0.72, respectively. Sequence data also helped to identify more variants linked to quantitative trait loci and resulted in clearer GWAS peaks across the genome. The proportion of total variance explained by the selected variants combined in a GRM was considerably smaller than that explained by all variants (less than 0.31 for all traits). When selected variants were used, accuracy of genomic predictions decreased and bias increased. Although 35 to 42 variants were detected that together explained 13 to 19% of the total variance (18 to 23% of the genetic variance) when fitted alone, there was no advantage in using dense sequence information for genomic prediction in the Holstein data used in our study. Detection and selection of variants within a single breed are difficult due to long-range linkage disequilibrium. Stringent selection of variants resulted in more biased genomic predictions, although this might be due to the training population being the same dataset from which the selected variants were identified.
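For orientation, a genomic relationship matrix of the kind fitted above can be built from a genotype matrix in a few lines. The sketch uses VanRaden's first method as a stand-in for the GCTA GRM actually used in the study (the two constructions are closely related but not identical), and random toy genotypes rather than real data.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.integers(0, 3, size=(200, 5000)).astype(float)  # toy 0/1/2 genotype matrix
p = M.mean(axis=0) / 2.0                                # allele frequencies
Z = M - 2.0 * p                                         # centre each marker
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))             # genomic relationship matrix
print(G.shape, G.diagonal().mean())                     # diagonal ~1 on average
```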
Solar-cycle dependence of a model turbulence spectrum using IMP and ACE observations over 38 years
NASA Astrophysics Data System (ADS)
Burger, R. A.; Nel, A. E.; Engelbrecht, N. E.
2014-12-01
Ab initio modulation models require a number of turbulence quantities as input for any reasonable diffusion tensor. While turbulence transport models describe the radial evolution of such quantities, they in turn require observations in the inner heliosphere as input values. So far we have concentrated on solar minimum conditions (e.g. Engelbrecht and Burger 2013, ApJ), but are now looking at long-term modulation, which requires turbulence data over at least a solar magnetic cycle. As a start we analyzed 1-minute resolution data for the N-component of the magnetic field, from 1974 to 2012, covering about two solar magnetic cycles (initially using IMP and then ACE data). We assume a very simple three-stage power-law frequency spectrum, calculate the integral from the highest to the lowest frequency, and fit it to variances calculated with lags from 5 minutes to 80 hours. From the fit we then obtain not only the asymptotic variance at large lags, but also the spectral indices of the inertial and energy ranges, as well as the breakpoint between the inertial and energy ranges (bendover scale) and between the energy and cutoff ranges (cutoff scale). All values given here are preliminary. The cutoff range is a constraint imposed in order to ensure a finite energy density; the spectrum is forced to be either flat or to decrease with decreasing frequency in this range. Given that cosmic rays sample magnetic fluctuations over long periods in their transport through the heliosphere, we average the spectra over at least 27 days. We find that the variance of the N-component has a clear solar cycle dependence, with smaller values (~6 nT²) during solar minimum and larger values during solar maximum periods (~17 nT²), well correlated with the magnetic field magnitude (e.g. Smith et al. 2006, ApJ). Whereas the inertial range spectral index (-1.65 ± 0.06) does not show a significant solar cycle variation, the energy range index (-1.1 ± 0.3) seems to be anti-correlated with the variance (Bieber et al. 1993, JGR); both indices show close to normal distributions. In contrast, the variance (e.g. Burlaga and Ness, 1998, JGR), and both the bendover scale (see Ruiz et al. 2014, Solar Physics) and cutoff scale appear to be log-normal distributed.
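The fitting procedure described above can be sketched under our own simplifying assumptions: a three-stage power law S(f) that is flat below the cutoff frequency, and a model variance at lag L equal to the integral of S from 1/L up to the Nyquist frequency of 1-minute data. All parameter names and starting values below are illustrative, not the authors'.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

F_NY = 1.0 / 120.0  # Nyquist frequency of 1-minute data, in Hz

def spectrum(f, A, si, se, fb, fc):
    # inertial range above fb (slope si), energy range between fc and fb
    # (slope se), flat below fc so the energy density stays finite
    if f > fb:
        return A * (f / fb) ** si
    if f > fc:
        return A * (f / fb) ** se
    return A * (fc / fb) ** se

def model_variance(lags, A, si, se, fb, fc):
    # variance captured at lag L = integral of S(f) from 1/L to F_NY
    return np.array([quad(spectrum, 1.0 / L, F_NY,
                          args=(A, si, se, fb, fc))[0] for L in lags])

lags = np.geomspace(300.0, 80 * 3600.0, 20)  # 5 min to 80 h, in seconds
true = (5e3, -1.65, -1.1, 1e-4, 5e-6)
rng = np.random.default_rng(5)
var_obs = model_variance(lags, *true) * (1 + 0.05 * rng.standard_normal(lags.size))

popt, _ = curve_fit(model_variance, lags, var_obs,
                    p0=(4e3, -1.6, -1.0, 2e-4, 1e-5))
print("inertial index %.2f, energy range index %.2f" % (popt[1], popt[2]))
```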
Proof of a new colour decomposition for QCD amplitudes
Melia, Tom
2015-12-16
Recently, Johansson and Ochirov conjectured the form of a new colour decomposition for QCD tree-level amplitudes. This note provides a proof of that conjecture. The proof is based on ‘Mario World’ Feynman diagrams, which exhibit the hierarchical Dyck structure previously found to be very useful when dealing with multi-quark amplitudes.
Zanderighi, Giulia
2018-04-26
Modern QCD - Lecture 1. Starting from the QCD Lagrangian we will revisit some basic QCD concepts and derive fundamental properties like gauge invariance and isospin symmetry, and will discuss the Feynman rules of the theory. We will then focus on the gauge group of QCD and derive the Casimirs C_F and C_A and some useful color identities.
Differential equations for loop integrals in Baikov representation
NASA Astrophysics Data System (ADS)
Bosma, Jorrit; Larsen, Kasper J.; Zhang, Yang
2018-05-01
We present a proof that differential equations for Feynman loop integrals can always be derived in Baikov representation without involving dimension-shift identities. We moreover show that in a large class of two- and three-loop diagrams it is possible to avoid squared propagators in the intermediate steps of setting up the differential equations.
Exotic Gauge Bosons in the 331 Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero, D.; Ravinez, O.; Diaz, H.
We analyze the bosonic sector of the 331 model, which contains exotic leptons, quarks and bosons (E, J, U, V), in order to satisfy the weak gauge SU(3)_L invariance. We develop the Feynman rules of the entire kinetic bosonic sector, which will allow us to compute some of the Z(0)' decay modes.
Using and Applying Mathematics
ERIC Educational Resources Information Center
Knight, Rupert
2011-01-01
The Nobel prize winning physicist Richard Feynman (2007) famously enthused about "the pleasure of finding things out". In day-to-day classroom life, however, it is easy to lose and undervalue this pleasure in the process, as opposed to products, of mathematics. Finding things out involves a journey and is often where the learning takes place.…
Loopedia, a database for loop integrals
NASA Astrophysics Data System (ADS)
Bogner, C.; Borowka, S.; Hahn, T.; Heinrich, G.; Jones, S. P.; Kerner, M.; von Manteuffel, A.; Michel, M.; Panzer, E.; Papara, V.
2018-04-01
Loopedia is a new database at loopedia.org for information on Feynman integrals, intended to provide both bibliographic information as well as results made available by the community. Its bibliometry is complementary to that of INSPIRE or arXiv in the sense that it admits searching for integrals by graph-theoretical objects, e.g. its topology.
Methods and Strategies: Much Ado about Nothing
ERIC Educational Resources Information Center
Smith, P. Sean; Plumley, Courtney L.; Hayes, Meredith L.
2017-01-01
This column provides ideas and techniques to enhance your science teaching. This month's issue discusses how children think about the small-particle model of matter. What Richard Feynman referred to as the "atomic hypothesis" is perhaps more familiar to us as the small-particle model of matter. In its most basic form, the model states…
ERIC Educational Resources Information Center
Hecht, Eugene
2007-01-01
When Feynman wrote, "It is important to realize that in physics today, we have no knowledge of what energy is," he was recognizing that although we have expressions for various forms of energy, from kinetic to elastic, we seem to have no idea of what the all-encompassing notion of "energy" is. This paper addresses that issue, offering a definition…
NASA Astrophysics Data System (ADS)
Clegg, Brian
2018-04-01
Everybody knows that quantum physics is weird, right? Indeed, quantum physicist Richard Feynman once said in a lecture: "The theory of quantum electrodynamics describes Nature as absurd from the point of view of common sense." Beyond Weird: Why Everything You Thought You Knew About Quantum Physics is Different by Philip Ball presents a refreshing challenge to this viewpoint.
Path Integration on the Upper Half-Plane
NASA Astrophysics Data System (ADS)
Kubo, R.
1987-10-01
Feynman's path integral is considered on the Poincaré upper half-plane. It is shown that the fundamental solution to the heat equation ∂f/∂t = Δ_H f can be expressed in terms of a path integral. A simple relation between the path integral and the Selberg trace formula is discussed briefly.
Quantum space foam and string theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nekrasov, Nikita
2006-11-03
String theory is originally defined as a modification of the Feynman rules in perturbation theory. It contains gravity in its perturbative spectrum. We review some recent developments which demonstrate that nonperturbative effects of quantum gravity, such as spacetime foam, arise in string theory as well. Prepared for the proceedings of the 'Albert Einstein Century Conference', Paris, July 2005.
Stability and Response of Polygenic Traits to Stabilizing Selection and Mutation
de Vladar, Harold P.; Barton, Nick
2014-01-01
When polygenic traits are under stabilizing selection, many different combinations of alleles allow close adaptation to the optimum. If alleles have equal effects, all combinations that result in the same deviation from the optimum are equivalent. Furthermore, the genetic variance that is maintained by mutation–selection balance is 2μ/S per locus, where μ is the mutation rate and S the strength of stabilizing selection. In reality, alleles vary in their effects, making the fitness landscape asymmetric and complicating analysis of the equilibria. We show that the resulting genetic variance depends on the fraction of alleles near fixation, which contribute by 2μ/S, and on the total mutational effects of alleles that are at intermediate frequency. The interplay between stabilizing selection and mutation leads to a sharp transition: alleles with effects smaller than a threshold value of 2μ/S remain polymorphic, whereas those with larger effects are fixed. The genetic load in equilibrium is less than for traits of equal effects, and the fitness equilibria are more similar. We find that if the optimum is displaced, alleles with effects close to the threshold value sweep first, and their rate of increase is bounded by μS. Long-term response leads in general to well-adapted traits, unlike the case of equal effects that often end up at a suboptimal fitness peak. However, the particular peaks to which the populations converge are extremely sensitive to the initial states and to the speed of the shift of the optimum trait value. PMID:24709633
Ochoa-Meza, Gerardo; Sierra, Juan Carlos; Pérez-Rodrigo, Carmen; Aranceta Bartrina, Javier; Esparza-Del Villar, Óscar A
2014-11-24
To test the goodness of fit of a Motivation-Ability-Opportunity model (MAO-model) in explaining the observed variance in Mexican schoolchildren's preferences to eat fruit and daily fruit intake, and to evaluate the factorial invariance across gender and type of population (urban and semi-urban) in which children reside. A model with seven constructs was designed from a validated questionnaire to assess preferences, cognitive abilities, attitude, modelling, perceived barriers, accessibility at school, accessibility at home, and fruit intake frequency. The instrument was administered to a representative sample of 1434 schoolchildren in the 5th and 6th grades of primary school in a cross-sectional, ex post facto study conducted in 2013 in six cities of the State of Chihuahua, Mexico. The goodness-of-fit indexes were adequate for the MAO-model, which explained 39% of the variance in preference to eat fruit. The structure of the model showed very good factor-structure stability, and the dimensions of the scale were equivalent across the samples analyzed. The model, analyzed with structural equation modeling, was parsimonious and can be used to explain the variation in fruit intake of 10 to 12 year old Mexican schoolchildren. The structure of the model was strictly invariant across the samples analyzed and showed evidence of cross-validation. Finally, implications of modifying the model to fit data from school settings and guidelines for future research are discussed. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
Rasch Analysis of the 9-Item Shared Decision Making Questionnaire in Women With Breast Cancer.
Wu, Tzu-Yi; Chen, Cheng-Te; Huang, Yi-Jing; Hou, Wen-Hsuan; Wang, Jung-Der; Hsieh, Ching-Lin
2018-04-19
Shared decision making (SDM) is a best practice in healthcare that helps patients make optimal decisions; it is especially important for women diagnosed with breast cancer, who carry a heavy burden of long-term treatment. To promote successful SDM, it is crucial to assess the level of perceived involvement in SDM in women with breast cancer. The aims of this study were to apply Rasch analysis to examine the construct validity and person reliability of the 9-item Shared Decision Making Questionnaire (SDM-Q-9) in women with breast cancer. The construct validity of the SDM-Q-9 was confirmed when the items fit the Rasch model's assumptions of unidimensionality: (1) infit and outfit mean squares ranged from 0.6 to 1.4; (2) the unexplained variance of the first dimension of the principal component analysis was less than 20%. Person reliability was also calculated. A total of 212 participants were recruited in this study. Item 1 did not fit the model's assumptions and was deleted. The unidimensionality of the remaining 8 items (SDM-Q-8) was supported, with good item fit (infit and outfit mean squares ranging from 0.6 to 1.3) and very low unexplained variance of the first dimension (5.3%) of the principal component analysis. The person reliability of the SDM-Q-8 was 0.90. The SDM-Q-8 was unidimensional and had good person reliability in women with breast cancer. The SDM-Q-8 has shown its potential for assessing the level of perceived involvement in SDM in women with breast cancer for both research and clinical purposes.
NASA Astrophysics Data System (ADS)
Vlasiuk, Maryna; Frascoli, Federico; Sadus, Richard J.
2016-09-01
The thermodynamic, structural, and vapor-liquid equilibrium properties of neon are comprehensively studied using ab initio, empirical, and semi-classical intermolecular potentials and classical Monte Carlo simulations. Path integral Monte Carlo simulations for isochoric heat capacity and structural properties are also reported for two empirical potentials and one ab initio potential. The isobaric and isochoric heat capacities, thermal expansion coefficient, thermal pressure coefficient, isothermal and adiabatic compressibilities, Joule-Thomson coefficient, and the speed of sound are reported and compared with experimental data for the entire range of liquid densities from the triple point to the critical point. Lustig's thermodynamic approach is formally extended for temperature-dependent intermolecular potentials. Quantum effects are incorporated using the Feynman-Hibbs quantum correction, which results in significant improvement in the accuracy of predicted thermodynamic properties. The new Feynman-Hibbs version of the Hellmann-Bich-Vogel potential predicts the isochoric heat capacity to an accuracy of 1.4% over the entire range of liquid densities. It also predicts other thermodynamic properties more accurately than alternative intermolecular potentials.
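The quadratic Feynman-Hibbs correction used in such studies has a standard closed form, U_FH(r) = U(r) + (ħ²/24μkT)[U''(r) + 2U'(r)/r], with μ the reduced mass of the pair. A minimal sketch applying it to a Lennard-Jones pair potential follows; the neon-like parameters are illustrative stand-ins, not the Hellmann-Bich-Vogel potential of the paper.

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J/K

def lj(r, eps, sigma):
    """Lennard-Jones pair potential and its first two radial derivatives."""
    sr6 = (sigma / r)**6
    u = 4 * eps * (sr6**2 - sr6)
    du = 4 * eps * (-12 * sr6**2 + 6 * sr6) / r
    d2u = 4 * eps * (156 * sr6**2 - 42 * sr6) / r**2
    return u, du, d2u

def feynman_hibbs(r, T, eps, sigma, m):
    """Quadratic Feynman-Hibbs effective potential:
    U_FH = U + hbar^2 / (24 mu kB T) * (U'' + 2 U'/r),
    with mu = m/2 the reduced mass of an identical pair."""
    u, du, d2u = lj(r, eps, sigma)
    mu = m / 2.0
    return u + HBAR**2 / (24.0 * mu * KB * T) * (d2u + 2.0 * du / r)

# Illustrative neon-like parameters (not the potential used in the paper).
eps = 36.8 * KB            # well depth, J
sigma = 2.79e-10           # m
m = 20.18 * 1.66054e-27    # atomic mass, kg

r = np.linspace(2.5e-10, 6.0e-10, 8)
print(feynman_hibbs(r, 30.0, eps, sigma, m) / KB)   # effective potential, K
```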
Smith, Kyle K.G.; Poulsen, Jens Aage; Nyman, Gunnar; ...
2015-06-30
Here, we apply the Feynman-Kleinert Quasi-Classical Wigner (FK-QCW) method developed in our previous work [Smith et al., J. Chem. Phys. 142, 244112 (2015)] for the determination of the dynamic structure factor of liquid para-hydrogen and ortho-deuterium at state points of (T = 20.0 K, n = 21.24 nm⁻³) and (T = 23.0 K, n = 24.61 nm⁻³), respectively. When applied to this challenging system, it is shown that this new FK-QCW method consistently reproduces the experimental dynamic structure factor reported by Smith et al. [J. Chem. Phys. 140, 034501 (2014)] for all momentum transfers considered. Moreover, this shows that FK-QCW provides a substantial improvement over the Feynman-Kleinert linearized path-integral method, in which purely classical dynamics are used. Furthermore, for small momentum transfers, it is shown that FK-QCW provides nearly the same results as ring-polymer molecular dynamics (RPMD), thus suggesting that FK-QCW provides a potentially more appealing algorithm than RPMD since it is not formally limited to correlation functions involving linear operators.
Possible Quantum Absorber Effects in Cortical Synchronization
NASA Astrophysics Data System (ADS)
Kämpf, Uwe
The Wheeler-Feynman transactional "absorber" approach was proposed originally to account for anomalous resonance coupling between spatio-temporally distant measurement partners in entangled quantum states of so-called Einstein-Podolsky-Rosen paradoxes, e.g. of spatio-temporal non-locality, quantum teleportation, etc. Applied to quantum brain dynamics, however, this view provides an anticipative resonance coupling model for aspects of cortical synchronization and recurrent visual action control. It is proposed to consider the registered activation patterns of neuronal loops in so-called synfire chains not as a result of retarded brain communication processes, but rather as surface effects of a system of standing waves generated in the depth of visual processing. According to this view, they arise from a counterbalance between the actual input's delayed bottom-up data streams and top-down recurrent information-processing of advanced anticipative signals in a Wheeler-Feynman-type absorber mode. In the framework of a "time-loop" model, findings about mirror neurons in the brain cortex are suggested to be at least partially associated with temporal rather than spatial mirror functions of visual processing, similar to phase conjugate adaptive resonance-coupling in nonlinear optics.
Evans, Margaret E K; Ferrière, Régis; Kane, Michael J; Venable, D Lawrence
2007-02-01
Bet hedging is one solution to the problem of an unpredictably variable environment: fitness in the average environment is sacrificed in favor of lower variation in fitness if this leads to higher long-run stochastic mean fitness. While bet hedging is an important concept in evolutionary ecology, empirical evidence that it occurs is scant. Here we evaluate whether bet hedging occurs via seed banking in natural populations of two species of desert evening primroses (Oenothera, Onagraceae), one annual and one perennial. Four years of data on plants and 3 years of data on seeds yielded two transitions for the entire life cycle. One year was exceptionally dry, leading to reproductive failure in the sample areas, and the other was above average in precipitation, leading to reproductive success in four of five populations. Stochastic simulations of population growth revealed patterns indicative of bet hedging via seed banking, particularly in the annual populations: variance in fitness and fitness in the average environment were lower with seed banking than without, whereas long-run stochastic mean fitness was higher with seed banking than without across a wide range of probabilities of the wet year. This represents a novel, unusually rigorous demonstration of bet hedging from field data.
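A toy simulation of the trade-off described above, assuming invented fecundities, seed survival and germination fractions (not the Oenothera field data): seed banking lowers fitness in the average environment but can raise the long-run stochastic (log) growth rate.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_growth(germ_frac, p_wet, years=20000,
               f_wet=10.0, f_dry=0.05, seed_survival=0.9):
    """Long-run stochastic growth rate (mean log annual growth) for an
    annual plant with a seed bank: a fraction germ_frac of seeds
    germinates each year and yields f_wet or f_dry seeds per seed sown;
    dormant seeds survive to the next year with probability seed_survival."""
    wet = rng.random(years) < p_wet
    fecundity = np.where(wet, f_wet, f_dry)
    annual_growth = germ_frac * fecundity + (1.0 - germ_frac) * seed_survival
    return np.log(annual_growth).mean()

for p in (0.3, 0.5, 0.7):
    print(f"p_wet={p}: no bank {log_growth(1.0, p):+.3f}, "
          f"with bank {log_growth(0.5, p):+.3f}")
```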
Cardiovascular fitness in obese versus nonobese 8-11-year-old boys and girls.
Mastrangelo, M Alysia; Chaloupka, Edward C; Rattigan, Peter
2008-09-01
The purpose of this study was to compare cardiovascular fitness between obese and nonobese children. Based on body mass index, 118 children were classified as obese (boys [OB] = 62, girls [OG] = 56), while 421 were nonobese (boys [NOB] = 196, girls [NOG] = 225). Cardiovascular fitness was determined by a 1-mile [1.6 km] run/walk (MRW) and estimated peak oxygen uptake (VO2peak) and analyzed using two-way analyses of variance (Gender x Obese/Nonobese). MRW times were significantly faster (p < .05) for the NOB (10 min 34 s) compared to the OB (13 min 8 s) and the NOG (13 min 15 s) compared to the OG (14 min 44 s). Predicted VO2peak values (mL x kg(-1) x min(-1)) were significantly higher (p < .05) for the NOB (48.29) compared to the OB (41.56) and the NOG (45.99) compared to the OG (42.13). MRW performance was also compared between obese and nonobese participants against the President's Challenge (2005), the National Children and Youth Fitness Study, and FITNESSGRAM HFZ standards. The nonobese boys and girls scored higher on all three, exhibiting better cardiovascular fitness than their obese counterparts.
Modality-Driven Classification and Visualization of Ensemble Variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bensema, Kevin; Gosink, Luke; Obermaier, Harald
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
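The paper's own classifier is not reproduced here, but the idea of classifying per-location ensemble distributions by modality can be sketched generically, e.g. by counting peaks of a kernel density estimate; the grid size and noise threshold below are arbitrary choices.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.stats import gaussian_kde

def count_modes(samples, grid_size=256, rel_height=0.05):
    """Count modes of the ensemble distribution at one grid location by
    locating peaks of a kernel density estimate; peaks shorter than
    rel_height of the tallest one are treated as noise."""
    kde = gaussian_kde(samples)
    grid = np.linspace(samples.min(), samples.max(), grid_size)
    density = kde(grid)
    peaks, _ = find_peaks(density, height=rel_height * density.max())
    return max(len(peaks), 1)   # a boundary-peaked KDE still counts as unimodal

rng = np.random.default_rng(3)
unimodal = rng.normal(0.0, 1.0, 500)
bimodal = np.concatenate([rng.normal(-2, 0.5, 250), rng.normal(2, 0.5, 250)])
print(count_modes(unimodal), count_modes(bimodal))   # typically: 1 2
```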
Bates, S; Jonaitis, D; Nail, S
2013-10-01
Total X-ray Powder Diffraction Analysis (TXRPD) using transmission geometry was able to observe significant variance in measured powder patterns for sucrose lyophilizates with differing residual water contents. Integrated diffraction intensity corresponding to the observed variances was found to be linearly correlated with residual water content as measured by an independent technique. The observed variance was concentrated in two distinct regions of the lyophilizate powder pattern, corresponding to the characteristic sucrose matrix double halo and the high-angle diffuse region normally associated with free water. Full pattern fitting of the lyophilizate powder patterns suggested that the high-angle variance was better described by the characteristic diffraction profile of a concentrated sucrose/water system than by the free-water diffraction profile. This suggests that the residual water in the sucrose lyophilizates is intimately mixed at the molecular level with sucrose molecules, forming a liquid/solid solution. The bound nature of the residual water and its impact on the sucrose matrix give an enhanced diffraction response between 3.0 and 3.5 beyond that expected for free water. The enhanced diffraction response allows semi-quantitative analysis of residual water contents within the studied sucrose lyophilizates to levels below 1% by weight. Copyright © 2013 Elsevier B.V. All rights reserved.
Psychopathic personality development from ages 9 to 18: Genes and environment.
Tuvblad, Catherine; Wang, Pan; Bezdjian, Serena; Raine, Adrian; Baker, Laura A
2016-02-01
The genetic and environmental etiology of individual differences was examined in initial level and change in psychopathic personality from ages 9 to 18 years. A piecewise growth curve model, in which the first change score (G1) influenced all ages (9-10, 11-13, 14-15, and 16-18 years) and the second change score (G2) only influenced ages 14-15 and 16-18 years, fit the data better than the standard single-slope model, suggesting a turning point from childhood to adolescence. The results indicated that variations in the level and both change scores were mainly due to genetic (A) and nonshared environmental (E) influences (i.e., an AE structure for G0, G1, and G2). No sex differences were found except on the mean values of the level and change scores. Based on caregiver ratings, about 81% of variance in G0, 89% of variance in G1, and 94% of variance in G2 were explained by genetic factors, whereas for youth self-reports, these three proportions were 94%, 71%, and 66%, respectively. The larger contribution of genetic variance and covariance in caregiver ratings than in youth self-reports may suggest that caregivers considered the changes in their children to be more similar as compared to how the children viewed themselves.
Young, Andrew J; Bennett, Nigel C
2013-01-01
In cooperatively breeding mammals and birds, intra-sexual reproductive competition among females may often render variance in reproductive success higher among females than males, leading to the prediction that intra-sexual selection in such species may have yielded the differential exaggeration of competitive traits among females. However, evidence to date suggests that female-biased reproductive variance in such species is rarely accompanied by female-biased sexual dimorphisms. We illustrate the problem with data from wild Damaraland mole-rat, Fukomys damarensis, societies: the variance in lifetime reproductive success among females appears to be higher than that among males, yet males grow faster, are much heavier as adults and sport larger skulls and incisors (the weapons used for fighting) for their body lengths than females, suggesting that intra-sexual selection has nevertheless acted more strongly on the competitive traits of males. We then consider potentially general mechanisms that could explain these disparities by tempering the relative intensity of selection for competitive trait exaggeration among females in cooperative breeders. Key among these may be interactions with kin selection that could nevertheless render the variance in inclusive fitness lower among females than males, and fundamental aspects of the reproductive biology of females that may leave reproductive conflict among females more readily resolved without overt physical contests.
Harries, Priscilla; Davies, Miranda
2015-01-01
Introduction As people with a range of disabilities strive to increase their community mobility, occupational therapy driver assessors are increasingly required to make complex recommendations regarding fitness-to-drive. However, very little is known about how therapists use information to make decisions. The aim of this study was to model how experienced occupational therapy driver assessors weight and combine information when making fitness-to-drive recommendations, and to establish their level of decision agreement. Method Using the Social Judgment Theory method, this study examined how 45 experienced occupational therapy driver assessors from the UK, Australia and New Zealand made fitness-to-drive recommendations for a series of 64 case scenarios. Participants completed the task on a dedicated website, and data were analysed using discriminant function analysis and an intraclass correlation coefficient. Results Accounting for 87% of the variance, the cues central to the fitness-to-drive recommendations made by assessors are the client’s physical skills, cognitive and perceptual skills, road law craft skills, vehicle handling skills and the number of driving instructor interventions. Agreement (consensus) between fitness-to-drive recommendations was very high (intraclass correlation coefficient = .97; 95% confidence interval .96–.98). Conclusion Findings can be used by both experienced and novice driver assessors to reflect on and strengthen the fitness-to-drive recommendations made to clients. PMID:26435572
2013-01-01
Background Patients with schizophrenia report muscle weakness. The relation of this muscle weakness to performing daily life activities such as walking has, however, not yet been studied. The aim of this study was to quantify walking capacity and health-related muscular fitness in patients with schizophrenia compared with age-, gender- and body mass index (BMI)-matched healthy controls. Secondly, we identified variables that could explain the variability in walking capacity and in health-related muscular fitness in patients with schizophrenia. Methods A total of 100 patients with schizophrenia and 40 healthy volunteers were initially screened. Eighty patients with schizophrenia (36.8±10.0 years) and the 40 age-, gender- and body mass index (BMI)-matched healthy volunteers (37.1±10.3 years) were finally included. All participants performed a standing broad jump test (SBJ) and a six-minute walk test (6MWT) and filled out the International Physical Activity Questionnaire. Patients additionally had a fasting metabolic laboratory screening and were assessed for psychiatric symptoms. Results Patients with schizophrenia had lower 6MWT (17.9%, p<0.001) [effect size (ES)=−1.01] and SBJ (14.1%, p<0.001) (ES=−0.57) scores. Patients were also less physically active (1291.0±1201.8 metabolic equivalent-minutes/week versus 2463.1±1365.3, p<0.001) (ES=−0.91) than controls. Schizophrenia patients with metabolic syndrome (MetS) (35%) had a 23.9% lower (p<0.001) SBJ-score and a 22.4% (p<0.001) lower 6MWT-score than those without MetS. In multiple regression analysis, 71.8% of the variance in 6MWT was explained by muscular fitness, BMI, presence of MetS and physical activity participation, while 53.9% of the variance in SBJ-score was explained by age, illness duration, BMI and physical activity participation. Conclusions Walking capacity and health-related muscular fitness are impaired in patients with schizophrenia, and both should be a major focus in daily clinical practice and future research. PMID:23286356
Pereira, Sara; Katzmarzyk, Peter T; Gomes, Thayse Natacha; Souza, Michele; Chaves, Raquel N; Santos, Fernanda K Dos; Santos, Daniel; Hedeker, Donald; Maia, José A R
2017-06-01
Somatotype is a complex trait influenced by different genetic and environmental factors as well as by other covariates whose effects are still unclear. The objectives were to (1) estimate siblings' resemblance in their general somatotype; (2) identify sib-pair (brother-brother (BB), sister-sister (SS), brother-sister (BS)) similarities in individual somatotype components; (3) examine the degree to which between and within variances differ among sib-ships; and (4) investigate the effects of physical activity (PA) and family socioeconomic status (SES) on these relationships. The sample comprises 1058 Portuguese siblings (538 females) aged 9-20 years. Somatotype was calculated using the Heath-Carter method, while PA and SES information was obtained by questionnaire. Multi-level modelling was done in SuperMix software. Older subjects showed the lowest values for endomorphy and mesomorphy, but the highest values for ectomorphy; and more physically active subjects showed the highest values for mesomorphy. In general, the familiality of somatotype was moderate (ρ = 0.35). Same-sex siblings had the strongest resemblance (endomorphy: ρ_SS > ρ_BB > ρ_BS; mesomorphy: ρ_BB = ρ_SS > ρ_BS; ectomorphy: ρ_BB > ρ_SS > ρ_BS). For the ectomorphy and mesomorphy components, BS pairs showed the highest between sib-ship variance, but the lowest within sib-ship variance; while for endomorphy, BS pairs showed the lowest between and within sib-ship variances. These results highlight the significant familial effects on somatotype and the complexity of the role of familial resemblance in explaining variance in somatotypes.
Vista, Alvin; Care, Esther
2011-06-01
Research on gender differences in intelligence has focused mostly on samples from Western countries, and empirical evidence on gender differences from Southeast Asia is relatively sparse. This article presents results on gender differences in variance and means on a non-verbal intelligence test using a national sample of public school students from the Philippines. More than 2,700 sixth graders from public schools across the country were tested with the Naglieri Non-verbal Ability Test (NNAT). Variance ratios (VRs) and log-transformed VRs were computed. Proportion ratios for each of the ability levels were also calculated and a chi-square goodness-of-fit test was performed. An analysis of variance was performed to determine the overall gender difference in mean scores as well as within each of three age subgroups. Our data show non-existent or trivial gender differences in mean scores. However, the tails of the distributions show differences between the males and females, with greater variability among males in the upper half of the distribution and greater variability among females in the lower half of the distribution. Descriptions of the results and their implications are discussed. Results on mean score differences support the hypothesis that there are no significant gender differences in cognitive ability. The unusual results regarding differences in variance and the male-female proportion in the tails require more complex investigations. ©2010 The British Psychological Society.
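The variance-ratio and tail-proportion machinery mentioned here is straightforward to compute; below is a sketch on synthetic scores (the means, SDs and cut-off are invented, and variance_ratio is a hypothetical helper, not part of any cited toolkit):

```python
import numpy as np
from scipy.stats import chisquare

def variance_ratio(male_scores, female_scores):
    """Male/female variance ratio and its natural log
    (log-VR > 0 indicates greater male variability)."""
    vr = np.var(male_scores, ddof=1) / np.var(female_scores, ddof=1)
    return vr, np.log(vr)

rng = np.random.default_rng(7)
males = rng.normal(50, 11, 1400)
females = rng.normal(50, 10, 1300)
vr, log_vr = variance_ratio(males, females)
print(f"VR = {vr:.2f}, log-VR = {log_vr:.2f}")

# Proportion in the upper tail (top ~5% of the pooled distribution),
# compared against group-size parity with a chi-square goodness-of-fit test.
cut = np.percentile(np.concatenate([males, females]), 95)
counts = np.array([np.sum(males > cut), np.sum(females > cut)])
expected = counts.sum() * np.array([males.size, females.size]) / (males.size + females.size)
print(chisquare(counts, expected))
```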
Berger, David; You, Tao; Minano, Maravillas R; Grieshop, Karl; Lind, Martin I; Arnqvist, Göran; Maklakov, Alexei A
2016-05-13
Intralocus sexual conflict, arising from selection for different alleles at the same locus in males and females, imposes a constraint on sex-specific adaptation. Intralocus sexual conflict can be alleviated by the evolution of sex-limited genetic architectures and phenotypic expression, but pleiotropic constraints may hinder this process. Here, we explored putative intralocus sexual conflict and genetic (co)variance in a poorly understood behavior with near male-limited expression. Same-sex sexual behaviors (SSBs) generally do not conform to classic evolutionary models of adaptation but are common in male animals and have been hypothesized to result from perception errors and selection for high male mating rates. However, perspectives incorporating sex-specific selection on genes shared by males and females to explain the expression and evolution of SSBs have largely been neglected. We performed two parallel sex-limited artificial selection experiments on SSB in male and female seed beetles, followed by sex-specific assays of locomotor activity and male sex recognition (two traits hypothesized to be functionally related to SSB) and adult reproductive success (allowing us to assess fitness consequences of genetic variance in SSB and its correlated components). Our experiments reveal both shared and sex-limited genetic variance for SSB. Strikingly, genetically correlated responses in locomotor activity and male sex-recognition were associated with sexually antagonistic fitness effects, but these effects differed qualitatively between male and female selection lines, implicating intralocus sexual conflict at both male- and female-specific genetic components underlying SSB. Our study provides experimental support for the hypothesis that widespread pleiotropy generates pervasive intralocus sexual conflict governing the expression of SSBs, suggesting that SSB in one sex can occur due to the expression of genes that carry benefits in the other sex.
Noble, Luke M; Chelo, Ivo; Guzella, Thiago; Afonso, Bruno; Riccardi, David D; Ammerman, Patrick; Dayarian, Adel; Carvalho, Sara; Crist, Anna; Pino-Querido, Ania; Shraiman, Boris; Rockman, Matthew V; Teotónio, Henrique
2017-12-01
Understanding the genetic basis of complex traits remains a major challenge in biology. Polygenicity, phenotypic plasticity, and epistasis contribute to phenotypic variance in ways that are rarely clear. This uncertainty can be problematic for estimating heritability, for predicting individual phenotypes from genomic data, and for parameterizing models of phenotypic evolution. Here, we report an advanced recombinant inbred line (RIL) quantitative trait locus mapping panel for the hermaphroditic nematode Caenorhabditis elegans, the C. elegans multiparental experimental evolution (CeMEE) panel. The CeMEE panel, comprising 507 RILs at present, was created by hybridization of 16 wild isolates, experimental evolution for 140-190 generations, and inbreeding by selfing for 13-16 generations. The panel contains 22% of single-nucleotide polymorphisms known to segregate in natural populations, and complements existing C. elegans mapping resources by providing fine resolution and high nucleotide diversity across > 95% of the genome. We apply it to study the genetic basis of two fitness components, fertility and hermaphrodite body size at time of reproduction, with high broad-sense heritability in the CeMEE. While simulations show that we should detect common alleles with additive effects as small as 5%, at gene-level resolution, the genetic architectures of these traits do not feature such alleles. We instead find that a significant fraction of trait variance, approaching 40% for fertility, can be explained by sign epistasis with main effects below the detection limit. In congruence, phenotype prediction from genomic similarity, while generally poor (r² < 10%), requires modeling epistasis for optimal accuracy, with most variance attributed to the rapidly evolving chromosome arms. Copyright © 2017 by the Genetics Society of America.
Stevens, Mark I; Hogendoorn, Katja; Schwarz, Michael P
2007-08-29
The Central Limit Theorem (CLT) is a statistical principle that states that as the number of repeated samples from any population increase, the variance among sample means will decrease and means will become more normally distributed. It has been conjectured that the CLT has the potential to provide benefits for group living in some animals via greater predictability in food acquisition, if the number of foraging bouts increases with group size. The potential existence of benefits for group living derived from a purely statistical principle is highly intriguing and it has implications for the origins of sociality. Here we show that in a social allodapine bee the relationship between cumulative food acquisition (measured as total brood weight) and colony size accords with the CLT. We show that deviations from expected food income decrease with group size, and that brood weights become more normally distributed both over time and with increasing colony size, as predicted by the CLT. Larger colonies are better able to match egg production to expected food intake, and better able to avoid costs associated with producing more brood than can be reared while reducing the risk of under-exploiting the food resources that may be available. These benefits to group living derive from a purely statistical principle, rather than from ecological, ergonomic or genetic factors, and could apply to a wide variety of species. This in turn suggests that the CLT may provide benefits at the early evolutionary stages of sociality and that evolution of group size could result from selection on variances in reproductive fitness. In addition, they may help explain why sociality has evolved in some groups and not others.
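The statistical principle itself is easy to demonstrate. The sketch below uses an invented skewed per-bout income distribution (lognormal), not the allodapine brood-weight data, and shows the two CLT signatures the authors exploit: the spread of the mean income shrinks and its distribution becomes less skewed as the number of bouts grows.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(11)

# Per-bout food income is skewed (lognormal). As the number of foraging
# bouts grows (a proxy for colony size), the mean income per bout becomes
# less variable and closer to normal, as the CLT predicts.
for n_bouts in (1, 4, 16, 64):
    means = rng.lognormal(0.0, 1.0, size=(20000, n_bouts)).mean(axis=1)
    print(f"bouts={n_bouts:3d}  sd of mean={means.std():.3f}  skew={skew(means):.2f}")
```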
General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models.
de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael
2016-11-01
Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each of such quantities, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function, over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMM can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population. Copyright © 2016 de Villemereuil et al.
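For the Poisson/log-link special case mentioned at the end of the abstract, the latent-to-observed conversions have closed forms; the sketch below implements those textbook expressions directly (QGglmm itself handles general cases by numerical integration, and the parameter values here are arbitrary).

```python
import numpy as np

def poisson_log_observed_scale(mu, var_a, var_rest):
    """Observed-scale quantities for a Poisson GLMM with log link.
    Latent scale: mean mu, additive genetic variance var_a, remaining
    random-effect variance var_rest (lognormal moment formulas)."""
    var_tot = var_a + var_rest
    mean_obs = np.exp(mu + var_tot / 2.0)                     # data-scale mean
    var_p = mean_obs + mean_obs**2 * (np.exp(var_tot) - 1.0)  # phenotypic var
    var_a_obs = mean_obs**2 * var_a                           # additive var
    return mean_obs, var_a_obs, var_a_obs / var_p             # last term: h^2

print(poisson_log_observed_scale(mu=1.0, var_a=0.3, var_rest=0.2))
```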
Seasonally adjusted birth frequencies follow the Poisson distribution.
Barra, Mathias; Lindstrøm, Jonas C; Adams, Samantha S; Augestad, Liv A
2015-12-15
Variations in birth frequencies have an impact on activity planning in maternity wards. Previous studies of this phenomenon have commonly included elective births. A Danish study of spontaneous births found that birth frequencies were well modelled by a Poisson process. Somewhat unexpectedly, there were also weekly variations in the frequency of spontaneous births. Another study claimed that birth frequencies follow the Benford distribution. Our objective was to test these results. We analysed 50,017 spontaneous births at Akershus University Hospital in the period 1999-2014. To investigate the Poisson distribution of these births, we plotted their variance over a sliding average. We specified various Poisson regression models, with the number of births on a given day as the outcome variable. The explanatory variables included various combinations of years, months, days of the week and the digit sum of the date. The relationship between the variance and the average fits well with an underlying Poisson process. A Benford distribution was disproved by a goodness-of-fit test (p < 0.01). The fundamental model with year and month as explanatory variables is significantly improved (p < 0.001) by adding day of the week as an explanatory variable. Altogether 7.5% more children are born on Tuesdays than on Sundays. The digit sum of the date is non-significant as an explanatory variable (p = 0.23), nor does it increase the explained variance. INTERPRETATION: Spontaneous births are well modelled by a time-dependent Poisson process when monthly and day-of-the-week variation is included. The frequency is highest in summer towards June and July, Friday and Tuesday stand out as particularly busy days, and the activity level is at its lowest during weekends.
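A sketch of the kind of Poisson regression described, using statsmodels on synthetic daily counts (the baseline rate and weekday effect are invented; real input would be the hospital's daily delivery counts):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic daily birth counts with a mild weekday effect.
rng = np.random.default_rng(5)
days = pd.date_range("1999-01-01", "2014-12-31", freq="D")
rate = 8.5 * np.where(days.dayofweek.isin([5, 6]), 0.93, 1.02)
df = pd.DataFrame({
    "births": rng.poisson(rate),
    "year": days.year, "month": days.month, "weekday": days.dayofweek,
})

# Fundamental model (year + month), then add day of week and compare fits.
base = smf.glm("births ~ C(year) + C(month)", df,
               family=sm.families.Poisson()).fit()
full = smf.glm("births ~ C(year) + C(month) + C(weekday)", df,
               family=sm.families.Poisson()).fit()
lr = 2 * (full.llf - base.llf)   # likelihood-ratio statistic, 6 df
print(f"LR = {lr:.1f} on 6 df")
```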
Predictors of 2,4-dichlorophenoxyacetic acid exposure among herbicide applicators
BHATTI, PARVEEN; BLAIR, AARON; BELL, ERIN M.; ROTHMAN, NATHANIEL; LAN, QING; BARR, DANA B.; NEEDHAM, LARRY L.; PORTENGEN, LUTZEN; FIGGS, LARRY W.; VERMEULEN, ROEL
2009-01-01
To determine the major factors affecting the urinary levels of 2,4-dichlorophenoxyacetic acid (2,4-D) among county noxious weed applicators in Kansas, we used a regression technique that accounted for multiple days of exposure. We collected 136 12-h urine samples from 31 applicators during the course of two spraying seasons (April to August of 1994 and 1995). Using mixed-effects models, we constructed exposure models that related urinary 2,4-D measurements to weighted self-reported work activities from daily diaries collected over 5 to 7 days before the collection of the urine sample. Our primary weights were based on an earlier pharmacokinetic analysis of turf applicators; however, we examined a series of alternative weighting schemes to assess the impact of the specific weights and the number of days before urine sample collection that were considered. The derived models accounting for multiple days of exposure related to a single urine measurement seemed robust with regard to the exact weights, but less so to the number of days considered, albeit the determinants from the primary model could be fitted, with marginal losses of fit, to the data from the other weighting schemes that considered a different number of days. In the primary model, the total time of all activities (spraying, mixing, other activities), spraying method, month of observation, application concentration, and wet gloves were significant determinants of urinary 2,4-D concentration and explained 16% of the between-worker variance and 23% of the within-worker variance of urinary 2,4-D levels. As a large proportion of the variance remained unexplained, further studies should be conducted to try to systematically assess other exposure determinants. PMID:19319162
An Alternative to the Breeder’s and Lande’s Equations
Houchmandzadeh, Bahram
2013-01-01
The breeder’s equation is a cornerstone of quantitative genetics, widely used in evolutionary modeling. Denoting the mean phenotype in the parental, selected-parent, and progeny populations by E(Z_0), E(Z_W), and E(Z_1), this equation relates the response to selection R = E(Z_1) − E(Z_0) to the selection differential S = E(Z_W) − E(Z_0) through a simple proportionality relation R = h²S, where the heritability coefficient h² is a simple function of the genotype and environment variances. The validity of this relation relies strongly on the normal (Gaussian) distribution of the parent genotype, which is an unobservable quantity and cannot be ascertained. In contrast, we show here that if the fitness (or selection) function is Gaussian with mean μ, an alternative, exact linear equation of the form R′ = j²S′ can be derived, regardless of the parental genotype distribution. Here R′ = E(Z_1) − μ and S′ = E(Z_W) − μ stand for the mean phenotypic lag with respect to the mean of the fitness function in the offspring and selected populations. The proportionality coefficient j² is a simple function of the selection-function and environment variances, but does not contain the genotype variance. To demonstrate this, we derive the exact functional relation between the mean phenotype in the selected and the offspring population and deduce all cases that lead to a linear relation between them. These results generalize naturally to the concept of the G matrix and the multivariate Lande’s equation Δz̄ = GP⁻¹S. The linearity coefficient of the alternative equation is not changed by Gaussian selection. PMID:24212080
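For reference, the two proportionality relations can be written side by side; this is a plain restatement of the abstract's formulas, not an addition to them:

```latex
% Classical breeder's equation (requires Gaussian parental genotypes):
\[
  R = h^{2} S, \qquad R = E(Z_1) - E(Z_0), \quad S = E(Z_W) - E(Z_0).
\]
% Alternative exact relation under a Gaussian fitness function with mean \mu:
\[
  R' = j^{2} S', \qquad R' = E(Z_1) - \mu, \quad S' = E(Z_W) - \mu.
\]
% Multivariate (Lande) form: \Delta\bar{z} = \mathbf{G}\,\mathbf{P}^{-1}\mathbf{S}.
```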
Wade, Tracey D; Hansell, Narelle K; Crosby, Ross D; Bryant-Waugh, Rachel; Treasure, Janet; Nixon, Reginald; Byrne, Susan; Martin, Nicholas G
2013-02-01
The goal of the current study was to examine whether genetic and environmental influences on an important risk factor for disordered eating, weight and shape concern, remained stable over adolescence. This stability was assessed in 2 ways: whether new sources of latent variance were introduced over development and whether the magnitude of variance contributing to the risk factor changed. We examined an 8-item WSC subscale derived from the Eating Disorder Examination (EDE) using telephone interviews with female adolescents. From 3 waves of data collected from female-female same-sex twin pairs from the Australian Twin Registry, a subset of the data (which included 351 pairs at Wave 1) was used to examine 3 age cohorts: 12 to 13, 13 to 15, and 14 to 16 years. The best-fitting model contained genetic and environmental influences, both shared and nonshared. Biometric model fitting indicated that nonshared environmental influences were largely specific to each age cohort, and results suggested that latent shared environmental and genetic influences that were influential at 12 to 13 years continued to contribute to subsequent age cohorts, with independent sources of both emerging at ages 13 to 15. The magnitude of all 3 latent influences could be constrained to be the same across adolescence. Ages 13 to 15 were indicated as a time of risk for the development of high levels of WSC, given that most specific environmental risk factors were significant at this time (e.g., peer teasing about weight, adverse life events), and indications of the emergence of new sources of latent genetic and environmental variance over this period. 2013 APA, all rights reserved
Mulder, Han A; Rönnegård, Lars; Fikse, W Freddy; Veerkamp, Roel F; Strandberg, Erling
2013-07-04
Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike's information criterion using h-likelihood to select the best fitting model. We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike's information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. Using Akaike's information criterion the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. The algorithm and model selection criterion presented here can contribute to better understand genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires each with 100 offspring.
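The data-generating model behind this design can be sketched compactly. The sketch below simulates sire-specific reaction-norm slopes (macro sensitivity) and sire-specific log residual variances (micro sensitivity) with invented variance components; it is not the ASReml/DHGLM estimation procedure itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each sire carries a level effect, a reaction-norm slope on an observed
# covariate x (macro-environmental sensitivity) and an effect on the log
# residual variance (micro-environmental sensitivity). Variances invented.
n_sires, n_off = 100, 100
level = rng.normal(0.0, 1.0, n_sires)
slope = rng.normal(0.0, 0.3, n_sires)        # macro sensitivity
log_var = rng.normal(0.0, 0.2, n_sires)      # micro sensitivity

x = rng.normal(0.0, 1.0, (n_sires, n_off))   # macro-environmental covariate
sd = np.exp(0.5 * log_var)[:, None]          # sire-specific residual SD
y = level[:, None] + slope[:, None] * x + sd * rng.normal(0.0, 1.0, (n_sires, n_off))

# Crude check: within-sire variances spread more than sampling noise alone
# would allow, the signature a double hierarchical GLM is built to detect.
print(np.var(np.log(y.var(axis=1, ddof=1))))
```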
Measurement of academic entitlement.
Miller, Brian K
2013-10-01
Members of Generation Y, or Millennials, have been accused of being lazy, whiny, pampered, and entitled, particularly in the college classroom. Using an equity theory framework, eight items from a measure of work entitlement were adapted to measure academic entitlement in a university setting in three independent samples. In Study 1 (n = 229), confirmatory factor analyses indicated good model fit to a unidimensional structure for the data. In Study 2 (n = 200), the questionnaire predicted unique variance in university satisfaction beyond two more general measures of dispositional entitlement. In Study 3 (n = 161), the measure predicted unique variance in perceptions of grade fairness beyond that which was predicted by another measure of academic entitlement. This analysis provides evidence of discriminant, convergent, incremental, concurrent criterion-related, and construct validity for the Academic Equity Preference Questionnaire.
Donti, Olyvia; Bogdanis, Gregory C; Kritikou, Maria; Donti, Anastasia; Theodorakou, Kalliopi
2016-06-01
This study examined the association between physical fitness and a technical execution score in rhythmic gymnasts of varying performance levels. Forty-six young rhythmic gymnasts (age: 9.9 ± 1.3 years) were divided into two groups (qualifiers, n=24 and non-qualifiers, n=22) based on the results of the National Championships. Gymnasts underwent a series of physical fitness tests and technical execution was evaluated in a routine without apparatus. There were significant differences between qualifiers and non-qualifiers in the technical execution score (p=0.01, d=1.0), shoulder flexion (p=0.01, d=0.8), straight leg raise (p=0.004, d=0.9), sideways leg extension (p=0.002, d=0.9) and body fat (p=0.021, d=0.7), but no differences were found in muscular endurance and jumping performance. The technical execution score for the non-qualifiers was significantly correlated with shoulder extension (r=0.423, p<0.05), sideways leg extension (r=0.687, p<0.01), push ups (r=0.437, p<0.05) and body fat (r=0.642, p<0.01), while there was only one significant correlation, with sideways leg extension (r=0.467, p<0.05), for the qualifiers. Multiple regression analysis revealed that sideways leg extension, body fat, and push ups accounted for a large part (62.9%) of the variance in the technical execution score for the non-qualifiers, while for the qualifiers, only 37.3% of the variance in the technical execution score was accounted for by sideways leg extension and spine flexibility. In conclusion, flexibility and body composition can effectively discriminate between qualifiers and non-qualifiers in youth rhythmic gymnastics. At the lower level of performance (non-qualifiers), physical fitness seems to have a greater effect on the technical execution score.
Chao, Lin; Rang, Camilla Ulla; Proenca, Audrey Menegaz; Chao, Jasper Ubirajara
2016-01-01
Non-genetic phenotypic variation is common in biological organisms. The variation is potentially beneficial if the environment is changing. If the benefit is large, selection can favor the evolution of genetic assimilation, the process by which the expression of a trait is transferred from environmental to genetic control. Genetic assimilation is an important evolutionary transition, but it is poorly understood because the fitness costs and benefits of variation are often unknown. Here we show that the partitioning of damage by a mother bacterium to its two daughters can evolve through genetic assimilation. Bacterial phenotypes are also highly variable. Because gene-regulating elements can have low copy numbers, the variation is attributed to stochastic sampling. Extant Escherichia coli partition asymmetrically and deterministically more damage to the old daughter, the one receiving the mother’s old pole. By modeling in silico damage partitioning in a population, we show that deterministic asymmetry is advantageous because it increases fitness variance and hence the efficiency of natural selection. However, we find that symmetrical but stochastic partitioning can be similarly beneficial. To examine why bacteria evolved deterministic asymmetry, we modeled the effect of damage anchored to the mother’s old pole. While anchored damage strengthens selection for asymmetry by creating additional fitness variance, it has the opposite effect on symmetry. The difference results because anchored damage reinforces the polarization of partitioning in asymmetric bacteria. In symmetric bacteria, it dilutes the polarization. Thus, stochasticity alone may have protected early bacteria from damage, but deterministic asymmetry has evolved to be equally important in extant bacteria. We estimate that 47% of damage partitioning is deterministic in E. coli. We suggest that the evolution of deterministic asymmetry from stochasticity offers an example of Waddington’s genetic assimilation. Our model is able to quantify the evolution of the assimilation because it characterizes the fitness consequences of variation. PMID:26761487
NASA Astrophysics Data System (ADS)
Plaza Garcia, Maria Asuncion
The rampant success of quantum theory is the result of applications of the 'new' quantum mechanics of Schrödinger and Heisenberg (1926-7), the Feynman-Schwinger-Tomonaga Quantum Electrodynamics (1946-51), the electro-weak theory of Salam, Weinberg, and Glashow (1967-9), and Quantum Chromodynamics (1973-); in fact, this success of 'the' quantum theory has depended on a continuous stream of brilliant and quite disparate mathematical formulations. In this carefully concealed ferment there lie plenty of unresolved difficulties, simply because in churning out fabulously accurate calculational tools there has been no sensible explanation of all that is going on. It is even argued that such an understanding is nothing to do with physics. A long-standing and famous illustration of this is the paradoxical thought-experiment of Einstein, Podolsky and Rosen (1935). Fundamental to all quantum theories, and also their paradoxes, is the location of sub-microscopic objects; or, rather, that the specification of such a location is fraught with mathematical inconsistency. This project encompasses a detailed, critical survey of the tangled history of Position within quantum theories. The first step is to show that, contrary to appearances, canonical quantum mechanics has only a vague notion of locality. After analysing a number of previous attempts at a 'relativistic quantum mechanics', two lines of thought are considered in detail. The first is the work of Wan and students, which is shown to be no real improvement on the usual 'nonrelativistic' theory. The second is based on an idea of Dirac's - using backwards-in-time light-cones as the hypersurface in space-time. There remain considerable difficulties in the way of producing a consistent scheme here. To keep things nicely stirred up, the author then proposes his own approach - an adaptation of Feynman's QED propagators. This new approach is distinguished from Feynman's since the propagator or Green's function is not obtained by Feynman's rule. The type of equation solved is also different: instead of an initial-value problem, a solution that obeys a time-symmetric causality criterion is found for an inhomogeneous partial differential equation with homogeneous boundary conditions. To make the consideration of locality more precise, some results of Fourier transform theory are presented in a form that is directly applicable. Somewhat away from the main thrust of the thesis, there is also an attempt to explain the manner in which quantum effects disappear as the number of particles increases in such things as experimental realisations of the EPR and de Broglie thought experiments.
BOOK REVIEW: Path Integrals in Field Theory: An Introduction
NASA Astrophysics Data System (ADS)
Ryder, Lewis
2004-06-01
In the 1960s Feynman was known to particle physicists as one of the people who solved the major problems of quantum electrodynamics, his contribution famously introducing what are now called Feynman diagrams. To other physicists he gained a reputation as the author of the Feynman Lectures on Physics; in addition some people were aware of his work on the path integral formulation of quantum theory, and a very few knew about his work on gravitation and Yang-Mills theories, which made use of path integral methods. Forty years later the scene is rather different. Many of the problems of high energy physics are solved; and the standard model incorporates Feynman's path integral method as a way of proving the renormalisability of the gauge (Yang-Mills) theories involved. Gravitation is proving a much harder nut to crack, but here also questions of renormalisability are couched in path-integral language. What is more, theoretical studies of condensed matter physics now also appeal to this technique for quantisation, so the path integral method is becoming part of the standard apparatus of theoretical physics. Chapters on it appear in a number of recent books, and a few books have appeared devoted to this topic alone; the book under review is a very recent one. Path integral techniques have the advantage of enormous conceptual appeal and the great disadvantage of mathematical complexity, this being partly the result of messy integrals but more fundamentally due to the notions of functional differentiation and integration which are involved in the method. All in all this subject is not such an easy ride. Mosel's book, described as an introduction, is aimed at graduate students and research workers in particle physics. It assumes a background knowledge of quantum mechanics, both non-relativistic and relativistic. After three chapters on the path integral formulation of non-relativistic quantum mechanics there are eight chapters on scalar and spinor field theory, followed by three on gauge field theories: quantum electrodynamics and Yang-Mills theories, Faddeev-Popov ghosts and so on. There is no treatment of the quantisation of gravity. Thus in about 200 pages the reader has the chance to learn in some detail about a most important area of modern physics. The subject is tough but the style is clear and pedagogic, results for the most part being derived explicitly. The choice of topics included is mainstream and sensible and one has a clear sense that the author knows where he is going and is a reliable guide. Path Integrals in Field Theory is clearly the work of a man with considerable teaching experience and is recommended as a readable and helpful account of a rather non-trivial subject.
Sexual dimorphism is associated with population fitness in the seed beetle Callosobruchus maculatus.
Rankin, Daniel J; Arnqvist, Göran
2008-03-01
The population consequences of sexual selection remain empirically unexplored. Comparative studies involving extinction risk have yielded different results as to the effect of sexual selection on population fitness, and theoretical models make contrasting predictions. Here, we investigate the relationship between sexual dimorphism (SD) and population productivity in the seed beetle Callosobruchus maculatus, using 13 populations that have evolved in isolation. Geometric morphometric methods and image analysis are employed to form integrative measures of sexual dimorphism, composed of variation in weight, size, body shape, and pigmentation. We found a positive relationship between SD and adult fitness (net adult offspring production) across our study populations, but failed to find any association between SD and juvenile fitness (egg-to-adult survival). Several mechanisms may have contributed to the pattern found, and variance in sexual selection regimes across populations, either in female choice for "good genes" or in the magnitude of direct benefits provided by their mates, would tend to produce the pattern seen. However, our results suggest that evolutionary constraints in the form of intralocus sexual conflict may have been the major generator of the relationship seen between SD and population fitness.
Fitting and Modeling in the ASC Data Analysis Environment
NASA Astrophysics Data System (ADS)
Doe, S.; Siemiginowska, A.; Joye, W.; McDowell, J.
As part of the AXAF Science Center (ASC) Data Analysis Environment, we will provide to the astronomical community a Fitting Application. We present a design of the application in this paper. Our design goal is to give the user the flexibility to use a variety of optimization techniques (Levenberg-Marquardt, maximum entropy, Monte Carlo, Powell, downhill simplex, CERN-Minuit, and simulated annealing) and fit statistics (χ², Cash, variance, and maximum likelihood); our modular design allows the user easily to add their own optimization techniques and/or fit statistics. We also present a comparison of the optimization techniques to be provided by the Application. The high spatial and spectral resolutions that will be obtained with AXAF instruments require a sophisticated data modeling capability. We will provide not only a suite of astronomical spatial and spectral source models, but also the capability of combining these models into source models of up to four data dimensions (i.e., into source functions f(E,x,y,t)). We will also provide tools to create instrument response models appropriate for each observation.
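The modular design described, interchangeable optimizers and fit statistics behind a single fitting call, can be sketched in a few lines. This is a schematic in Python, not the ASC application's actual interface; the names chi2, cash, and fit are illustrative.

```python
# Sketch of a pluggable fitting layer: any callable statistic can be paired
# with any scipy optimizer. Not the ASC Fitting Application's API.
import numpy as np
from scipy.optimize import minimize

def chi2(model, params, x, y, err):
    return np.sum(((y - model(x, params)) / err) ** 2)

def cash(model, params, x, y, err=None):
    m = model(x, params)
    return 2.0 * np.sum(m - y * np.log(m))  # Cash statistic for Poisson counts

def fit(model, p0, x, y, err=None, stat=chi2, method="Nelder-Mead"):
    res = minimize(lambda p: stat(model, p, x, y, err), p0, method=method)
    return res.x, res.fun

# Usage: fit a line, then swap the statistic or optimizer without other changes.
line = lambda x, p: p[0] + p[1] * x
x = np.linspace(1, 10, 50)
y = 2 + 0.5 * x + np.random.default_rng(0).normal(0, 0.3, x.size)
best, val = fit(line, [1.0, 1.0], x, y, err=np.full(x.size, 0.3))
print(best, val)
```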
Mixing in High Schmidt Number Turbulent Jets
1991-01-01
the higher Sc jet is less well mixed. The difference is less pronounced at higher Re. Flame length estimates imply either an increase in entrainment...
Brief Lags in Interrupted Sequential Performance: Evaluating a Model and Model Evaluation Method
2015-01-05
rehearsal mechanism in the model. To evaluate the model we developed a simple new goodness-of-fit test based on analysis of variance that offers an...repeated step). Sequential constraints are common in medicine, equipment maintenance, computer programming and technical support, data analysis...legal analysis, accounting, and many other home and workplace environments. Sequential constraints also play a role in such basic cognitive processes
NASA Astrophysics Data System (ADS)
Kelleher, Christa A.; Shaw, Stephen B.
2018-02-01
Recent research has found that hydrologic modeling over decadal time periods often requires time variant model parameters. Most prior work has focused on assessing time variance in model parameters conceptualizing watershed features and functions. In this paper, we assess whether adding a time variant scalar to potential evapotranspiration (PET) can be used in place of time variant parameters. Using the HBV hydrologic model and four different simple but common PET methods (Hamon, Priestly-Taylor, Oudin, and Hargreaves), we simulated 60+ years of daily discharge on four rivers in New York state. Allowing all ten model parameters to vary in time achieved good model fits in terms of daily NSE and long-term water balance. However, allowing single model parameters to vary in time - including a scalar on PET - achieved nearly equivalent model fits across PET methods. Overall, varying a PET scalar in time is likely more physically consistent with known biophysical controls on PET as compared to varying parameters conceptualizing innate watershed properties related to soil properties such as wilting point and field capacity. This work suggests that the seeming need for time variance in innate watershed parameters may be due to overly simple evapotranspiration formulations that do not account for all factors controlling evapotranspiration over long time periods.
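As a hedged illustration of the paper's idea (not the HBV model itself), a one-bucket water balance can carry a yearly scalar on PET in place of time-variant soil parameters; the structure and names below are assumptions for the sketch.

```python
# Toy water balance with a time-variant PET scalar k_year; stands in for
# the paper's HBV experiments, which it does not reproduce.
import numpy as np

def bucket(precip, pet, k_year, years, s_max=150.0):
    """Daily storage update with moisture-limited, scaled evapotranspiration."""
    s, q = 50.0, []
    for p, e0, yr in zip(precip, pet, years):
        aet = k_year[yr] * e0 * min(s / s_max, 1.0)  # scalar applied to PET
        s = max(s + p - aet, 0.0)
        runoff = max(s - s_max, 0.0)                 # overflow becomes discharge
        s -= runoff
        q.append(runoff)
    return np.asarray(q)

rng = np.random.default_rng(10)
days = 365
q = bucket(rng.exponential(2.0, days), np.full(days, 3.0),
           {2000: 1.1}, np.full(days, 2000))
print(f"annual discharge: {q.sum():.1f} mm")
```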
On the scaling of velocity and vorticity variances in turbulent channel flow
NASA Astrophysics Data System (ADS)
Leonard, A.
2015-11-01
The availability of new DNS-based statistics for turbulent channel flow (Lee & Moser, JFM 2015) along with previous results (e.g., Hoyas & Jiménez, Phys. Flu. 2006) has provided the opportunity for another look at the scaling laws for this flow. For example, data from the former (fig. 4(e)) for the streamwise velocity variance in the outer region clearly indicate a modified log law for that quantity at Reτ = 5200, i.e., ⟨u′u′⟩⁺ = C₀ − C₁ ln(y/δ) − C₂ ln²(y/δ), where δ is the channel half height. We find that this result fits the data very well for 0.1 < y/δ < 0.8. The Reynolds number (5200) is still apparently too low to observe the much-discussed log law (the above with C₂ = 0), which, presumably, would appear for roughly y/δ < 0.1, as it does in high-Reτ pipe flow (Hultmark et al., PRL 2012) with δ replaced by R. On the other hand, the above modified log law with the same values for C₁ and C₂ is a good fit for the pipe data at Reτ = 98×10³ for y/R > 0.12 (fig. 4 of Hultmark et al.).
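Because the modified log law is quadratic in s = ln(y/δ), the constants follow from ordinary polynomial least squares; the sketch below uses synthetic profile values in place of the DNS data.

```python
# Fit uu+ = C0 - C1 ln(y/d) - C2 ln^2(y/d) over the outer region by
# polynomial least squares; the profile here is synthetic, not DNS data.
import numpy as np

y_over_delta = np.linspace(0.1, 0.8, 40)  # outer-region fitting window
uu_plus = 7.0 - 1.0 * np.log(y_over_delta) - 0.5 * np.log(y_over_delta) ** 2

s = np.log(y_over_delta)
c2, c1, c0 = np.polyfit(s, uu_plus, deg=2)  # returns coefficients of s^2, s, 1
print(f"C0 = {c0:.3f}, C1 = {-c1:.3f}, C2 = {-c2:.3f}")
```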
A behavioral-genetic investigation of bulimia nervosa and its relationship with alcohol use disorder
Trace, Sara Elizabeth; Thornton, Laura Marie; Baker, Jessica Helen; Root, Tammy Lynn; Janson, Lauren Elizabeth; Lichtenstein, Paul; Pedersen, Nancy Lee; Bulik, Cynthia Marie
2013-01-01
Bulimia nervosa (BN) and alcohol use disorder (AUD) frequently co-occur and may share genetic factors; however, the nature of their association is not fully understood. We assessed the extent to which the same genetic and environmental factors contribute to liability to BN and AUD. A bivariate structural equation model using a Cholesky decomposition was fit to data from 7,241 women who participated in the Swedish Twin study of Adults: Genes and Environment. The proportion of variance accounted for by genetic and environmental factors for BN and AUD and the genetic and environmental correlations between these disorders were estimated. In the best-fitting model, the heritability estimates were 0.55 (95% CI: 0.37; 0.70) for BN and 0.62 (95% CI: 0.54; 0.70) for AUD. Unique environmental factors accounted for the remainder of variance for BN. The genetic correlation between BN and AUD was 0.23 (95% CI: 0.01; 0.44), and the correlation between the unique environmental factors for the two disorders was 0.35 (95% CI: 0.08; 0.61), suggesting moderate overlap in these factors. Findings from this investigation provide additional support that some of the same genetic factors may influence liability to both BN and AUD. PMID:23790978
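For readers unfamiliar with how such estimates arise, the standard bivariate Cholesky parametrization (generic, not specific to this study) expresses the additive-genetic covariance through lower-triangular paths, from which the genetic correlation follows:

```latex
% Standard bivariate Cholesky decomposition of the additive-genetic part:
\[
L = \begin{pmatrix} a_{11} & 0 \\ a_{21} & a_{22} \end{pmatrix}, \qquad
A = LL^{\top} = \begin{pmatrix}
  a_{11}^{2} & a_{11}a_{21} \\
  a_{11}a_{21} & a_{21}^{2} + a_{22}^{2}
\end{pmatrix}, \qquad
r_{g} = \frac{a_{11}a_{21}}{a_{11}\sqrt{a_{21}^{2}+a_{22}^{2}}}
      = \frac{a_{21}}{\sqrt{a_{21}^{2}+a_{22}^{2}}}.
\]
```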
Chilcot, Joseph; Norton, Sam; Wellsted, David; Almond, Mike; Davenport, Andrew; Farrington, Ken
2011-09-01
We sought to examine several competing factor structures of the Beck Depression Inventory-II (BDI) in a sample of patients with End-Stage Renal Disease (ESRD), a setting in which the factor structure is poorly defined, though depression symptoms are common. In addition, demographic and clinical correlates of the identified factors were examined. The BDI was administered to a clinical sample of 460 ESRD patients attending 4 UK renal centres. Competing models of the factor structure of the BDI were evaluated using confirmatory factor analysis. The best fitting model consisted of a general depression factor that accounted for 81% of the common variance between all items, along with orthogonal cognitive and somatic factors (G-S-C model, CFI=.983, TLI=.979, RMSEA=.037), which explained 8% and 9% of the common variance, respectively. Age, diabetes, and ethnicity were significantly related to the cognitive factor, whereas albumin, dialysis adequacy, and ethnicity were related to the somatic factor. No demographic or clinical variable was associated with the general factor. The general-factor model provides the best fitting and conceptually most acceptable interpretation of the BDI. Furthermore, the cognitive and somatic factors appear to be related to specific demographic and clinical factors. Copyright © 2011 Elsevier Inc. All rights reserved.
How Neglect and Punitiveness Influence Emotion Knowledge
Sullivan, Margaret Wolan; Carmody, Dennis P.
2010-01-01
To explore whether punitive parenting styles contribute to early-acquired emotion knowledge deficits observable in neglected children, we observed 42 preschool children’s emotion knowledge, expression recognition time, and IQ. The children’s mothers completed the Parent–Child Conflict Tactics Scales to assess the recent use of three types of discipline strategies (nonviolent, physically punitive, and psychological aggression), as well as neglectful parenting. Fifteen of the children were identified as neglected by Child Protective Services (CPS) reports; 27 children had no record of CPS involvement and served as the comparison group. There were no differences between the neglect and comparison groups in the demographic factors of gender, age, home language, minority status, or public assistance, nor on IQ. Hierarchical multiple regression modeling showed that neglect significantly predicted emotion knowledge. The addition of IQ contributed a significant amount of additional variance to the model and maintained the fit. Adding parental punitiveness in the final stage contributed little additional variance and did not significantly improve the fit. Thus, deficits in children’s emotion knowledge may be due primarily to lower IQ or neglect. IQ was unrelated to speed of emotion recognition. Punitiveness did not directly contribute to emotion knowledge deficits but appeared in exploratory analysis to be related to speed of emotion recognition. PMID:20099078
Martinez, Victor; Bünger, Lutz; Hill, William G
2000-01-01
Data were analysed from a divergent selection experiment for an indicator of body composition in the mouse, the ratio of gonadal fat pad to body weight (GFPR). Lines were selected for 20 generations for fat (F), lean (L) or were unselected (C), with three replicates of each. Selection was within full-sib families, 16 families per replicate for the first seven generations, eight subsequently. At generation 20, GFPR in the F lines was twice and in the L lines half that of C. A log transformation removed both asymmetry of response and heterogeneity of variance among lines, and so was used throughout. Estimates of genetic variance and heritability (approximately 50%) obtained using REML with an animal model were very similar, whether estimated from the first few generations of selection, or from all 20 generations, or from late generations having fitted pedigree. The estimates were also similar when estimated from selected or control lines. Estimates from REML also agreed with estimates of realised heritability. The results all accord with expectations under the infinitesimal model, despite the four-fold changes in mean. Relaxed selection lines, derived from generation 20, showed little regression in fatness after 40 generations without selection. PMID:14736404
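A compact way to see what the agreement between REML and realised heritability means: realised heritability is the regression of cumulative response on cumulative selection differential. The numbers below are invented for illustration, not the study's data.

```python
# Realized heritability h^2 = R/S, estimated as the slope of cumulative
# response on cumulative selection differential (toy numbers).
import numpy as np

cum_sel_diff = np.cumsum(np.full(20, 0.30))  # per-generation S, log scale
cum_response = 0.5 * cum_sel_diff + np.random.default_rng(2).normal(0, 0.05, 20)

h2_realized = np.polyfit(cum_sel_diff, cum_response, 1)[0]
print(f"realized h^2 ~ {h2_realized:.2f}")  # close to the ~50% REML estimate
```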
The Impact of Truth Surrogate Variance on Quality Assessment/Assurance in Wind Tunnel Testing
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2016-01-01
Minimum data volume requirements for wind tunnel testing are reviewed and shown to depend on error tolerance, response model complexity, random error variance in the measurement environment, and maximum acceptable levels of inference error risk. Distinctions are made between such related concepts as quality assurance and quality assessment in response surface modeling, as well as between precision and accuracy. Earlier research on the scaling of wind tunnel tests is extended to account for variance in the truth surrogates used at confirmation sites in the design space to validate proposed response models. A model adequacy metric is presented that represents the fraction of the design space within which model predictions can be expected to satisfy prescribed quality specifications. The impact of inference error on the assessment of response model residuals is reviewed. The number of sites where reasonably well-fitted response models actually predict inadequately is shown to be considerably less than the number of sites where residuals are out of tolerance. The significance of such inference error effects on common response model assessment strategies is examined.
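The adequacy metric can be illustrated schematically: sample confirmation sites across the design space, compare model predictions against a noisy truth surrogate, and report the fraction of sites within tolerance. The tolerance, surrogate variance, and functional forms below are assumptions, not DeLoach's formulation.

```python
# Monte Carlo estimate of the fraction of the design space where a response
# model agrees with a truth surrogate to within a prescribed tolerance.
import numpy as np

rng = np.random.default_rng(3)
truth = lambda x: 1.0 + 2.0 * x[:, 0] + 0.50 * x[:, 1] ** 2
model = lambda x: 1.0 + 2.0 * x[:, 0] + 0.45 * x[:, 1] ** 2  # slightly misfit

sites = rng.uniform(-1, 1, size=(10_000, 2))            # confirmation sites
surrogate = truth(sites) + rng.normal(0, 0.05, 10_000)  # surrogate has variance
tolerance = 0.10
adequacy = np.mean(np.abs(model(sites) - surrogate) < tolerance)
print(f"fraction of design space within tolerance: {adequacy:.1%}")
```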
EGSIEM combination service: combination of GRACE monthly K-band solutions on normal equation level
NASA Astrophysics Data System (ADS)
Meyer, Ulrich; Jean, Yoomin; Arnold, Daniel; Jäggi, Adrian
2017-04-01
The European Gravity Service for Improved Emergency Management (EGSIEM) project offers a scientific combination service, combining for the first time monthly GRACE gravity fields of different analysis centers (ACs) on normal equation (NEQ) level and thus taking all correlations between the gravity field coefficients and pre-eliminated orbit and instrument parameters correctly into account. Optimal weights for the individual NEQs are commonly derived by variance component estimation (VCE), as is the case for the products of the International VLBI Service (IVS) or the DTRF2008 reference frame realisation, which are also derived by combination on NEQ level. But variance factors are based on post-fit residuals and strongly depend on observation sampling and noise modeling, both of which are very diverse in the case of the individual EGSIEM ACs. These variance factors do not necessarily represent the true error levels of the estimated gravity field parameters, which are still governed by analysis noise. We present a combination approach where weights are derived on solution level, thereby taking the analysis noise into account.
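A schematic of NEQ-level combination with a simplified variance-component update (full rather than partial redundancies) might look as follows; this is a sketch of the general technique, not the EGSIEM processing chain.

```python
# Combine normal equations N_k x = b_k with weights 1/sigma_k^2, iterating a
# simplified VCE update from each system's residual quadratic form.
import numpy as np

def combine_neq(systems, n_iter=10):
    """systems: dicts with N (matrix), b (rhs), ltl (obs square sum), r (redundancy)."""
    sigma2 = np.ones(len(systems))
    for _ in range(n_iter):
        N = sum(s["N"] / s2 for s, s2 in zip(systems, sigma2))
        b = sum(s["b"] / s2 for s, s2 in zip(systems, sigma2))
        x = np.linalg.solve(N, b)
        # v'Pv = l'Pl - 2 b'x + x'N x, then sigma_k^2 = v'Pv / redundancy
        sigma2 = np.array([(s["ltl"] - 2 * s["b"] @ x + x @ s["N"] @ x) / s["r"]
                           for s in systems])
    return x, sigma2

rng = np.random.default_rng(11)
def toy_system(noise):
    A = rng.normal(size=(50, 2))
    obs = A @ np.array([1.0, -2.0]) + rng.normal(0, noise, 50)
    return {"N": A.T @ A, "b": A.T @ obs, "ltl": obs @ obs, "r": 48}

x, s2 = combine_neq([toy_system(0.1), toy_system(0.5)])
print(x, np.sqrt(s2))  # recovered solution and per-system noise levels
```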
Five for Sydney--A Journey through Science
ERIC Educational Resources Information Center
Lam, Stephen
2014-01-01
What is science? Depending on who is asked, it may mean the pursuit of knowledge, explanations of the everyday world, a difficult subject at school, or a field populated by larger than life characters such as Einstein, Feynman, or Hawking. For the author, science has been and remains an unexpected journey, an adventure and an ever-changing career.…
Planck's Constant as a Natural Unit of Measurement
ERIC Educational Resources Information Center
Quincey, Paul
2013-01-01
The proposed revision of SI units would embed Planck's constant into the definition of the kilogram, as a fixed constant of nature. Traditionally, Planck's constant is not readily interpreted as the size of something physical, and it is generally only encountered by students in the mathematics of quantum physics. Richard Feynman's…
Feynman Path Integral Approach to Electron Diffraction for One and Two Slits: Analytical Results
ERIC Educational Resources Information Center
Beau, Mathieu
2012-01-01
In this paper we present an analytic solution of the famous problem of diffraction and interference of electrons through one and two slits (for simplicity, only the one-dimensional case is considered). In addition to exact formulae, various approximations of the electron distribution are shown which facilitate the interpretation of the results.…
Perturbative Yang-Mills theory without Faddeev-Popov ghost fields
NASA Astrophysics Data System (ADS)
Huffel, Helmuth; Markovic, Danijel
2018-05-01
A modified Faddeev-Popov path integral density for the quantization of Yang-Mills theory in the Feynman gauge is discussed, where contributions of the Faddeev-Popov ghost fields are replaced by multi-point gauge field interactions. An explicit calculation to O(g²) shows the equivalence of the usual Faddeev-Popov scheme and its modified version.
Exploring the Standard Model of Particles
ERIC Educational Resources Information Center
Johansson, K. E.; Watkins, P. M.
2013-01-01
With the recent discovery of a new particle at the CERN Large Hadron Collider (LHC) the Higgs boson could be about to be discovered. This paper provides a brief summary of the standard model of particle physics and the importance of the Higgs boson and field in that model for non-specialists. The role of Feynman diagrams in making predictions for…
Developing a Framework for Analyzing Definitions: A Study of "The Feynman Lectures"
ERIC Educational Resources Information Center
Wong, Chee Leong; Chu, Hye-Eun; Yap, Kueh Chin
2014-01-01
One important purpose of a definition is to explain the meaning of a word. Any problems associated with a definition may impede students' learning. However, research studies on the definitional problems from the perspective of physics education are limited. Physics educators may not be aware of the nature and extent of definitional problems.…
Critique and Fiction: Doing Science Right in Rural Education Research
ERIC Educational Resources Information Center
Howley, Craig B.
2006-01-01
This essay explains the relevance of critique in rural education to novels about rural places. The most important quoted passage in the essay is from the noted physicist Richard Feynman: "Science is the belief in the ignorance of experts." Novelist-physicist C. P. Snow, historian Henry Adams, and poet and student-of-mathematics Kelly Cherry also…
Bayesian inversions of a dynamic vegetation model in four European grassland sites
NASA Astrophysics Data System (ADS)
Minet, J.; Laloy, E.; Tychon, B.; François, L.
2015-01-01
Eddy covariance data from four European grassland sites are used to probabilistically invert the CARAIB dynamic vegetation model (DVM) with ten unknown parameters, using the DREAM(ZS) Markov chain Monte Carlo (MCMC) sampler. We compare model inversions considering both homoscedastic and heteroscedastic eddy covariance residual errors, with variances either fixed a priori or jointly inferred with the model parameters. Agreements between measured and simulated data during calibration are comparable with previous studies, with root-mean-square errors (RMSE) of simulated daily gross primary productivity (GPP), ecosystem respiration (RECO) and evapotranspiration (ET) ranging from 1.73 to 2.19 g C m-2 day-1, 1.04 to 1.56 g C m-2 day-1, and 0.50 to 1.28 mm day-1, respectively. In validation, mismatches between measured and simulated data are larger, but still with Nash-Sutcliffe efficiency scores above 0.5 for three out of the four sites. Although measurement errors associated with eddy covariance data are known to be heteroscedastic, we showed that assuming a classical linear heteroscedastic model of the residual errors in the inversion does not fully remove heteroscedasticity. Since the employed heteroscedastic error model allows for larger deviations between simulated and measured data as the magnitude of the measured data increases, this error model expectedly leads to poorer data fitting compared to inversions considering a constant variance of the residual errors. Furthermore, sampling the residual error variances along with the model parameters results in overall similar model parameter posterior distributions as those obtained by fixing these variances beforehand, while slightly improving model performance. Although the calibrated model is generally capable of fitting the data within measurement errors, systematic biases in the model simulations are observed. These are likely due to model inadequacies such as shortcomings in the photosynthesis modelling. Besides model behaviour, differences between model parameter posterior distributions among the four grassland sites are also investigated. It is shown that the marginal distributions of the specific leaf area and characteristic mortality time parameters can be explained by site-specific ecophysiological characteristics. Lastly, the possibility of finding a common set of parameters among the four experimental sites is discussed.
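The "classical linear heteroscedastic model" of the residual errors can be written as sigma_i = a + b*|y_sim,i| inside the log-likelihood. The sketch below uses the widely available emcee ensemble sampler as a stand-in for DREAM(ZS) and a linear toy model in place of CARAIB; it requires the third-party emcee package, and all parameter values are invented.

```python
# Joint inference of model parameters and linear heteroscedastic error
# parameters (a, b); emcee replaces DREAM(ZS) for availability.
import numpy as np
import emcee

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 200)
obs = 2.0 + 0.8 * x + rng.normal(0, 0.1 + 0.05 * (2.0 + 0.8 * x))

def log_prob(p):
    slope, intercept, a, b = p
    if a <= 0 or b < 0:
        return -np.inf                      # flat prior with positivity bounds
    sim = intercept + slope * x
    sigma = a + b * np.abs(sim)             # linear heteroscedastic error model
    return -0.5 * np.sum(((obs - sim) / sigma) ** 2 + np.log(2 * np.pi * sigma**2))

ndim, nwalkers = 4, 16
p0 = np.array([1.0, 1.0, 0.2, 0.05]) + 1e-3 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
print(sampler.get_chain(discard=500, flat=True).mean(axis=0))
```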
Finite mixture model: A maximum likelihood estimation approach on time series data
NASA Astrophysics Data System (ADS)
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties. In addition, the estimator is consistent as the sample size increases to infinity, illustrating that maximum likelihood estimation is asymptotically unbiased. Moreover, as the sample size increases, the parameter estimates obtained by maximum likelihood estimation have the smallest variance compared with other statistical methods. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
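A minimal version of such a two-component fit, using scikit-learn's EM-based maximum likelihood GaussianMixture on synthetic data in place of the rubber-price and exchange-rate series:

```python
# Two-component Gaussian mixture fitted by maximum likelihood (EM);
# the two 'regimes' here are synthetic stand-ins for the paper's data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
series = np.concatenate([rng.normal(-0.002, 0.01, 700),   # calm component
                         rng.normal(0.001, 0.04, 300)])   # volatile component

gm = GaussianMixture(n_components=2, random_state=0).fit(series.reshape(-1, 1))
print("weights:", gm.weights_)
print("means:", gm.means_.ravel(), "variances:", gm.covariances_.ravel())
```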
Devigili, Alessandro; Evans, Jonathan P; Di Nisio, Andrea; Pilastro, Andrea
2015-09-15
In many species, females mate with multiple partners, meaning that sexual selection on male traits operates across a spectrum that encompasses the competition for mates (that is, before mating) and fertilizations (after mating). Despite being inextricably linked, pre- and postcopulatory sexual selection are typically studied independently, and we know almost nothing about how sexual selection operates across this divide. Here we bridge this knowledge gap using the livebearing fish Poecilia reticulata. We show that both selective episodes, as well as their covariance, explain a significant component of variance in male reproductive fitness. Moreover, linear and nonlinear selection simultaneously act on pre- and postcopulatory traits, and interact to generate multiple phenotypes with similar fitness.
Improving Adiponectin Levels in Individuals With Diabetes and Obesity: Insights From Look AHEAD.
Belalcazar, L Maria; Lang, Wei; Haffner, Steven M; Schwenke, Dawn C; Kriska, Andrea; Balasubramanyam, Ashok; Hoogeveen, Ron C; Pi-Sunyer, F Xavier; Tracy, Russell P; Ballantyne, Christie M
2015-08-01
This study investigated whether fitness changes resulting from lifestyle interventions for weight loss may independently contribute to the improvement of low adiponectin levels in obese individuals with diabetes. Look AHEAD (Action for Health in Diabetes) randomized overweight/obese individuals with type 2 diabetes to intensive lifestyle intervention (ILI) for weight loss or to diabetes support and education (DSE). Total and high-molecular weight adiponectin (adiponectins), weight, and cardiorespiratory fitness (submaximal exercise stress test) were measured in 1,397 participants at baseline and at 1 year, when ILI was most intense. Regression analyses examined the associations of 1-year weight and fitness changes with change in adiponectins. ILI resulted in greater improvements in weight, fitness, and adiponectins at 1 year compared with DSE (P < 0.0001). Weight loss and improved fitness were each associated with changes in adiponectins in men and women (P < 0.001 for all), after adjusting for baseline adiponectins, demographics, clinical variables, and treatment arm. Weight loss contributed 4-5% more to the variance of change in adiponectins than did increased fitness in men; in women, the contributions of improved fitness (1% greater) and of weight loss were similar. When weight and fitness changes were both accounted for, weight loss in men and increased fitness in women retained their strong associations (P < 0.0001) with adiponectin change. Improvements in fitness and weight with ILI were favorably but distinctly associated with changes in adiponectin levels in overweight/obese men and women with diabetes. Future studies need to investigate whether sex-specific biological determinants contribute to the observed associations. © 2015 by the American Diabetes Association. Readers may use this article as long as the work is properly cited, the use is educational and not for profit, and the work is not altered.
Disease Mapping of Zero-excessive Mesothelioma Data in Flanders
Neyens, Thomas; Lawson, Andrew B.; Kirby, Russell S.; Nuyts, Valerie; Watjou, Kevin; Aregay, Mehreteab; Carroll, Rachel; Nawrot, Tim S.; Faes, Christel
2016-01-01
Purpose: To investigate the distribution of mesothelioma in Flanders using Bayesian disease mapping models that account for both an excess of zeros and overdispersion. Methods: The numbers of newly diagnosed mesothelioma cases within all Flemish municipalities between 1999 and 2008 were obtained from the Belgian Cancer Registry. To deal with overdispersion, zero-inflation and geographical association, the hurdle combined model was proposed, which has three components: a Bernoulli zero-inflation mixture component to account for excess zeros, a gamma random effect to adjust for overdispersion and a normal conditional autoregressive random effect to attribute spatial association. This model was compared with other existing methods in literature. Results: The results indicate that hurdle models with a random effects term accounting for extra-variance in the Bernoulli zero-inflation component fit the data better than hurdle models that do not take overdispersion in the occurrence of zeros into account. Furthermore, traditional models that do not take into account excessive zeros but contain at least one random effects term that models extra-variance in the counts have better fits compared to their hurdle counterparts. In other words, the extra-variability, due to an excess of zeros, can be accommodated by spatially structured and/or unstructured random effects in a Poisson model such that the hurdle mixture model is not necessary. Conclusions: Models taking into account zero-inflation do not always provide better fits to data with excessive zeros than less complex models. In this study, a simple conditional autoregressive model identified a cluster in mesothelioma cases near a former asbestos processing plant (Kapelle-op-den-Bos). This observation is likely linked with historical local asbestos exposures. Future research will clarify this. PMID:27908590
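Stripped of the random effects and the spatial CAR term, the hurdle likelihood reduces to a Bernoulli component for zero versus non-zero counts and a zero-truncated Poisson for the positives; a minimal sketch with toy counts (not the registry data):

```python
# Minimal hurdle-Poisson negative log-likelihood: Bernoulli hurdle plus
# zero-truncated Poisson. No covariates, random effects, or CAR terms.
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

def hurdle_nll(params, y):
    logit_p, log_lam = params
    p = 1.0 / (1.0 + np.exp(-logit_p))  # P(count > 0)
    lam = np.exp(log_lam)
    ypos = y[y > 0]
    ll_zero = np.sum(y == 0) * np.log(1 - p)
    # truncated Poisson: log f(y | y>0) = y log(lam) - lam - log(y!) - log(1 - e^-lam)
    ll_pos = (ypos.size * np.log(p)
              + np.sum(ypos * np.log(lam) - lam - gammaln(ypos + 1))
              - ypos.size * np.log(1 - np.exp(-lam)))
    return -(ll_zero + ll_pos)

counts = np.array([0, 0, 0, 1, 2, 0, 4, 0, 1, 3, 0, 0])  # toy municipality counts
res = minimize(hurdle_nll, x0=[0.0, 0.0], args=(counts,))
print(res.x)  # fitted logit(p) and log(lambda)
```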
Loneliness and Schizotypy Are Distinct Constructs, Separate from General Psychopathology.
Badcock, Johanna C; Barkus, Emma; Cohen, Alex S; Bucks, Romola; Badcock, David R
2016-01-01
Loneliness is common in youth and associated with a significantly increased risk of psychological disorders. Although loneliness is strongly associated with psychosis, its relationship with psychosis proneness is unclear. Our aim in this paper was to test the hypothesis that loneliness and schizotypal traits, conveying risk for schizophrenia spectrum disorders, are similar but separate constructs. Pooling data from two non-clinical student samples (N = 551) we modeled the structure of the relationship between loneliness and trait schizotypy. Loneliness was assessed with the University of California, Los Angeles Loneliness Scale (UCLA-3), whilst negative (Social Anhedonia) and positive (Perceptual Aberrations) schizotypal traits were assessed with the Wisconsin Schizotypy Scales-Brief (WSS-B). Fit statistics indicated that the best fitting model of UCLA-3 scores comprises three correlated factors (Isolation, Related Connectedness, and Collective Connectedness), consistent with previous reports. Fit statistics for a two-factor model of positive and negative schizotypy were excellent. Next, bi-factor analysis was used to model a general psychopathology factor (p) across the three loneliness factors and separate negative and positive schizotypy traits. The results showed that all items (except 1) co-loaded on p. However, with the influence of p removed, additional variance remained within separate sub-factors, indicating that loneliness and negative and positive trait schizotypy are distinct and separable constructs. Similarly, once shared variance with p was removed, correlations between sub-factors of loneliness and schizotypal traits were non-significant. These findings have important clinical implications since they suggest that loneliness should not be conflated with the expression of schizotypy. Rather, loneliness needs to be specifically targeted for assessment and treatment in youth at risk for psychosis.
Statistical modelling of thermal annealing of fission tracks in apatite
NASA Astrophysics Data System (ADS)
Laslett, G. M.; Galbraith, R. F.
1996-12-01
We develop an improved methodology for modelling the relationship between mean track length, temperature, and time in fission track annealing experiments. We consider "fanning Arrhenius" models, in which contours of constant mean length on an Arrhenius plot are straight lines meeting at a common point. Features of our approach are explicit use of subject matter knowledge, treating mean length as the response variable, modelling of the mean-variance relationship with two components of variance, improved modelling of the control sample, and using information from experiments in which no tracks are seen. This approach overcomes several weaknesses in previous models and provides a robust six parameter model that is widely applicable. Estimation is via direct maximum likelihood which can be implemented using a standard numerical optimisation package. Because the model is highly nonlinear, some reparameterisations are needed to achieve stable estimation and calculation of precisions. Experience suggests that precisions are more convincingly estimated from profile log-likelihood functions than from the information matrix. We apply our method to the B-5 and Sr fluorapatite data of Crowley et al. (1991) and obtain well-fitting models in both cases. For the B-5 fluorapatite, our model exhibits less fanning than that of Crowley et al. (1991), although fitted mean values above 12 μm are fairly similar. However, predictions can be different, particularly for heavy annealing at geological time scales, where our model is less retentive. In addition, the refined error structure of our model results in tighter prediction errors, and has components of error that are easier to verify or modify. For the Sr fluorapatite, our fitted model for mean lengths does not differ greatly from that of Crowley et al. (1991), but our error structure is quite different.
Directional Migration of Recirculating Lymphocytes through Lymph Nodes via Random Walks
Thomas, Niclas; Matejovicova, Lenka; Srikusalanukul, Wichat; Shawe-Taylor, John; Chain, Benny
2012-01-01
Naive T lymphocytes exhibit extensive antigen-independent recirculation between blood and lymph nodes, where they may encounter dendritic cells carrying cognate antigen. We examine how long different T cells may spend in an individual lymph node by examining data from long term cannulation of blood and efferent lymphatics of a single lymph node in the sheep. We determine empirically the distribution of transit times of migrating T cells by applying the Least Absolute Shrinkage & Selection Operator (LASSO), or regularised regression, to fit experimental data describing the proportion of labelled infused cells in blood and efferent lymphatics over time. The optimal inferred solution reveals a distribution with high variance and strong skew. The mode transit time is typically between 10 and 20 hours, but a significant number of cells spend more than 70 hours before exiting. We complement the empirical machine learning based approach by modelling lymphocyte passage through the lymph node in silico. On the basis of previous two-photon analysis of lymphocyte movement, we optimised distributions which describe the transit times (first passage times) of discrete one-dimensional and continuous (Brownian) three-dimensional random walks with drift. The optimal fit is obtained when drift is small, i.e. the ratio of probabilities of migrating forward and backward within the node is close to one. These distributions are qualitatively similar to the inferred empirical distribution, with high variance and strong skew. In contrast, an optimised normal distribution of transit times (symmetrical around the mean) fitted the data poorly. The results demonstrate that the rapid recirculation of lymphocytes observed at a macro level is compatible with predominantly randomised movement within lymph nodes, and significant probabilities of long transit times. We discuss how this pattern of migration may contribute to facilitating interactions between low frequency T cells and antigen presenting cells carrying cognate antigen. PMID:23028891
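The first-passage intuition behind the fitted distributions is easy to reproduce: a nearly unbiased one-dimensional walk to an absorbing boundary yields transit times with high variance and a long right tail. The boundary at 30 steps and p_forward = 0.52 below are illustrative choices, not values fitted in the paper.

```python
# Simulate first-passage times of a 1D random walk with weak drift and a
# reflecting entry face; the resulting distribution is strongly right-skewed.
import numpy as np

rng = np.random.default_rng(6)
n_cells, boundary, p_forward = 2000, 30, 0.52

times = np.empty(n_cells)
for i in range(n_cells):
    pos, t = 0, 0
    while pos < boundary:
        pos += 1 if rng.random() < p_forward else -1
        pos = max(pos, 0)  # reflect at the entry face
        t += 1
    times[i] = t

print(f"median {np.median(times):.0f}, mean {times.mean():.0f}, "
      f"90th percentile {np.percentile(times, 90):.0f} steps")
```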
Harmsen, Wouter J; Ribbers, Gerard M; Slaman, Jorrit; Heijenbrok-Kal, Majanka H; Khajeh, Ladbon; van Kooten, Fop; Neggers, Sebastiaan J C M M; van den Berg-Emons, Rita J
2017-05-01
Peak oxygen uptake (VO2peak) established during progressive cardiopulmonary exercise testing (CPET) is the "gold standard" for cardiorespiratory fitness. However, CPET measurements may be limited in patients with aneurysmal subarachnoid hemorrhage (a-SAH) by disease-related complaints, such as cardiovascular health risks or anxiety. Furthermore, CPET with gas-exchange analyses requires specialized knowledge and infrastructure with limited availability in most rehabilitation facilities. To determine whether an easy-to-administer six-minute walk test (6MWT) is a valid clinical alternative to progressive CPET in order to predict VO2peak in individuals with a-SAH, twenty-seven patients performed the 6MWT and CPET with gas-exchange analyses on a cycle ergometer. Univariate and multivariate regression models were made to investigate the predictability of VO2peak from the six-minute walk distance (6MWD). Univariate regression showed that the 6MWD was strongly related to VO2peak (r = 0.75, p < 0.001), with an explained variance of 56% and a prediction error of 4.12 ml/kg/min, representing 18% of mean VO2peak. Adding age and sex to an extended multivariate regression model improved this relationship (r = 0.82, p < 0.001), with an explained variance of 67% and a prediction error of 3.67 ml/kg/min corresponding to 16% of mean VO2peak. The 6MWT is an easy-to-administer submaximal exercise test that can be selected to estimate cardiorespiratory fitness at an aggregated level, in groups of patients with a-SAH, which may help to evaluate interventions in a clinical or research setting. However, the relatively large prediction error does not allow for an accurate prediction in individual patients.
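The extended prediction model is an ordinary multiple regression of VO2peak on 6MWD, age, and sex; a sketch with synthetic data standing in for the 27 patients (all coefficients invented):

```python
# Multiple regression predicting VO2peak from 6MWD, age, and sex;
# synthetic data, not the study sample.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 27
walk_m = rng.normal(450, 80, n)   # six-minute walk distance (m)
age = rng.normal(55, 10, n)
male = rng.integers(0, 2, n)
vo2 = 5 + 0.035 * walk_m - 0.08 * age + 2.5 * male + rng.normal(0, 3, n)

X = sm.add_constant(np.column_stack([walk_m, age, male]))
fit = sm.OLS(vo2, X).fit()
print(fit.params, f"R^2 = {fit.rsquared:.2f}")
```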
Disease mapping of zero-excessive mesothelioma data in Flanders.
Neyens, Thomas; Lawson, Andrew B; Kirby, Russell S; Nuyts, Valerie; Watjou, Kevin; Aregay, Mehreteab; Carroll, Rachel; Nawrot, Tim S; Faes, Christel
2017-01-01
To investigate the distribution of mesothelioma in Flanders using Bayesian disease mapping models that account for both an excess of zeros and overdispersion. The numbers of newly diagnosed mesothelioma cases within all Flemish municipalities between 1999 and 2008 were obtained from the Belgian Cancer Registry. To deal with overdispersion, zero inflation, and geographical association, the hurdle combined model was proposed, which has three components: a Bernoulli zero-inflation mixture component to account for excess zeros, a gamma random effect to adjust for overdispersion, and a normal conditional autoregressive random effect to attribute spatial association. This model was compared with other existing methods in literature. The results indicate that hurdle models with a random effects term accounting for extra variance in the Bernoulli zero-inflation component fit the data better than hurdle models that do not take overdispersion in the occurrence of zeros into account. Furthermore, traditional models that do not take into account excessive zeros but contain at least one random effects term that models extra variance in the counts have better fits compared to their hurdle counterparts. In other words, the extra variability, due to an excess of zeros, can be accommodated by spatially structured and/or unstructured random effects in a Poisson model such that the hurdle mixture model is not necessary. Models taking into account zero inflation do not always provide better fits to data with excessive zeros than less complex models. In this study, a simple conditional autoregressive model identified a cluster in mesothelioma cases near a former asbestos processing plant (Kapelle-op-den-Bos). This observation is likely linked with historical local asbestos exposures. Future research will clarify this. Copyright © 2016 Elsevier Inc. All rights reserved.
Problems of allometric scaling analysis: examples from mammalian reproductive biology.
Martin, Robert D; Genoud, Michel; Hemelrijk, Charlotte K
2005-05-01
Biological scaling analyses employing the widely used bivariate allometric model are beset by at least four interacting problems: (1) choice of an appropriate best-fit line with due attention to the influence of outliers; (2) objective recognition of divergent subsets in the data (allometric grades); (3) potential restrictions on statistical independence resulting from phylogenetic inertia; and (4) the need for extreme caution in inferring causation from correlation. A new non-parametric line-fitting technique has been developed that eliminates requirements for normality of distribution, greatly reduces the influence of outliers and permits objective recognition of grade shifts in substantial datasets. This technique is applied in scaling analyses of mammalian gestation periods and of neonatal body mass in primates. These analyses feed into a re-examination, conducted with partial correlation analysis, of the maternal energy hypothesis relating to mammalian brain evolution, which suggests links between body size and brain size in neonates and adults, gestation period and basal metabolic rate. Much has been made of the potential problem of phylogenetic inertia as a confounding factor in scaling analyses. However, this problem may be less severe than suspected earlier because nested analyses of variance conducted on residual variation (rather than on raw values) reveals that there is considerable variance at low taxonomic levels. In fact, limited divergence in body size between closely related species is one of the prime examples of phylogenetic inertia. One common approach to eliminating perceived problems of phylogenetic inertia in allometric analyses has been calculation of 'independent contrast values'. It is demonstrated that the reasoning behind this approach is flawed in several ways. Calculation of contrast values for closely related species of similar body size is, in fact, highly questionable, particularly when there are major deviations from the best-fit line for the scaling relationship under scrutiny.
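On log-log axes the bivariate allometric model is a straight line, log y = log a + b log x, and the choice of best-fit line matters; the sketch below contrasts the ordinary least squares slope with the reduced major axis slope, one common alternative (not the authors' new non-parametric technique).

```python
# OLS vs. reduced major axis (RMA) slopes for a log-log allometry;
# synthetic gestation-period-style data.
import numpy as np

rng = np.random.default_rng(8)
log_mass = rng.uniform(1, 6, 100)  # e.g., log10 body mass
log_gest = 1.0 + 0.25 * log_mass + rng.normal(0, 0.1, 100)

b_ols = np.polyfit(log_mass, log_gest, 1)[0]
r = np.corrcoef(log_mass, log_gest)[0, 1]
b_rma = np.sign(r) * log_gest.std() / log_mass.std()  # RMA slope = sign(r)*sy/sx
print(f"OLS slope {b_ols:.3f}, RMA slope {b_rma:.3f}")
```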
NASA Astrophysics Data System (ADS)
Pan, Zhuokun; Huang, Jingfeng; Wang, Fumin
2013-12-01
Spectral feature fitting (SFF) is a commonly used strategy in hyperspectral imagery analysis for discriminating ground targets. Compared to other image analysis techniques, SFF does not secure higher accuracy in extracting image information in all circumstances. Multi-range spectral feature fitting (MRSFF), available in ENVI software, allows the user to focus on the spectral features of interest to yield better performance; the spectral wavelength ranges and their corresponding weights must therefore be determined. The purpose of this article is to demonstrate the performance of MRSFF in oilseed rape planting area extraction. A practical method for defining the weighted values, the variance coefficient weight method, was proposed as the criterion. Oilseed rape field canopy spectra from the whole growth stage were collected prior to investigating its phenological varieties; oilseed rape endmember spectra were extracted from the Hyperion image as identifying samples to be used in analyzing the oilseed rape field. Wavelength range divisions were determined by the difference between field-measured spectra and image spectra, and image spectral variance coefficient weights for each wavelength range were calculated corresponding to field-measured spectra from the closest date. By using MRSFF, wavelength ranges were classified to characterize the target's spectral features without compromising the spectral profile's entirety. The analysis was substantially successful in extracting oilseed rape planting areas (RMSE ≤ 0.06), and the RMSE histogram indicated a superior result compared to a conventional SFF. Accuracy assessment was based on the mapping result compared with spectral angle mapping (SAM) and the normalized difference vegetation index (NDVI). The MRSFF yielded a robust, convincing result and, therefore, may further the use of hyperspectral imagery in precision agriculture.
The self-transcendence scale: an investigation of the factor structure among nursing home patients.
Haugan, Gørill; Rannestad, Toril; Garåsen, Helge; Hammervold, Randi; Espnes, Geir Arild
2012-09-01
Self-transcendence, the ability to expand personal boundaries in multiple ways, has been found to provide well-being. The purpose of this study was to examine the dimensionality of the Norwegian version of the Self-Transcendence Scale, which comprises 15 items. Reed's empirical nursing theory of self-transcendence provided the theoretical framework; self-transcendence includes an interpersonal, intrapersonal, transpersonal, and temporal dimension. Cross-sectional data were obtained from a sample of 202 cognitively intact elderly patients in 44 Norwegian nursing homes. Exploratory factor analysis revealed two- and four-factor solutions with internally consistent dimensions of self-transcendence, explaining 35.3% (two factors) and 50.7% (four factors) of the variance, respectively. Confirmatory factor analysis indicated that the hypothesized two- and four-factor models fitted better than the one-factor model (χ², root mean square error of approximation, standardized root mean square residual, normed fit index, non-normed fit index, comparative fit index, goodness-of-fit index, and adjusted goodness-of-fit index). The findings indicate self-transcendence as a multifactorial construct; at present, we conclude that the two-factor model might be the most accurate and reasonable measure of self-transcendence. This research generates insights into the application of the widely used Self-Transcendence Scale by investigating its psychometric properties through confirmatory factor analysis. It also generates new research questions on the associations between self-transcendence and well-being.
Romain, Ahmed Jerôme; Bernard, Paquito; Hokayem, Marie; Gernigon, Christophe; Avignon, Antoine
2016-03-01
This study aimed to test three factorial structures conceptualizing the processes of change (POC) from the transtheoretical model and to examine the relationships between the POC and stages of change (SOC) among overweight and obese adults. Cross-sectional study. This study was conducted at the University Hospital of Montpellier, France. A sample of 289 overweight or obese participants (199 women) was enrolled in the study. Participants completed the POC and SOC questionnaires during a 5-day hospitalization for weight management. Structural equation modeling was used to compare the different factorial structures. The unweighted least-squares method yielded the best fit indices for the five-factor fully correlated model (goodness-of-fit statistic = .96; adjusted goodness-of-fit statistic = .95; standardized root mean residual = .062; normed-fit index = .95; parsimonious normed-fit index = .83; parsimonious goodness-of-fit statistic = .78). The multivariate analysis of variance was significant (p < .001). A post hoc test showed that individuals in advanced SOC used more of both experiential and behavioral POC than those in preaction stages, with effect sizes ranging from .06 to .29. This study supports the validity of the factorial structure of POC concerning physical activity and confirms the assumption that, in this context, people with excess weight use both experiential and behavioral processes. These preliminary results should be confirmed in a longitudinal study. © The Author(s) 2016.
Coswig, Victor S; Gentil, Paulo; Bueno, João C A; Follmer, Bruno; Marques, Vitor A; Del Vecchio, Fabrício B
2018-01-01
Among combat sports, Judo and Brazilian Jiu-Jitsu (BJJ) present elevated physical fitness demands from the high-intensity intermittent efforts. However, information regarding how metabolic and neuromuscular physical fitness is associated with technical-tactical performance in Judo and BJJ fights is not available. This study aimed to relate indicators of physical fitness with combat performance variables in Judo and BJJ. The sample consisted of Judo (n = 16) and BJJ (n = 24) male athletes. At the first meeting, the physical tests were applied and, in the second, simulated fights were performed for later notational analysis. The main findings indicate: (i) high reproducibility of the proposed instrument and protocol used for notational analysis in a mobile device; (ii) differences in the technical-tactical and time-motion patterns between modalities; (iii) performance-related variables are different in Judo and BJJ; and (iv) regression models based on metabolic fitness variables may account for up to 53% of the variances in technical-tactical and/or time-motion variables in Judo and up to 31% in BJJ, whereas neuromuscular fitness models can reach values up to 44 and 73% of prediction in Judo and BJJ, respectively. When all components are combined, they can explain up to 90% of high intensity actions in Judo. In conclusion, performance prediction models in simulated combat indicate that anaerobic, aerobic and neuromuscular fitness variables contribute to explain time-motion variables associated with high intensity and technical-tactical variables in Judo and BJJ fights.
Marginal and internal fits of fixed dental prostheses zirconia retainers.
Beuer, Florian; Aggstaller, Hans; Edelhoff, Daniel; Gernet, Wolfgang; Sorensen, John
2009-01-01
CAM (computer-aided manufacturing) and CAD (computer-aided design)/CAM systems facilitate the use of zirconia substructure materials for all-ceramic fixed partial dentures. This in vitro study compared the precision of fit of frameworks milled from semi-sintered zirconia blocks that were designed and machined with two CAD/CAM and one CAM system. Three-unit posterior fixed dental prostheses (FDP) (n=10) were fabricated for standardized dies by: a milling center CAD/CAM system (Etkon), a laboratory CAD/CAM system (Cerec InLab), and a laboratory CAM system (Cercon). After adaptation by a dental technician, the FDP were cemented on definitive dies, embedded and sectioned. The marginal and internal fits were measured under an optical microscope at 50x magnification. A one-way analysis of variance (ANOVA) was used to compare data (alpha=0.05). The mean (S.D.) for the marginal fit and internal fit adaptation were: 29.1 microm (14.0) and 62.7 microm (18.9) for the milling center system, 56.6 microm (19.6) and 73.5 microm (20.6) for the laboratory CAD/CAM system, and 81.4 microm (20.3) and 119.2 microm (37.5) for the laboratory CAM system. One-way ANOVA showed significant differences between systems for marginal fit (P<0.001) and internal fit (P<0.001). All systems showed marginal gaps below 120 microm and were therefore considered clinically acceptable. The CAD/CAM systems were more precise than the CAM system.
A New Goodness of Fit Test for Normality with Mean and Variance Unknown.
1981-12-01
be realized, since fewer random deviates may have to be generated in order to get consistent critical values at the desired α levels. The surviving table fragments report powers of the test at α levels of .20, .15, .10, .05, and .01 for straightforward and reflection calculation methods, including powers of the Kolmogorov-Smirnov statistic when the actual population is Cauchy.
Navy Ship Names: Background for Congress
2016-09-14
Secretary considers these nominations, along with others he receives as well as his own thoughts in this matter. At appropriate times, he selects names... The "nomination" process is often fiercely contested as differing groups make the case that "their" ship name is the most fitting... and practices of the Navy for naming vessels of the Navy, and an explanation for such variances; assesses the feasibility and advisability of
Analysis of longitudinal "time series" data in toxicology.
Cox, C; Cory-Slechta, D A
1987-02-01
Studies focusing on chronic toxicity or on the time course of toxicant effect often involve repeated measurements or longitudinal observations of endpoints of interest. Experimental design considerations frequently necessitate between-group comparisons of the resulting trends. Typically, procedures such as the repeated-measures analysis of variance have been used for statistical analysis, even though the required assumptions may not be satisfied in some circumstances. This paper describes an alternative analytical approach which summarizes curvilinear trends by fitting cubic orthogonal polynomials to individual profiles of effect. The resulting regression coefficients serve as quantitative descriptors which can be subjected to group significance testing. Randomization tests based on medians are proposed to provide a comparison of treatment and control groups. Examples from the behavioral toxicology literature are considered, and the results are compared to more traditional approaches, such as repeated-measures analysis of variance.
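A compact version of the proposed analysis, per-subject cubic polynomial fits followed by a median-based randomization test on one coefficient, can be sketched as below; for brevity ordinary rather than orthogonal polynomials are used, and the profiles are synthetic.

```python
# Per-subject cubic fits, then a randomization test comparing group medians
# of the linear coefficient; synthetic longitudinal profiles.
import numpy as np

rng = np.random.default_rng(9)
t = np.linspace(0, 1, 12)  # repeated measurement times

def profiles(n, slope):  # toy curvilinear dose-response profiles
    return slope * t + 0.5 * t ** 3 + rng.normal(0, 0.1, (n, t.size))

def linear_coefs(y):  # cubic fit per subject; keep the linear coefficient
    return np.polynomial.polynomial.polyfit(t, y.T, 3)[1]

control, treated = profiles(10, 1.0), profiles(10, 1.4)
obs = np.median(linear_coefs(treated)) - np.median(linear_coefs(control))

pooled = np.concatenate([linear_coefs(treated), linear_coefs(control)])
null = [np.median(s[:10]) - np.median(s[10:])
        for s in (rng.permutation(pooled) for _ in range(2000))]
p = np.mean(np.abs(null) >= abs(obs))
print(f"median difference {obs:.3f}, randomization p = {p:.3f}")
```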
Kaasalainen, Karoliina; Kasila, Kirsti; Komulainen, Jyrki; Malvela, Miia; Poskiparta, Marita
2015-01-01
Insufficient physical activity (PA) and poor physical fitness are risks for several noncommunicable diseases among working-aged men. PA programs have been launched to increase activity levels in the population, but working-aged men have been underrepresented in these programs. The aim of the present cross-sectional study was to evaluate the validity of a short scale for psychosocial factors among Finnish working-aged men who participated in a PA campaign. The study also examined the associations between psychosocial factors and phase of PA change across fitness groups. Physical fitness was assessed with a body fitness index constructed on the basis of a handgrip test, the Polar OwnIndex Test, and body composition analysis (InBody 720). The men were classified into low (n = 162), moderate (n = 358), and high (n = 320) body fitness index groups. Psychosocial factors and self-reported phase of PA change were assessed with a questionnaire. Psychometric properties of the scale were assessed with confirmatory factor analysis and differences between phases of PA change were examined with one-way analysis of variance. The evaluated scale included factors for self-efficacy, goal setting, skills, and social support. Good physical fitness was related to better perceived self-efficacy and ability to manage one's PA environment. Goal setting was critical for PA change at all fitness levels. Better understanding of the interactions between psychosocial factors and PA change could help in targeting PA programs to low-fit men. Further study should examine the validity of the improved psychosocial measure. PMID:26614443
Götz, Friedrich M.; Ebert, Tobias; Rentfrow, Peter J.
2018-01-01
The present study extended traditional nation-based research on person–culture–fit to the regional level. First, we examined the geographical distribution of Big Five personality traits in Switzerland. Across the 26 Swiss cantons, unique patterns were observed for all traits. For Extraversion and Neuroticism, clear language divides emerged between the French- and Italian-speaking South-West and the German-speaking North-East. Second, multilevel modeling demonstrated that person–environment–fit in Big Five, composed of elevation (i.e., mean differences between individual profile and cantonal profile), scatter (differences in mean variances) and shape (Pearson correlations between individual and cantonal profiles across all traits; Furr, 2008, 2010), predicted the development of subjective wellbeing (i.e., life satisfaction, satisfaction with personal relationships, positive affect, negative affect) over a period of 4 years. Unexpectedly, while the effects of shape were in line with the person–environment–fit hypothesis (better fit predicted higher subjective wellbeing), the effects of scatter showed the opposite pattern, and null findings were observed for elevation. Across a series of robustness checks, the patterns for shape and elevation were consistently replicated. This was largely true of scatter as well, although its effects appeared somewhat less robust and more sensitive to how fit was modeled when predicting certain outcomes (negative affect, positive affect). Distinguishing between supplementary and complementary fit may help to reconcile these findings, and future research should explore whether, and if so under which conditions, these concepts apply to the respective facets of person–culture–fit. PMID:29713299
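The three fit components named in this abstract (elevation, scatter, shape) are straightforward to compute from a pair of trait profiles. The sketch below follows the definitions summarized above, with scatter taken as the difference in within-profile variance; the profile values are invented for illustration and are not data from the study.

```python
import numpy as np

# Hypothetical z-scored Big Five profiles (O, C, E, A, N); values are
# illustrative only, not taken from the Swiss data.
person = np.array([0.8, -0.3, 0.5, 0.1, -0.9])   # individual profile
canton = np.array([0.2, -0.1, 0.4, 0.3, -0.5])   # cantonal mean profile

# Elevation: difference between the two profile means.
elevation = person.mean() - canton.mean()

# Scatter: difference in the spread (variance) of the two profiles.
scatter = person.var(ddof=1) - canton.var(ddof=1)

# Shape: Pearson correlation between the profiles across all five traits.
shape = np.corrcoef(person, canton)[0, 1]

print(f"elevation={elevation:.3f}, scatter={scatter:.3f}, shape={shape:.3f}")
```

In the multilevel models described above, indices like these would enter as person-level predictors of subjective wellbeing.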
How important are direct fitness benefits of sexual selection?
NASA Astrophysics Data System (ADS)
Møller, A. P.; Jennions, M. D.
2001-10-01
Females may choose mates based on the expression of secondary sexual characters that signal direct, material fitness benefits or indirect, genetic fitness benefits. Genetic benefits are acquired in the generation subsequent to that in which mate choice is performed, and the maintenance of genetic variation in viability has been considered a theoretical problem. Consequently, the magnitude of indirect benefits has traditionally been considered to be small. Direct fitness benefits can be maintained without consideration of mechanisms sustaining genetic variability, and they have thus been equated with the default benefits acquired by choosy females. There is, however, still debate as to whether or not males should honestly advertise direct benefits such as their willingness to invest in parental care. We use meta-analysis to estimate the magnitude of direct fitness benefits in terms of fertility, fecundity and two measures of paternal care (feeding rate in birds, hatching rate in male guarding ectotherms) based on an extensive literature survey. The mean coefficients of determination weighted by sample size were 6.3%, 2.3%, 1.3% and 23.6%, respectively. This compares to a mean weighted coefficient of determination of 1.5% for genetic viability benefits in studies of sexual selection. Thus, for several fitness components, direct benefits are only slightly more important than indirect ones arising from female choice. Hatching rate in male guarding ectotherms was by far the most important direct fitness component, explaining almost a quarter of the variance. Our analysis also shows that male sexual advertisements do not always reliably signal direct fitness benefits.
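The headline numbers in this abstract (6.3%, 2.3%, 1.3%, 23.6%, 1.5%) are sample-size-weighted mean coefficients of determination. A minimal sketch of that computation, with made-up per-study effect sizes and sample sizes standing in for the surveyed literature:

```python
import numpy as np

# Hypothetical per-study effect sizes (Pearson r) and sample sizes;
# the studies and numbers are invented for illustration.
r = np.array([0.31, 0.12, 0.25, 0.08, 0.40])
n = np.array([24, 60, 35, 110, 18])

# Coefficient of determination per study, then its sample-size-weighted
# mean -- the quantity reported per fitness component in the abstract.
r2 = r**2
weighted_mean_r2 = np.average(r2, weights=n)
print(f"weighted mean r^2 = {100 * weighted_mean_r2:.1f}%")
```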