On the Relations among Regular, Equal Unique Variances, and Image Factor Analysis Models.
ERIC Educational Resources Information Center
Hayashi, Kentaro; Bentler, Peter M.
2000-01-01
Investigated the conditions under which the matrix of factor loadings from the factor analysis model with equal unique variances will give a good approximation to the matrix of factor loadings from the regular factor analysis model. Extends the results to the image factor analysis model. Discusses implications for practice. (SLD)
49 CFR 350.345 - How does a State apply for additional variances from the FMCSRs?
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 5 2010-10-01 2010-10-01 false How does a State apply for additional variances... apply for additional variances from the FMCSRs? Any State may apply to the Administrator for a variance from the FMCSRs for intrastate commerce. The variance will be granted only if the State...
Finger gnosis predicts a unique but small part of variance in initial arithmetic performance.
Wasner, Mirjam; Nuerk, Hans-Christoph; Martignon, Laura; Roesch, Stephanie; Moeller, Korbinian
2016-06-01
Recent studies indicated that finger gnosis (i.e., the ability to perceive and differentiate one's own fingers) is associated reliably with basic numerical competencies. In this study, we aimed at examining whether finger gnosis is also a unique predictor for initial arithmetic competencies at the beginning of first grade, and thus before formal math instruction starts. Therefore, we controlled for influences of domain-specific numerical precursor competencies, domain-general cognitive ability, and natural variables such as gender and age. Results from 321 German first-graders revealed that finger gnosis indeed predicted a unique and relevant but nevertheless only small part of the variance in initial arithmetic performance (∼1%-2%) as compared with influences of general cognitive ability and numerical precursor competencies. Taken together, these results substantiated the notion of a unique association between finger gnosis and arithmetic and further corroborate the theoretical idea of finger-based representations contributing to numerical cognition. However, the only small part of variance explained by finger gnosis seems to limit its relevance for diagnostic purposes. PMID:26895483
Vitezica, Zulma G.; Varona, Luis; Legarra, Andres
2013-01-01
Genomic evaluation models can fit additive and dominant SNP effects. Under quantitative genetics theory, additive or “breeding” values of individuals are generated by substitution effects, which involve both “biological” additive and dominant effects of the markers. Dominance deviations include only a portion of the biological dominant effects of the markers. Additive variance includes variation due to the additive and dominant effects of the markers. We describe a matrix of dominant genomic relationships across individuals, D, which is similar to the G matrix used in genomic best linear unbiased prediction. This matrix can be used in a mixed-model context for genomic evaluations or to estimate dominant and additive variances in the population. From the “genotypic” value of individuals, an alternative parameterization defines additive and dominance as the parts attributable to the additive and dominant effect of the markers. This approach underestimates the additive genetic variance and overestimates the dominance variance. Transforming the variances from one model into the other is trivial if the distribution of allelic frequencies is known. We illustrate these results with mouse data (four traits, 1884 mice, and 10,946 markers) and simulated data (2100 individuals and 10,000 markers). Variance components were estimated correctly in the model, considering breeding values and dominance deviations. For the model considering genotypic values, the inclusion of dominant effects biased the estimate of additive variance. Genomic models were more accurate for the estimation of variance components than their pedigree-based counterparts. PMID:24121775
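The genomic relationship matrices discussed above can be illustrated with a short NumPy sketch. This builds the commonly used VanRaden-style additive matrix G and a dominance matrix D from the dominance-deviation coding associated with the breeding-value parameterization; it is an illustrative sketch with assumed variable names, not code from the paper.

```python
import numpy as np

def genomic_relationship_matrices(M):
    """M: n x m genotype matrix, entries 0/1/2 counting copies of one allele."""
    p = M.mean(axis=0) / 2.0                 # per-marker allele frequencies
    q = 1.0 - p
    Z = M - 2.0 * p                          # centered additive coding
    G = Z @ Z.T / np.sum(2.0 * p * q)        # VanRaden-style additive matrix
    # Dominance-deviation coding: 2pq for heterozygotes,
    # -2q^2 and -2p^2 for the two homozygotes
    W = np.where(M == 1, 2.0 * p * q,
                 np.where(M == 2, -2.0 * q ** 2, -2.0 * p ** 2))
    D = W @ W.T / np.sum((2.0 * p * q) ** 2)
    return G, D
```

Both matrices are symmetric and can be plugged into a mixed-model equation system in place of (or alongside) pedigree-based relationship matrices.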
Estimation of Additive, Dominance, and Imprinting Genetic Variance Using Genomic Data
Lopes, Marcos S.; Bastiaansen, John W. M.; Janss, Luc; Knol, Egbert F.; Bovenhuis, Henk
2015-01-01
Traditionally, exploration of genetic variance in humans, plants, and livestock species has been limited mostly to the use of additive effects estimated using pedigree data. However, with the development of dense panels of single-nucleotide polymorphisms (SNPs), the exploration of genetic variation of complex traits is moving from quantifying the resemblance between family members to the dissection of genetic variation at individual loci. With SNPs, we were able to quantify the contribution of additive, dominance, and imprinting variance to the total genetic variance by using a SNP regression method. The method was validated in simulated data and applied to three traits (number of teats, backfat, and lifetime daily gain) in three purebred pig populations. In simulated data, the estimates of additive, dominance, and imprinting variance were very close to the simulated values. In real data, dominance effects account for a substantial proportion of the total genetic variance (up to 44%) for these traits in these populations. The contribution of imprinting to the total phenotypic variance of the evaluated traits was relatively small (1–3%). Our results indicate a strong relationship between additive variance explained per chromosome and chromosome length, which has been described previously for other traits in other species. We also show that a similar linear relationship exists for dominance and imprinting variance. These novel results improve our understanding of the genetic architecture of the evaluated traits and show promise for applying the SNP regression method to other traits and species, including human diseases. PMID:26438289
McGuigan, Katrina; Aguirre, J David; Blows, Mark W
2015-11-01
How new mutations contribute to genetic variation is a key question in biology. Although the evolutionary fate of an allele is largely determined by its heterozygous effect, most estimates of mutational variance and mutational effects derive from highly inbred lines, where new mutations are present in homozygous form. In an attempt to overcome this limitation, middle-class neighborhood (MCN) experiments have been used to assess the fitness effect of new mutations in heterozygous form. However, because MCN populations harbor substantial standing genetic variance, estimates of mutational variance have not typically been available from such experiments. Here we employ a modification of the animal model to analyze data from 22 generations of Drosophila serrata bred in an MCN design. Mutational heritability, measured for eight cuticular hydrocarbons, 10 wing-shape traits, and wing size in this outbred genetic background, ranged from 0.0006 to 0.006 (with one exception), a similar range to that reported from studies employing inbred lines. Simultaneously partitioning the additive and mutational variance in the same outbred population allowed us to quantitatively test the ability of mutation-selection balance models to explain the observed levels of additive and mutational genetic variance. The Gaussian allelic approximation and house-of-cards models, which assume real stabilizing selection on single traits, both overestimated the genetic variance maintained at equilibrium, but the house-of-cards model was a closer fit to the data. This analytical approach has the potential to be broadly applied, expanding our understanding of the dynamics of genetic variance in natural populations. PMID:26384357
ERIC Educational Resources Information Center
Miller, Geoffrey F.; Penke, Lars
2007-01-01
Most theories of human mental evolution assume that selection favored higher intelligence and larger brains, which should have reduced genetic variance in both. However, adult human intelligence remains highly heritable, and is genetically correlated with brain size. This conflict might be resolved by estimating the coefficient of additive genetic…
McLaughlin, Elizabeth N; Stewart, Sherry H; Taylor, Steven
2007-01-01
Anxiety sensitivity (AS) is an established cognitive risk factor for anxiety disorders. In children and adolescents, AS is usually measured with the Childhood Anxiety Sensitivity Index (CASI). Factor analytic studies suggest that the CASI is comprised of 3 lower-order factors pertaining to Physical, Psychological and Social Concerns. There has been little research on the validity of these lower-order factors. We examined the concurrent and incremental validity of the CASI and its lower-order factors in a non-clinical sample of 349 children and adolescents. CASI scores predicted symptoms of DSM-IV anxiety disorder subtypes as measured by the Spence Children's Anxiety Scale (SCAS) after accounting for variance due to State-Trait Anxiety Inventory scores. CASI Physical Concerns scores incrementally predicted scores on each of the SCAS scales, whereas scores on the Social and Psychological Concerns subscales incrementally predicted scores on conceptually related symptom scales (e.g. CASI Social Concerns scores predicted Social Phobia symptoms). Overall, this study demonstrates that there is added value in measuring AS factors in children and adolescents. PMID:18049946
Forsberg, Simon K. G.; Andreatta, Matthew E.; Huang, Xin-Yuan; Danku, John; Salt, David E.; Carlborg, Örjan
2015-01-01
Genome-wide association (GWA) analyses have generally been used to detect individual loci contributing to the phenotypic diversity in a population by the effects of these loci on the trait mean. More rarely, loci have also been detected based on variance differences between genotypes. Several hypotheses have been proposed to explain the possible genetic mechanisms leading to such variance signals. However, little is known about what causes these signals, or whether this genetic variance-heterogeneity reflects mechanisms of importance in natural populations. Previously, we identified a variance-heterogeneity GWA (vGWA) signal for leaf molybdenum concentrations in Arabidopsis thaliana. Here, fine-mapping of this association reveals that the vGWA emerges from the effects of three independent genetic polymorphisms that all are in strong LD with the markers displaying the genetic variance-heterogeneity. By revealing the genetic architecture underlying this vGWA signal, we uncovered the molecular source of a significant amount of hidden additive genetic variation or “missing heritability”. Two of the three polymorphisms underlying the genetic variance-heterogeneity are promoter variants for Molybdate transporter 1 (MOT1), and the third a variant located ~25 kb downstream of this gene. A fourth independent association was also detected ~600 kb upstream of MOT1. Use of a T-DNA knockout allele highlights Copper Transporter 6; COPT6 (AT2G26975) as a strong candidate gene for this association. Our results show that an extended LD across a complex locus including multiple functional alleles can lead to a variance-heterogeneity between genotypes in natural populations. Further, they provide novel insights into the genetic regulation of ion homeostasis in A. thaliana, and empirically confirm that variance-heterogeneity based GWA methods are a valuable tool to detect novel associations of biological importance in natural populations. PMID:26599497
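The kind of variance-heterogeneity signal a vGWA scans for can be illustrated with a Brown-Forsythe-type statistic computed per marker across genotype classes; larger values indicate that the groups differ in spread rather than in mean. This is a generic sketch of the statistic, not the authors' exact method, and all names are assumed.

```python
import numpy as np

def brown_forsythe(groups):
    """F-like statistic on absolute deviations from each group's median;
    large values suggest variance heterogeneity across genotype groups."""
    z = [np.abs(g - np.median(g)) for g in groups]
    k = len(z)
    n = sum(len(g) for g in z)
    grand = np.concatenate(z).mean()
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in z)   # between groups
    ssw = sum(((g - g.mean()) ** 2).sum() for g in z)        # within groups
    return (ssb / (k - 1)) / (ssw / (n - k))
```

Using medians rather than means makes the test robust to non-normal trait distributions, which matters when scanning many markers.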
Gasparini, Clelia; Devigili, Alessandro; Dosselli, Ryan; Pilastro, Andrea
2013-01-01
In polyandrous species, a male's reproductive success depends on his fertilization capability, and traits enhancing competitive fertilization success will be under strong directional selection. This leads to the prediction that these traits should show stronger condition dependence and larger genetic variance than other traits subject to weaker or stabilizing selection. While empirical evidence of condition dependence in postcopulatory traits is increasing, the comparison between sexually selected and ‘control’ traits is often based on untested assumptions concerning the different strength of selection acting on these traits. Furthermore, information on selection in the past is essential, as both condition dependence and genetic variance of a trait are likely to be influenced by the pattern of selection acting historically on it. Using the guppy (Poecilia reticulata), a livebearing fish with high levels of multiple paternity, we performed three independent experiments on three ejaculate quality traits, sperm number, velocity, and size, which have been previously shown to be subject to strong, intermediate, and weak directional postcopulatory selection, respectively. First, we conducted an inbreeding experiment to determine the pattern of selection in the past. Second, we used a diet restriction experiment to estimate their level of condition dependence. Third, we used a half-sib/full-sib mating design to estimate the coefficients of additive genetic variance (CVA) underlying these traits. Additionally, using a simulated predator evasion test, we showed that both inbreeding and diet restriction significantly reduced condition. According to predictions, sperm number showed higher inbreeding depression, stronger condition dependence, and larger CVA than sperm velocity and sperm size. The lack of significant genetic correlation between sperm number and velocity suggests that the former may respond to selection independently of other ejaculate quality traits.
Huchard, E; Charmantier, A; English, S; Bateman, A; Nielsen, J F; Clutton-Brock, T
2014-09-01
Individual variation in growth is high in cooperative breeders and may reflect plastic divergence in developmental trajectories leading to breeding vs. helping phenotypes. However, the relative importance of additive genetic variance and developmental plasticity in shaping growth trajectories is largely unknown in cooperative vertebrates. This study exploits weekly sequences of body mass from birth to adulthood to investigate sources of variance in, and covariance between, early and later growth in wild meerkats (Suricata suricatta), a cooperative mongoose. Our results indicate that (i) the correlation between early growth (prior to nutritional independence) and adult mass is positive but weak, and there are frequent changes (compensatory growth) in post-independence growth trajectories; (ii) among parameters describing growth trajectories, those describing growth rate (prior to and at nutritional independence) show undetectable heritability while associated size parameters (mass at nutritional independence and asymptotic mass) are moderately heritable (0.09 ≤ h² < 0.3); and (iii) additive genetic effects, rather than early environmental effects, mediate the covariance between early growth and adult mass. These results reveal that meerkat growth trajectories remain plastic throughout development, rather than showing early and irreversible divergence, and that the weak effects of early growth on adult mass, an important determinant of breeding success, are partly genetic. In contrast to most cooperative invertebrates, the acquisition of breeding status is often determined after sexual maturity and strongly impacted by chance in many cooperative vertebrates, who may therefore retain the ability to adjust their morphology to environmental changes and social opportunities arising throughout their development, rather than specializing early. PMID:24962704
Nietlisbach, Pirmin; Hadfield, Jarrod D
2015-07-01
Whenever allele frequencies are unequal, nonadditive gene action contributes to additive genetic variance and therefore the resemblance between parents and offspring. The reason for this has not been easy to understand. Here, we present a new single-locus decomposition of additive genetic variance that may give greater intuition about this important result. We show that the contribution of dominant gene action to parent-offspring resemblance only depends on the degree to which the heterozygosity of parents and offspring covary. Thus, dominant gene action only contributes to additive genetic variance when heterozygosity is heritable. Under most circumstances this is the case because individuals with rare alleles are more likely to be heterozygous, and because they pass rare alleles to their offspring they also tend to have heterozygous offspring. When segregating alleles are at equal frequency there are no rare alleles, the heterozygosities of parents and offspring are uncorrelated and dominant gene action does not contribute to additive genetic variance. PMID:26100570
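The single-locus result above can be checked numerically against the textbook decomposition, in which the average substitution effect is α = a + d(q − p), giving V_A = 2pqα² and V_D = (2pqd)². This is a hedged illustration of the standard formulas, not the authors' new decomposition.

```python
def additive_variance(p, a, d):
    """Single-locus V_A = 2pq * alpha^2, with alpha = a + d*(q - p)."""
    q = 1.0 - p
    alpha = a + d * (q - p)   # average effect of an allele substitution
    return 2.0 * p * q * alpha ** 2

def dominance_variance(p, d):
    """Single-locus V_D = (2pq * d)^2."""
    q = 1.0 - p
    return (2.0 * p * q * d) ** 2
```

At p = 0.5 the substitution effect reduces to a, so dominance contributes nothing to V_A; at unequal frequencies the d(q − p) term makes dominant gene action appear in the additive variance, consistent with the abstract.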
Travers, L M; Simmons, L W; Garcia-Gonzalez, F
2016-05-01
Polyandry is widespread despite its costs. The sexually selected sperm hypotheses ('sexy' and 'good' sperm) posit that sperm competition plays a role in the evolution of polyandry. Two poorly studied assumptions of these hypotheses are the presence of additive genetic variance in polyandry and sperm competitiveness. Using a quantitative genetic breeding design in a natural population of Drosophila melanogaster, we first established the potential for polyandry to respond to selection. We then investigated whether polyandry can evolve through sexually selected sperm processes. We measured lifetime polyandry and offensive sperm competitiveness (P2) while controlling for sampling variance due to male × male × female interactions. We also measured additive genetic variance in egg-to-adult viability and controlled for its effect on P2 estimates. Female lifetime polyandry showed significant and substantial additive genetic variance and evolvability. In contrast, we found little genetic variance or evolvability in P2 or egg-to-adult viability. Additive genetic variance in polyandry highlights its potential to respond to selection. However, the low levels of genetic variance in sperm competitiveness suggest that the evolution of polyandry may not be driven by sexy sperm or good sperm processes. PMID:26801640
McFarlane, S Eryn; Gorrell, Jamieson C; Coltman, David W; Humphries, Murray M; Boutin, Stan; McAdam, Andrew G
2014-01-01
A trait must genetically correlate with fitness in order to evolve in response to natural selection, but theory suggests that strong directional selection should erode additive genetic variance in fitness and limit future evolutionary potential. Balancing selection has been proposed as a mechanism that could maintain genetic variance if fitness components trade off with one another and has been invoked to account for empirical observations of higher levels of additive genetic variance in fitness components than would be expected from mutation–selection balance. Here, we used a long-term study of an individually marked population of North American red squirrels (Tamiasciurus hudsonicus) to look for evidence of (1) additive genetic variance in lifetime reproductive success and (2) fitness trade-offs between fitness components, such as male and female fitness or fitness in high- and low-resource environments. “Animal model” analyses of a multigenerational pedigree revealed modest maternal effects on fitness, but very low levels of additive genetic variance in lifetime reproductive success overall as well as fitness measures within each sex and environment. It therefore appears that there are very low levels of direct genetic variance in fitness and fitness components in red squirrels to facilitate contemporary adaptation in this population. PMID:24963372
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-17
.... If the prism is not smaller than the existing levee cross section, it is unlikely that a variance... dimension the levee prism (see Figure 1). The prism is the minimum analytical cross section that, given site... require a larger prism. The prism must also satisfy the requirements of any other applicable standard....
Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J.
2008-01-01
Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To optimize between computational time and good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower compared to the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler. PMID:18558655
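One ingredient of a Gibbs sampler like the one described above is the conditional draw of a variance component. For independent (pretransformed) effects u ~ N(0, σ²I) and a flat prior, the full conditional of σ² is scaled inverse chi-square, sampled as u′u divided by a χ² draw. This is a minimal, generic sketch of that single step, not the authors' hybrid sampler.

```python
import numpy as np

def sample_variance(u, rng):
    """One Gibbs draw of sigma^2 from its scaled-inverse-chi-square
    full conditional (flat prior), given effects u ~ N(0, sigma^2 I)."""
    n = u.size
    return (u @ u) / rng.chisquare(n)
```

In a full sampler this draw alternates with updates of the additive and dominance effect vectors, conditioning each block on the current values of the others.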
Kumar, Satish; Molloy, Claire; Muñoz, Patricio; Daetwyler, Hans; Chagné, David; Volz, Richard
2015-01-01
The nonadditive genetic effects may have an important contribution to total genetic variation of phenotypes, so estimates of both the additive and nonadditive effects are desirable for breeding and selection purposes. Our main objectives were to: estimate additive, dominance and epistatic variances of apple (Malus × domestica Borkh.) phenotypes using relationship matrices constructed from genome-wide dense single nucleotide polymorphism (SNP) markers; and compare the accuracy of genomic predictions using genomic best linear unbiased prediction models with or without including nonadditive genetic effects. A set of 247 clonally replicated individuals was assessed for six fruit quality traits at two sites, and also genotyped using an Illumina 8K SNP array. Across several fruit quality traits, the additive, dominance, and epistatic effects contributed about 30%, 16%, and 19%, respectively, to the total phenotypic variance. Models ignoring nonadditive components yielded upwardly biased estimates of additive variance (heritability) for all traits in this study. The accuracy of genomic predicted genetic values (GEGV) varied from about 0.15 to 0.35 for various traits, and these were almost identical for models with or without including nonadditive effects. However, models including nonadditive genetic effects further reduced the bias of GEGV. Between-site genotypic correlations were high (>0.85) for all traits, and genotype-site interaction accounted for <10% of the phenotypic variability. The accuracy of prediction, when the validation set was present only at one site, was generally similar for both sites, and varied from about 0.50 to 0.85. The prediction accuracies were strongly influenced by trait heritability, and genetic relatedness between the training and validation families. PMID:26497141
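A common way to add an epistatic term to such models is to take the additive-by-additive relationship matrix as the Hadamard (element-wise) product of G with itself, rescaled so the diagonal averages one. The sketch below shows this standard construction; it is illustrative and not taken from the paper.

```python
import numpy as np

def epistatic_relationship(G):
    """Additive-by-additive epistatic relationships as the Hadamard
    product of G with itself, rescaled so the mean diagonal is 1."""
    GG = G * G                                 # element-wise product
    return GG * (G.shape[0] / np.trace(GG))    # rescale diagonal
```

The resulting matrix plays the same role as G or D in the mixed model, with its own variance component for the epistatic effects.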
Costa, E V; Diniz, D B; Veroneze, R; Resende, M D V; Azevedo, C F; Guimaraes, S E F; Silva, F F; Lopes, P S
2015-01-01
Knowledge of dominance effects should improve genetic evaluations, provide the accurate selection of purebred animals, and enable better breeding strategies, including the exploitation of heterosis in crossbreeds. In this study, we combined genomic and pedigree data to study the relative importance of additive and dominance genetic variation in growth and carcass traits in an F2 pig population. Two GBLUP models were used, a model without a polygenic effect (ADM) and a model with a polygenic effect (ADMP). Additive effects played a greater role in the control of growth and carcass traits than did dominance effects. However, dominance effects were important for all traits, particularly in backfat thickness. The narrow-sense and broad-sense heritability estimates for growth (0.06 to 0.42, and 0.10 to 0.51, respectively) and carcass traits (0.07 to 0.37, and 0.10 to 0.76, respectively) exhibited a wide variation. The inclusion of a polygenic effect in the ADMP model changed the broad-sense heritability estimates only for birth weight and weight at 21 days of age. PMID:26125833
Careau, Vincent; Wolak, Matthew E; Carter, Patrick A; Garland, Theodore
2015-11-22
Given the pace at which human-induced environmental changes occur, a pressing challenge is to determine the speed with which selection can drive evolutionary change. A key determinant of adaptive response to multivariate phenotypic selection is the additive genetic variance-covariance matrix (G). Yet knowledge of G in a population experiencing new or altered selection is not sufficient to predict selection response because G itself evolves in ways that are poorly understood. We experimentally evaluated changes in G when closely related behavioural traits experience continuous directional selection. We applied the genetic covariance tensor approach to a large dataset (n = 17,328 individuals) from a replicated, 31-generation artificial selection experiment that bred mice for voluntary wheel running on days 5 and 6 of a 6-day test. Selection on this subset of G induced proportional changes across the matrix for all 6 days of running behaviour within the first four generations. The changes in G induced by selection resulted in a fourfold slower-than-predicted rate of response to selection. Thus, selection exacerbated constraints within G and limited future adaptive response, a phenomenon that could have profound consequences for populations facing rapid environmental change. PMID:26582016
Yang, Kai; Huang, Shih-Ying; Packard, Nathan J.; Boone, John M.
2010-07-15
Purpose: A simplified linear model approach was proposed to accurately model the response of a flat panel detector used for breast CT (bCT). Methods: Individual detector pixel mean and variance were measured from bCT projection images acquired both in air and with a polyethylene cylinder, with the detector operating in both fixed low gain and dynamic gain mode. Once the coefficients of the linear model are determined, the fractional additive noise can be used as a quantitative metric to evaluate the system's efficiency in utilizing x-ray photons, including the performance of different gain modes of the detector. Results: Fractional additive noise increases as the object thickness increases or as the radiation dose to the detector decreases. For bCT scan techniques on the UC Davis prototype scanner (80 kVp, 500 views total, 30 frames/s), in the low gain mode, additive noise contributes 21% of the total pixel noise variance for a 10 cm object and 44% for a 17 cm object. With the dynamic gain mode, additive noise only represents approximately 2.6% of the total pixel noise variance for a 10 cm object and 7.3% for a 17 cm object. Conclusions: The existence of the signal-independent additive noise is the primary cause for a quadratic relationship between bCT noise variance and the inverse of radiation dose at the detector. With the knowledge of the additive noise contribution to experimentally acquired images, system modifications can be made to reduce the impact of additive noise and improve the quantum noise efficiency of the bCT system.
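The simplified linear model described above can be sketched as a straight-line fit of pixel variance against pixel mean, where the slope captures signal-dependent (quantum) noise and the intercept the signal-independent additive noise. Variable names and the exact fitting choice are assumptions for illustration.

```python
import numpy as np

def fit_noise_model(means, variances):
    """Fit variance = k * mean + sigma_add2 (quantum + additive noise)."""
    k, sigma_add2 = np.polyfit(means, variances, 1)
    return k, sigma_add2

def fractional_additive_noise(mean_signal, k, sigma_add2):
    """Share of total pixel variance contributed by additive noise."""
    return sigma_add2 / (k * mean_signal + sigma_add2)
```

Because the additive term is constant while the quantum term scales with signal, the additive fraction grows as dose to the detector drops, matching the trend reported in the abstract.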
Kirkpatrick, Robert M.; McGue, Matt; Iacono, William G.
2015-01-01
The present study of general cognitive ability attempts to replicate and extend previous investigations of a biometric moderator, family-of-origin socioeconomic status (SES), in a sample of 2,494 pairs of adolescent twins, non-twin biological siblings, and adoptive siblings assessed with individually administered IQ tests. We hypothesized that SES would covary positively with additive-genetic variance and negatively with shared-environmental variance. Important potential confounds unaddressed in some past studies, such as twin-specific effects, assortative mating, and differential heritability by trait level, were found to be negligible. In our main analysis, we compared models by their sample-size corrected AIC, and base our statistical inference on model-averaged point estimates and standard errors. Additive-genetic variance increased with SES—an effect that was statistically significant and robust to model specification. We found no evidence that SES moderated shared-environmental influence. We attempt to explain the inconsistent replication record of these effects, and provide suggestions for future research. PMID:25539975
Reid, Jane M; Arcese, Peter; Keller, Lukas F; Losdat, Sylvain
2014-01-01
Ongoing evolution of polyandry, and consequent extra-pair reproduction in socially monogamous systems, is hypothesized to be facilitated by indirect selection stemming from cross-sex genetic covariances with components of male fitness. Specifically, polyandry is hypothesized to create positive genetic covariance with male paternity success due to inevitable assortative reproduction, driving ongoing coevolution. However, it remains unclear whether such covariances could or do emerge within complex polyandrous systems. First, we illustrate that genetic covariances between female extra-pair reproduction and male within-pair paternity success might be constrained in socially monogamous systems where female and male additive genetic effects can have opposing impacts on the paternity of jointly reared offspring. Second, we demonstrate nonzero additive genetic variance in female liability for extra-pair reproduction and male liability for within-pair paternity success, modeled as direct and associative genetic effects on offspring paternity, respectively, in free-living song sparrows (Melospiza melodia). The posterior mean additive genetic covariance between these liabilities was slightly positive, but the credible interval was wide and overlapped zero. Therefore, although substantial total additive genetic variance exists, the hypothesis that ongoing evolution of female extra-pair reproduction is facilitated by genetic covariance with male within-pair paternity success cannot yet be definitively supported or rejected either conceptually or empirically. PMID:24724612
Lessard, Benoît H; Dang, Jeremy D; Grant, Trevor M; Gao, Dong; Seferos, Dwight S; Bender, Timothy P
2014-09-10
Previous studies have shown that the use of bis(tri-n-hexylsilyl oxide) silicon phthalocyanine ((3HS)2-SiPc) as an additive in a P3HT:PC61BM cascade ternary bulk heterojunction organic photovoltaic (BHJ OPV) device results in an increase in the short-circuit current (J(SC)) and efficiency (η(eff)) of up to 25% and 20%, respectively. Those studies attributed the improved performance to the presence of (3HS)2-SiPc at the BHJ interface. In this study, we explored the molecular characteristics of (3HS)2-SiPc that make it so effective at increasing OPV device J(SC) and η(eff). Initially, we synthesized phthalocyanine-based additives using different core elements, such as germanium and boron instead of silicon, each having frontier orbital energies similar to those of (3HS)2-SiPc, and tested their effect on BHJ OPV device performance. We observed that addition of bis(tri-n-hexylsilyl oxide) germanium phthalocyanine ((3HS)2-GePc) or tri-n-hexylsilyl oxide boron subphthalocyanine (3HS-BsubPc) resulted in a statistically nonsignificant increase in J(SC) and η(eff). Secondly, we kept the silicon phthalocyanine core and substituted the tri-n-hexylsilyl solubilizing groups with pentadecyl phenoxy groups ((PDP)2-SiPc) and tested the resulting dye in a BHJ OPV. While an increase in J(SC) and η(eff) was observed at low (PDP)2-SiPc loadings, the increase was not as large as with (3HS)2-SiPc; (3HS)2-SiPc is therefore a unique additive. During our study, we observed that (3HS)2-SiPc had an extraordinary tendency to crystallize compared with the other compounds in this study and in our general experience. On the basis of this observation, we offer the hypothesis that when (3HS)2-SiPc migrates to the P3HT:PC61BM interface, its unique performance is due not solely to its frontier orbital energies but possibly also to a high driving force for crystallization. PMID:25105425
Berge, Jerica M.; Wall, Melanie; Larson, Nicole; Eisenberg, Marla E.; Loth, Katie A.; Neumark-Sztainer, Dianne
2012-01-01
Objective To examine the unique and additive associations of family functioning and parenting practices with adolescent disordered eating behaviors (i.e., dieting, unhealthy weight control behaviors, binge eating). Methods Data from EAT (Eating and Activity in Teens) 2010, a population-based study assessing eating and activity among racially/ethnically and socio-economically diverse adolescents (n = 2,793; mean age = 14.4, SD = 2.0; age range = 11–19), were used. Logistic regression models were used to examine associations between adolescent dieting and disordered eating behaviors and family functioning and parenting variables, including interactions. All analyses controlled for demographics and body mass index. Results Higher family functioning, parent connection, and parental knowledge of the child's whereabouts (e.g., whom the child is with, what they are doing, where they are) were significantly associated with lower odds of engaging in dieting and disordered eating behaviors in adolescents, while parent psychological control was associated with greater odds of engaging in dieting and disordered eating behaviors. Although the majority of interactions were non-significant, parental psychological control moderated the protective relationship between family functioning and disordered eating behaviors in adolescent girls. Conclusions Clinicians and health care providers may want to discuss the importance of balancing specific parenting behaviors, such as increasing parental knowledge of the child's whereabouts while decreasing psychological control, in order to enhance the protective relationship between family functioning and disordered eating behaviors in adolescents. PMID:23196919
Recio, Sergio A; Iliescu, Adela F; Bergés, Germán D; Gil, Marta; de Brugada, Isabel
2016-04-01
It has been suggested that human perceptual learning could be explained in terms of a better memory encoding of the unique features during intermixed exposure. However, it is possible that a location bias could play a relevant role in explaining previous results of perceptual learning studies using complex visual stimuli. If this were the case, the only relevant feature would be the location, rather than the content, of the unique features. To further explore this possibility, we attempted to replicate the results of Lavis, Kadib, Mitchell, and Hall (2011, Experiment 2), which showed that additional exposure to the unique elements resulted in better discrimination than simple intermixed exposure. We manipulated the location of the unique elements during the additional exposure. In one experiment, they were located in the same position as when they were presented together with the common element. In another experiment, the unique elements were located in the center of the screen, regardless of where they had appeared together with the common element. Our results showed that additional exposure only improved discrimination when the unique elements were presented in the same position as when they were presented together with the common element. The results reported here do not provide support for the explanation of the effects of additional exposure to the unique elements in terms of a better memory encoding and instead suggest an explanation in terms of location bias. PMID:26881901
Kitayama, Takashi; Okamoto, Tadashi; Hill, Richard K.; Kawai, Yasushi; Takahashi, Sho; Yonemori, Shigetomo; Yamamoto, Yukio; Ohe, Kouichi; Uemura, Sakae; Sawada, Seiji
1999-04-16
Zerumbone (1) was isolated from fresh rhizomes of Zingiber zerumbet Smith in yields of 0.3-0.4% by simple steam distillation and recrystallization. 1 accepted 2 equiv of hydrogen cyanide at the C6 and C9 double bonds of the cross-conjugated dienone system to give a mixture of diastereomers 3a-d. In the presence of potassium cyanide, the dominant isomer 3a was isomerized to a mixture of 3a-d. Under controlled conditions, 1 added one mole of methanol regio- and stereoselectively at the C6 double bond to give adduct 4a. With potassium cyanide, 4a was transformed to the mixture of 3a-d. 1 took up one mole of bromine at the C6 double bond to give a diastereomeric mixture of adducts 5a and 5b. Treatment of 5a with potassium cyanide gave a mixture of cyclopropanecarboxylic acids 6a and 6b. This unique ring-contracting cyclopropane formation is pictured as a sequential Favorskii-type reaction. alpha-Cyclodextrin improved the selectivity and yields of the reactions conducted in an aqueous medium. PMID:11674334
NASA Technical Reports Server (NTRS)
Smalheer, C. V.
1973-01-01
The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.
Nuclear Material Variance Calculation
1995-01-01
MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet that significantly reduces the effort required to make the variance and covariance calculations needed to determine the detection sensitivity of a materials accounting system and loss of special nuclear material (SNM). The user is required to enter information into one of four data tables depending on the type of term in the materials balance (MB) equation. The four data tables correspond to input transfers, output transfers, and two types of inventory terms, one for nondestructive assay (NDA) measurements and one for measurements made by chemical analysis. Each data entry must contain an identification number and a short description, as well as values for the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements during an accounting period. The user must also specify the type of error model (additive or multiplicative) associated with each measurement, and possible correlations between transfer terms. Predefined spreadsheet macros are used to perform the variance and covariance calculations for each term based on the corresponding set of entries. MAVARIC has been used for sensitivity studies of chemical separation facilities, fuel processing and fabrication facilities, and gas centrifuge and laser isotope enrichment facilities.
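The additive and multiplicative error models that MAVARIC distinguishes can be sketched for a single materials-balance term. The function below is a hypothetical simplification (the name, interface, and the random/systematic split are our assumptions, not MAVARIC's actual spreadsheet logic): random errors average down with the number of measurements, while a fully correlated systematic error does not.

```python
def term_variance(concentration, bulk_mass, n_measurements,
                  sigma_random, sigma_systematic, model="multiplicative"):
    """Variance of one materials-balance term (illustrative sketch).

    Under the multiplicative model the sigmas are relative standard
    deviations; under the additive model they are absolute (mass units).
    Random errors are independent across measurements; the systematic
    error is fully correlated, so its contribution scales with the total.
    """
    snm_per_item = concentration * bulk_mass      # SNM mass in one item
    total = n_measurements * snm_per_item         # total SNM in the term
    if model == "multiplicative":
        var_random = n_measurements * (snm_per_item * sigma_random) ** 2
        var_systematic = (total * sigma_systematic) ** 2
    else:  # additive: absolute standard deviations per measurement
        var_random = n_measurements * sigma_random ** 2
        var_systematic = (n_measurements * sigma_systematic) ** 2
    return var_random + var_systematic
```

For example, four transfers of items at 5% concentration and 100 kg bulk mass with 1% random and 0.5% systematic relative errors give a term variance of 0.02 kg² under the multiplicative model.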
Cosmology without cosmic variance
Bernstein, Gary M.; Cai, Yan -Chuan
2011-10-01
The growth of structures in the Universe is described by a function G that is predicted by the combination of the expansion history of the Universe and the laws of gravity within it. We examine the improvements in constraints on G that are available from the combination of a large-scale galaxy redshift survey with a weak gravitational lensing survey of background sources. We describe a new combination of such observations that in principle yields a measure of the growth rate that is free of sample variance, i.e. the uncertainty in G can be reduced without bound by increasing the number of redshifts obtained within a finite survey volume. The addition of background weak lensing data to a redshift survey increases information on G by an amount equivalent to a 10-fold increase in the volume of a standard redshift-space distortion measurement - if the lensing signal can be measured to sub-per cent accuracy. This argues that a combined lensing and redshift survey over a common low-redshift volume of the Universe is a more powerful test of general relativity than an isolated redshift survey over a larger volume at high redshift, especially as surveys begin to cover most of the available sky.
Nonlinear Epigenetic Variance: Review and Simulations
ERIC Educational Resources Information Center
Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.
2010-01-01
We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…
Monte Carlo variance reduction
NASA Technical Reports Server (NTRS)
Byrn, N. R.
1980-01-01
Computer program incorporates technique that reduces variance of forward Monte Carlo method for given amount of computer time in determining radiation environment in complex organic and inorganic systems exposed to significant amounts of radiation.
NASA Astrophysics Data System (ADS)
Moster, Benjamin P.; Somerville, Rachel S.; Newman, Jeffrey A.; Rix, Hans-Walter
2011-04-01
Deep pencil beam surveys (<1 deg2) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by "cosmic variance." This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift \\bar{z} and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, \\bar{z}, Δz, and stellar mass m *. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ v /σ v ) is shown to be better than 20%. We find that for GOODS at \\bar{z}=2 and with Δz = 0.5, the relative cosmic variance of galaxies with m *>1011 M sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m * ~ 1010 M sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at \\bar{z}=2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic variance is
Getting around cosmic variance
Kamionkowski, M.; Loeb, A.
1997-10-01
Cosmic microwave background (CMB) anisotropies probe the primordial density field at the edge of the observable Universe. There is a limiting precision ("cosmic variance") with which anisotropies can determine the amplitude of primordial mass fluctuations. This arises because the surface of last scatter (SLS) probes only a finite two-dimensional slice of the Universe. Probing other SLSs observed from different locations in the Universe would reduce the cosmic variance. In particular, the polarization of CMB photons scattered by the electron gas in a cluster of galaxies provides a measurement of the CMB quadrupole moment seen by the cluster. Therefore, CMB polarization measurements toward many clusters would probe the anisotropy on a variety of SLSs within the observable Universe, and hence reduce the cosmic-variance uncertainty. © 1997 The American Physical Society
Videotape Project in Child Variance. Final Report.
ERIC Educational Resources Information Center
Morse, William C.; Smith, Judith M.
The design, production, dissemination, and evaluation of a series of videotaped training packages designed to enable teachers, parents, and paraprofessionals to interpret child variance in light of personal and alternative perspectives of behavior are discussed. The goal of each package is to highlight unique contributions of different theoretical…
Variance Anisotropy in Kinetic Plasmas
NASA Astrophysics Data System (ADS)
Parashar, Tulasi N.; Oughton, Sean; Matthaeus, William H.; Wan, Minping
2016-06-01
Solar wind fluctuations admit well-documented anisotropies of the variance matrix, or polarization, related to the mean magnetic field direction. Typically, one finds a ratio of perpendicular variance to parallel variance of the order of 9:1 for the magnetic field. Here we study the question of whether a kinetic plasma spontaneously generates and sustains parallel variances when initiated with only perpendicular variance. We find that parallel variance grows and saturates at about 5% of the perpendicular variance in a few nonlinear times irrespective of the Reynolds number. For sufficiently large systems (Reynolds numbers) the variance approaches values consistent with the solar wind observations.
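The perpendicular-to-parallel variance ratio quoted above can be computed directly from field samples by projecting fluctuations onto the mean-field direction. This is a minimal sketch (the function name and interface are ours, not the authors'):

```python
import numpy as np

def variance_anisotropy(b, b0):
    """Ratio of perpendicular to parallel fluctuation variance.

    b  : (N, 3) array of magnetic field samples
    b0 : (3,) mean magnetic field direction
    """
    e = b0 / np.linalg.norm(b0)        # unit vector along the mean field
    db = b - b.mean(axis=0)            # fluctuations about the sample mean
    par = db @ e                       # parallel components
    var_par = np.mean(par ** 2)
    # total variance minus parallel variance leaves the perpendicular part
    var_perp = np.mean(np.sum(db ** 2, axis=1)) - var_par
    return var_perp / var_par
```

With fluctuations of variance 9 perpendicular to the mean field and 1 along it, the function returns the 9:1 ratio typical of solar wind magnetic fields.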
Conversations across Meaning Variance
ERIC Educational Resources Information Center
Cordero, Alberto
2013-01-01
Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…
Minimum variance geographic sampling
NASA Technical Reports Server (NTRS)
Terrell, G. R. (Principal Investigator)
1980-01-01
Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to create a minimum variance unbiased estimate population means. The fitting procedure is illustrated from data used to estimate Missouri corn acreage.
ERIC Educational Resources Information Center
Braun, W. John
2012-01-01
The Analysis of Variance is often taught in introductory statistics courses, but it is not clear that students really understand the method. This is because the derivation of the test statistic and p-value requires a relatively sophisticated mathematical background which may not be well-remembered or understood. Thus, the essential concept behind…
Spectral Ambiguity of Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1996-01-01
We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
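For context, the (non-overlapping) Allan variance of fractional-frequency data is a simple second-difference statistic over block averages; a minimal sketch, not tied to the paper's analysis:

```python
import numpy as np

def allan_variance(y, m=1):
    """Non-overlapping Allan variance of fractional-frequency data y
    at averaging factor m (averaging time tau = m * tau0)."""
    y = np.asarray(y, dtype=float)
    n = len(y) // m
    # averages over adjacent, non-overlapping blocks of length m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)
    d = np.diff(ybar)
    return 0.5 * np.mean(d ** 2)
```

The factor 1/2 normalizes the statistic so that, for white frequency noise, the Allan variance at m = 1 equals the ordinary variance of y.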
Biclustering with heterogeneous variance.
Chen, Guanhua; Sullivan, Patrick F; Kosorok, Michael R
2013-07-23
In cancer research, as in all of medicine, it is important to classify patients into etiologically and therapeutically relevant subtypes to improve diagnosis and treatment. One way to do this is to use clustering methods to find subgroups of homogeneous individuals based on genetic profiles together with heuristic clinical analysis. A notable drawback of existing clustering methods is that they ignore the possibility that the variance of gene expression profile measurements can be heterogeneous across subgroups, and methods that do not consider heterogeneity of variance can lead to inaccurate subgroup prediction. Research has shown that hypervariability is a common feature among cancer subtypes. In this paper, we present a statistical approach that can capture both mean and variance structure in genetic data. We demonstrate the strength of our method in both synthetic data and in two cancer data sets. In particular, our method confirms the hypervariability of methylation level in cancer patients, and it detects clearer subgroup patterns in lung cancer data. PMID:23836637
Systems Engineering Programmatic Estimation Using Technology Variance
NASA Technical Reports Server (NTRS)
Mog, Robert A.
2000-01-01
Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "return" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.
Spectral variance of aeroacoustic data
NASA Technical Reports Server (NTRS)
Rao, K. V.; Preisser, J. S.
1981-01-01
An asymptotic technique for estimating the variance of power spectra is applied to aircraft flyover noise data. The results are compared with directly estimated variances and they are in reasonable agreement. The basic time series need not be Gaussian for asymptotic theory to apply. The asymptotic variance formulae can be useful tools both in the design and analysis phase of experiments of this type.
NASA Astrophysics Data System (ADS)
Mahmud, Mohammad Sultan; Cadotte, David W.; Vuong, Barry; Sun, Carry; Luk, Timothy W. H.; Mariampillai, Adrian; Yang, Victor X. D.
2013-05-01
High-resolution mapping of microvasculature has been applied to diverse body systems, including the retinal and choroidal vasculature, cardiac vasculature, the central nervous system, and various tumor models. Many imaging techniques have been developed to address specific research questions, and each has its own merits and drawbacks. Understanding, optimization, and proper implementation of these imaging techniques can significantly improve the data obtained along the spectrum of unique research projects to obtain diagnostic clinical information. We describe the recently developed algorithms and applications of two general classes of microvascular imaging techniques: speckle-variance and phase-variance optical coherence tomography (OCT). We compare and contrast their performance with Doppler OCT and optical microangiography. In addition, we highlight ongoing work in the development of variance-based techniques to further refine the characterization of microvascular networks.
Budget variance analysis using RVUs.
Berlin, M F; Budzynski, M R
1998-01-01
This article details the use of variance analysis as a management tool to evaluate the financial health of the practice. A common financial tool for administrators has been a simple calculation measuring the difference between actual and budgeted financials. Standard cost accounting provides a methodology known as variance analysis to better understand actual vs. budgeted financial streams. The standard variance analysis has been modified by applying relative value units (RVUs) as standards for the practice. PMID:10387247
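The RVU-based approach the article describes can be illustrated with the standard cost-accounting split of total cost variance into a volume component and a rate component, using RVUs as the activity measure. The function and numbers below are illustrative sketches, not taken from the article:

```python
def rvu_variance(actual_cost, budget_cost, actual_rvus, budget_rvus):
    """Split total cost variance into volume and rate components
    using RVUs as the activity standard (illustrative sketch)."""
    budget_rate = budget_cost / budget_rvus      # budgeted cost per RVU
    actual_rate = actual_cost / actual_rvus      # actual cost per RVU
    # cost variance due to doing more/fewer RVUs than budgeted
    volume_variance = (actual_rvus - budget_rvus) * budget_rate
    # cost variance due to each RVU costing more/less than budgeted
    rate_variance = (actual_rate - budget_rate) * actual_rvus
    total_variance = actual_cost - budget_cost
    return volume_variance, rate_variance, total_variance
```

By construction the two components sum exactly to the total variance, so a practice can see whether an unfavorable budget result came from workload or from unit cost.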
Minimum variance beamformer weights revisited.
Moiseev, Alexander; Doesburg, Sam M; Grunau, Ruth E; Ribary, Urs
2015-10-15
Adaptive minimum variance beamformers are widely used analysis tools in MEG and EEG. When the target brain activity presents in the form of spatially localized responses, the procedure usually involves two steps. First, positions and orientations of the sources of interest are determined. Second, the filter weights are calculated and source time courses reconstructed. This last step is the object of the current study. Despite different approaches utilized at the source localization stage, basic expressions for the weights have the same form, dictated by the minimum variance condition. These classic expressions involve covariance matrix of the measured field, which includes contributions from both the sources of interest and the noise background. We show analytically that the same weights can alternatively be obtained, if the full field covariance is replaced with that of the noise, provided the beamformer points to the true sources precisely. In practice, however, a certain mismatch is always inevitable. We show that such mismatch results in partial suppression of the true sources if the traditional weights are used. To avoid this effect, the "alternative" weights based on properly estimated noise covariance should be applied at the second, source time course reconstruction step. We demonstrate mathematically and using simulated and real data that in many situations the alternative weights provide significantly better time course reconstruction quality than the traditional ones. In particular, they a) improve source-level SNR and yield more accurately reconstructed waveforms; b) provide more accurate estimates of inter-source correlations; and c) reduce the adverse influence of the source correlations on the performance of single-source beamformers, which are used most often. Importantly, the alternative weights come at no additional computational cost, as the structure of the expressions remains the same. PMID:26143207
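The point that the "traditional" and "alternative" weights share the same algebraic form can be seen in a minimal numpy sketch of the classic minimum-variance expression, assuming a known lead-field matrix for the targeted sources; only the covariance plugged in differs:

```python
import numpy as np

def mv_weights(L, C):
    """Minimum-variance beamformer weights w = C^{-1} L (L^T C^{-1} L)^{-1}.

    L : (channels, sources) lead-field matrix for the sources of interest.
    C : (channels, channels) covariance; the traditional weights use the
        full data covariance, the alternative weights the noise covariance,
        with the identical expression.
    """
    Cinv_L = np.linalg.solve(C, L)               # C^{-1} L without explicit inverse
    return Cinv_L @ np.linalg.inv(L.T @ Cinv_L)  # (channels, sources) weights
```

Whichever covariance is used, the weights satisfy the unit-gain constraint w.T @ L = I for the modeled sources, which is what makes the two variants directly comparable.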
Decomposition of Variance for Spatial Cox Processes
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
2012-01-01
Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log-linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees. PMID:23599558
Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.
Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S
2016-04-01
Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate whether heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs), cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices, and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance are discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regression. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity. PMID:26995641
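The recommendation above amounts to weighted least squares with weights w_i = 1/sigma_i^2, where the variance grows as a power of the signal. A minimal sketch of a weighted linear calibration under that model (the interface and the use of x as the variance predictor are our assumptions):

```python
import numpy as np

def wls_power_model(x, y, alpha):
    """Weighted least-squares line fit assuming a power variance model,
    var(y) proportional to x**alpha, giving weights w = x**(-alpha)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    w = x ** (-alpha)                            # 1/variance up to a constant
    X = np.column_stack([np.ones_like(x), x])    # design: intercept + slope
    XtW = X.T * w                                # weight each observation
    return np.linalg.solve(XtW @ X, XtW @ y)     # [intercept, slope]
```

Relative to an unweighted fit, the low-concentration points (small variance) dominate the fit, which is what improves detection limits and uncertainty estimates for heteroskedastic instruments.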
A proxy for variance in dense matching over homogeneous terrain
NASA Astrophysics Data System (ADS)
Altena, Bas; Cockx, Liesbet; Goedemé, Toon
2014-05-01
Automation in photogrammetry and avionics has brought highly autonomous UAV mapping solutions to the market. These systems have great potential for geophysical research because of their mobility and ease of operation. Flight planning can be done on site, and orientation parameters are estimated automatically. However, one major drawback remains: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though multi-view geometry adds robustness to the estimation, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, the epipolar line, and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated with respect to the intensity signal of the topography (SNR), and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two case studies were carried out. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, which resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimations of the topography and was compared with the data from the other UAV flight. In the second study, a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed with structure-from-motion software. Although the beach had a very low
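The idea of flagging low-contrast regions can be illustrated with a minimal sketch. This is a hypothetical helper, not the authors' implementation: it assumes the image noise standard deviation is known or has been estimated separately, and scores each patch by the ratio of its intensity spread to that noise level.

```python
import numpy as np

def patch_snr(patch, noise_std):
    """SNR-style quality proxy for a dense-matching patch: the ratio of the
    patch's intensity standard deviation to the noise level. Patches whose
    signal is near the noise floor (low SNR) mark unreliable matches."""
    return np.std(np.asarray(patch, dtype=float)) / noise_std
```

A textureless sand or beach patch scores near zero, so the corresponding surface-model cells can be flagged as interpolated rather than measured.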
Integrating Variances into an Analytical Database
NASA Technical Reports Server (NTRS)
Sanchez, Carlos
2010-01-01
For this project, I enrolled in numerous SATERN courses that taught the basics of database programming, including Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make them easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated most often. If an entry was repeated several times in the database, that would mean the rule or requirement targeted by that variance had already been bypassed many times, so the requirement may not really be needed and should instead be changed to allow the variance's conditions permanently. The project was not restricted to the design and development of the database system; it also involved exporting the data from the database to a different format (e.g., Excel or Word) so it could be analyzed more simply. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by category or type and helped speed up searches. Once my work with the database was done, the records of variances could be arranged in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part of what contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and to go more in depth into topics I already knew about.
Dominance Genetic Variance for Traits Under Directional Selection in Drosophila serrata
Sztepanacz, Jacqueline L.; Blows, Mark W.
2015-01-01
In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait–fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. PMID:25783700
A Computer Program to Determine Reliability Using Analysis of Variance
ERIC Educational Resources Information Center
Burns, Edward
1976-01-01
A computer program, written in Fortran IV, is described which assesses reliability by using analysis of variance. It produces a complete analysis of variance table in addition to reliability coefficients for unadjusted and adjusted data as well as the intraclass correlation for m subjects and n items. (Author)
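The ANOVA route to reliability that such a program implements is compact. Below is a minimal Python sketch (not Burns's Fortran code) of Hoyt's method: a two-way subjects-by-items ANOVA without replication, with reliability computed from the subjects and residual mean squares. This quantity is algebraically equivalent to Cronbach's alpha.

```python
import numpy as np

def hoyt_reliability(scores):
    """Hoyt's ANOVA reliability for an n-subjects x k-items score matrix.

    Partitions total variation into subjects, items, and residual sums of
    squares, then returns (MS_subjects - MS_residual) / MS_subjects.
    """
    X = np.asarray(scores, dtype=float)
    n, k = X.shape
    grand = X.mean()
    ss_total = ((X - grand) ** 2).sum()
    ss_subj = k * ((X.mean(axis=1) - grand) ** 2).sum()
    ss_items = n * ((X.mean(axis=0) - grand) ** 2).sum()
    ss_resid = ss_total - ss_subj - ss_items
    ms_subj = ss_subj / (n - 1)
    ms_resid = ss_resid / ((n - 1) * (k - 1))
    return (ms_subj - ms_resid) / ms_subj
```

The intraclass correlation the abstract mentions is built from the same mean squares, so a complete ANOVA table gives both statistics essentially for free.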
Estimation of Variance Components of Quantitative Traits in Inbred Populations
Abney, Mark; McPeek, Mary Sara; Ober, Carole
2000-01-01
Use of variance-component estimation for mapping of quantitative-trait loci in humans is a subject of great current interest. When only trait values, not genotypic information, are considered, variance-component estimation can also be used to estimate heritability of a quantitative trait. Inbred pedigrees present special challenges for variance-component estimation. First, there are more variance components to be estimated in the inbred case, even for a relatively simple model including additive, dominance, and environmental effects. Second, more identity coefficients need to be calculated from an inbred pedigree in order to perform the estimation, and these are computationally more difficult to obtain in the inbred than in the outbred case. As a result, inbreeding effects have generally been ignored in practice. We describe here the calculation of identity coefficients and estimation of variance components of quantitative traits in large inbred pedigrees, using the example of HDL in the Hutterites. We use a multivariate normal model for the genetic effects, extending the central-limit theorem of Lange to allow for both inbreeding and dominance under the assumptions of our variance-component model. We use simulated examples to give an indication of under what conditions one has the power to detect the additional variance components and to examine their impact on variance-component estimation. We discuss the implications for mapping and heritability estimation by use of variance components in inbred populations. PMID:10677322
Latitude dependence of eddy variances
NASA Technical Reports Server (NTRS)
Bowman, Kenneth P.; Bell, Thomas L.
1987-01-01
The eddy variance of a meteorological field must tend to zero at high latitudes due solely to the nature of spherical polar coordinates. The zonal averaging operator defines a length scale: the circumference of the latitude circle. When the circumference of the latitude circle is greater than the correlation length of the field, the eddy variance from transient eddies is the result of differences between statistically independent regions. When the circumference is less than the correlation length, the eddy variance is computed from points that are well correlated with each other, and so is reduced. The expansion of a field into zonal Fourier components is also influenced by the use of spherical coordinates. As is well known, a phenomenon of fixed wavelength will have different zonal wavenumbers at different latitudes. Simple analytical examples of these effects are presented along with an observational example from satellite ozone data. It is found that geometrical effects can be important even in middle latitudes.
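The geometric effect described here, a feature of fixed physical wavelength mapping to a latitude-dependent zonal wavenumber, can be illustrated directly. The sketch below uses the circumference of the latitude circle, C(φ) = 2πa·cos(φ), with the Earth's mean radius as an assumed value.

```python
import math

EARTH_RADIUS_KM = 6371.0  # assumed mean Earth radius

def latitude_circumference_km(lat_deg):
    """Circumference of the latitude circle, the length scale set by zonal averaging."""
    return 2 * math.pi * EARTH_RADIUS_KM * math.cos(math.radians(lat_deg))

def zonal_wavenumber(wavelength_km, lat_deg):
    """Zonal wavenumber of a feature of fixed physical wavelength at a given
    latitude: the number of wavelengths fitting around the latitude circle."""
    return latitude_circumference_km(lat_deg) / wavelength_km
```

At 60 degrees latitude the circumference is half its equatorial value, so a phenomenon of fixed wavelength appears at half the zonal wavenumber it has at the equator, which is the spectral counterpart of the variance reduction discussed in the abstract.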
Modeling variance structure of body shape traits of Lipizzan horses.
Kaps, M; Curik, I; Baban, M
2010-09-01
Heterogeneity of variance of growth traits over age is a common issue in estimating genetic parameters and is addressed in this study by selecting appropriate variance structure models for additive genetic and environmental variances. The modeling and partitioning of these variances in small data sets were demonstrated on Lipizzan horses. The following traits were analyzed: withers height, chest girth, and cannon bone circumference. The measurements were taken at birth and at approximately 6, 12, 24, and 36 mo of age of 660 Lipizzan horses born in Croatia between 1948 and 2000. The corresponding pedigree file consisted of 1,458 horses. Sex, age of dam, and stud-year-season interaction were considered fixed effects; additive genetic and permanent environment effects were defined as random. Linear adjustments of age at measuring were done within measuring groups. Maternal effects were included only for measurements taken at birth and at 6 mo. Additive genetic variance structures were modeled by using uniform structures or structures based on polynomial random regression. Environmental variance structures were modeled by using one of the following models: unstructured, exponential, Gaussian, or combinations of identity or diagonal with structures based on polynomial random regression. The parameters were estimated by using REML. Comparison and fit of the models were assessed by using Akaike and Bayesian information criteria, and by graphically checking the adequacy of the shape of the overall (phenotypic) and component (additive genetic and environmental) variance functions. The best overall fit was obtained from models with unstructured error variance. Compared with the model with uniform additive genetic variance, models with structures based on random regression only slightly improved the overall fit. Exponential and Gaussian models were generally not suitable because they do not adequately accommodate heterogeneity of variance. Using the unstructured
The Variance Reaction Time Model
ERIC Educational Resources Information Center
Sikstrom, Sverker
2004-01-01
The variance reaction time model (VRTM) is proposed to account for various recognition data on reaction time, the mirror effect, receiver-operating-characteristic (ROC) curves, etc. The model is based on simple and plausible assumptions within a neural network: VRTM is a two-layer neural network where one layer represents items and one layer…
Analysis of Variance: Variably Complex
ERIC Educational Resources Information Center
Drummond, Gordon B.; Vowler, Sarah L.
2012-01-01
These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution of…
Variance of a Few Observations
ERIC Educational Resources Information Center
Joarder, Anwar H.
2009-01-01
This article demonstrates that the variance of three or four observations can be expressed in terms of the range and the first order differences of the observations. A more general result, which holds for any number of observations, is also stated.
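One known form of the three-observation identity, verified numerically below (the article's exact statement may differ), expresses the sample variance of ordered observations x1 ≤ x2 ≤ x3 in terms of the range r = x3 − x1 and the first-order differences d1 = x2 − x1 and d2 = x3 − x2:

```python
def variance_from_range(x1, x2, x3):
    """Sample variance (n-1 denominator) of three ordered observations
    x1 <= x2 <= x3, using only the range and first-order differences:
        s^2 = (r**2 - d1 * d2) / 3
    """
    d1, d2 = x2 - x1, x3 - x2
    r = x3 - x1
    return (r * r - d1 * d2) / 3.0
```

For example, (0, 2, 3) has range 3 and differences 2 and 1, giving s² = (9 − 2)/3 = 7/3, which matches the direct computation.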
Relating the Hadamard Variance to MCS Kalman Filter Clock Estimation
NASA Technical Reports Server (NTRS)
Hutsell, Steven T.
1996-01-01
The Global Positioning System (GPS) Master Control Station (MCS) currently makes significant use of the Allan Variance. This two-sample variance has proven to be a handy, readily understood tool, both for time-domain analysis of GPS cesium frequency standards and for fine-tuning the MCS's state estimation of these atomic clocks. However, the Allan Variance does not explicitly converge for noise types with alpha less than or equal to minus 3, and it can be greatly affected by frequency drift. Because GPS rubidium frequency standards exhibit non-trivial aging and aging-noise characteristics, basic Allan Variance analysis must be augmented in order to (a) compensate for dynamic frequency drift and (b) characterize two additional noise types, specifically alpha = minus 3 and alpha = minus 4. As the GPS program progresses, we will utilize a larger percentage of rubidium frequency standards than ever before; hence, GPS rubidium clock characterization will require more attention than ever before. The three-sample variance, commonly referred to as a renormalized Hadamard Variance, is unaffected by linear frequency drift, converges for alpha greater than minus 5, and thus has utility for modeling noise in GPS rubidium frequency standards. This paper demonstrates the potential of Hadamard Variance analysis in GPS operations and presents an equation that relates the Hadamard Variance to the MCS's Kalman filter process noises.
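The drift immunity of the three-sample variance is easy to see in a minimal sketch (illustrative textbook normalizations, not the MCS's implementation): the Allan variance is built from first differences of fractional-frequency averages, the Hadamard variance from second differences, and a second difference annihilates any linear drift.

```python
import numpy as np

def allan_variance(y):
    """Two-sample (Allan) variance of fractional-frequency data at the base tau."""
    d = np.diff(y)                 # first differences of adjacent averages
    return 0.5 * np.mean(d ** 2)

def hadamard_variance(y):
    """Three-sample (Hadamard) variance at the base tau. The second
    difference y[i+2] - 2*y[i+1] + y[i] cancels linear frequency drift."""
    d2 = np.diff(y, n=2)
    return np.mean(d2 ** 2) / 6.0
```

Feeding a pure linear drift into both estimators shows the contrast: the Allan variance reports a spurious noise level proportional to the drift rate squared, while the Hadamard variance reports zero.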
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application. Contractors desiring a variance from a safety and health standard, or portion thereof, may submit a...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Variances. 307.22 Section 307.22....22 Variances. EDA may approve variances to the requirements contained in this subpart, provided such variances: (a) Are consistent with the goals of the Economic Adjustment Assistance program and with an...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 7 2010-07-01 2010-07-01 false Variances. 1920.2 Section 1920.2 Labor Regulations Relating...' COMPENSATION ACT § 1920.2 Variances. (a) Variances from standards in parts 1915 through 1918 of this chapter may be granted in the same circumstances in which variances may be granted under sections 6(b)...
Juárez-Pérez, Emilio José; Aragoni, M Carla; Arca, Massimiliano; Blake, Alexander J; Devillanova, Francesco A; Garau, Alessandra; Isaia, Francesco; Lippolis, Vito; Núñez, Rosario; Pintus, Anna; Wilson, Claire
2011-10-01
The reactivity of the imidazoline-2-selone derivatives 1,1'-methylenebis(3-methyl-4-imidazoline-2-selone) (D1) and 1,2-ethylenebis(3-methyl-4-imidazoline-2-selone) (D2) towards the interhalogens IBr and ICl has been investigated in the solid state with the aim of synthesising "T-shaped" hypervalent chalcogen compounds featuring the extremely rare linear asymmetric I-E-X moieties (E=S, Se; X=Br, Cl). X-ray diffraction analysis and FT-Raman measurements provided a clear indication of the presence in the compounds obtained of discrete molecular adducts containing I-Se-Br and I-Se-Cl hypervalent moieties following a unique oxidative addition of interhalogens IX (X=Cl, Br) to the organoselone ligands. In all asymmetric hypervalent systems isolated, a strong polarisation was observed, with longer bond lengths at the selenium atom involving the most electronegative halogen. A topological electron density analysis on model compounds based on the quantum theory of atoms-in-molecules (QTAIM) and electron localisation function (ELF) established the three-centre-four-electron (3c-4e) nature of the bonding in these very polarised selenium hypervalent systems and new criteria were suggested to define and ascertain the hypervalency of the selenium atoms in these and related halogen and interhalogen adducts. PMID:21953928
Variance decomposition in stochastic simulators
Le Maître, O. P.; Knio, O. M.; Moraes, A.
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
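The variance decomposition exploited here rests on the Sobol-Hoeffding decomposition. A generic pick-freeze estimator of first-order Sobol indices is sketched below as an illustration of that general tool, not of the paper's Poisson-process reformulation; the test function and sample sizes are arbitrary choices.

```python
import numpy as np

def sobol_first_order(f, d, n, rng):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices of
    f: R^d -> R with independent standard-normal inputs."""
    A = rng.standard_normal((n, d))
    B = rng.standard_normal((n, d))
    yA, yB = f(A), f(B)
    total_var = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        C = B.copy()
        C[:, i] = A[:, i]          # freeze input i from sample A
        yC = f(C)
        S[i] = (np.mean(yA * yC) - np.mean(yA) * np.mean(yB)) / total_var
    return S
```

For the additive model y = x1 + 2*x2 with independent unit-variance inputs, the exact indices are 1/5 and 4/5, so the decomposition attributes four times as much output variance to the second input.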
Estimating the Modified Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, Charles
1995-01-01
The third-difference approach to modified Allan variance (MVAR) leads to a tractable formula for a measure of MVAR estimator confidence, the equivalent degrees of freedom (edf), in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. A simple approximation for edf is given, and its errors are tabulated. A theorem allowing conservative estimates of edf in the presence of compound noise processes is given.
Neutrino mass without cosmic variance
NASA Astrophysics Data System (ADS)
LoVerde, Marilena
2016-05-01
Measuring the absolute scale of the neutrino masses is one of the most exciting opportunities available with near-term cosmological data sets. Two quantities that are sensitive to neutrino mass, scale-dependent halo bias b (k ) and the linear growth parameter f (k ) inferred from redshift-space distortions, can be measured without cosmic variance. Unlike the amplitude of the matter power spectrum, which always has a finite error, the error on b (k ) and f (k ) continues to decrease as the number density of tracers increases. This paper presents forecasts for statistics of galaxy and lensing fields that are sensitive to neutrino mass via b (k ) and f (k ). The constraints on neutrino mass from the auto- and cross-power spectra of spectroscopic and photometric galaxy samples are weakened by scale-dependent bias unless a very high density of tracers is available. In the high-density limit, using multiple tracers allows cosmic variance to be beaten, and the forecasted errors on neutrino mass shrink dramatically. In practice, beating the cosmic-variance errors on neutrino mass with b (k ) will be a challenge, but this signal is nevertheless a new probe of neutrino effects on structure formation that is interesting in its own right.
Partitioning Predicted Variance into Constituent Parts: A Primer on Regression Commonality Analysis.
ERIC Educational Resources Information Center
Amado, Alfred J.
Commonality analysis is a method of decomposing the R squared in a multiple regression analysis into the proportion of explained variance of the dependent variable associated with each independent variable uniquely and the proportion of explained variance associated with the common effects of one or more independent variables in various…
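For two predictors, commonality analysis reduces to simple differences of R² values: each unique component is the full-model R² minus the R² of the other predictor alone, and the common component is what remains. The sketch below uses hypothetical helper names and an ordinary least-squares R²; by construction the unique and common parts sum to the full-model R².

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS regression of y on the columns of X plus an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

def commonality_two_predictors(x1, x2, y):
    """Unique and common variance components for a two-predictor regression."""
    r2_1 = r_squared(np.column_stack([x1]), y)
    r2_2 = r_squared(np.column_stack([x2]), y)
    r2_12 = r_squared(np.column_stack([x1, x2]), y)
    unique1 = r2_12 - r2_2
    unique2 = r2_12 - r2_1
    common = r2_1 + r2_2 - r2_12
    return unique1, unique2, common
```

With orthogonal predictors the common component is zero and each predictor's contribution is entirely unique; correlated predictors shift explained variance into the common term.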
Increasing Genetic Variance of Body Mass Index during the Swedish Obesity Epidemic
Rokholm, Benjamin; Silventoinen, Karri; Tynelius, Per; Gamborg, Michael; Sørensen, Thorkild I. A.; Rasmussen, Finn
2011-01-01
Background and Objectives There is no doubt that the dramatic worldwide increase in obesity prevalence is due to changes in environmental factors. However, twin and family studies suggest that genetic differences are responsible for the major part of the variation in adiposity within populations. Recent studies show that the genetic effects on body mass index (BMI) may be stronger when combined with presumed risk factors for obesity. We tested the hypothesis that the genetic variance of BMI has increased during the obesity epidemic. Methods The data comprised height and weight measurements of 1,474,065 Swedish conscripts at age 18–19 y born between 1951 and 1983. The data were linked to the Swedish Multi-Generation Register and the Swedish Twin Register from which 264,796 full-brother pairs, 1,736 monozygotic (MZ) and 1,961 dizygotic (DZ) twin pairs were identified. The twin pairs were analysed to identify the most parsimonious model for the genetic and environmental contribution to BMI variance. The full-brother pairs were subsequently divided into subgroups by year of birth to investigate trends in the genetic variance of BMI. Results The twin analysis showed that BMI variation could be explained by additive genetic and environmental factors not shared by co-twins. On the basis of the analyses of the full-siblings, the additive genetic variance of BMI increased from 4.3 [95% CI 4.04–4.53] to 7.9 [95% CI 7.28–8.54] within the study period, as did the unique environmental variance, which increased from 1.4 [95% CI 1.32–1.48] to 2.0 [95% CI 1.89–2.22]. The BMI heritability increased from 75% to 78.8%. Conclusion The results confirm the hypothesis that the additive genetic variance of BMI has increased strongly during the obesity epidemic. This suggests that the obesogenic environment has enhanced the influence of adiposity related genes. PMID:22087252
A Wavelet Perspective on the Allan Variance.
Percival, Donald B
2016-04-01
The origins of the Allan variance trace back 50 years to two seminal papers, one by Allan (1966) and the other by Barnes (1966). Since then, the Allan variance has played a leading role in the characterization of high-performance time and frequency standards. Wavelets first arose in the early 1980s in the geophysical literature, and the discrete wavelet transform (DWT) became prominent in the late 1980s in the signal processing literature. Flandrin (1992) briefly documented a connection between the Allan variance and a wavelet transform based upon the Haar wavelet. Percival and Guttorp (1994) noted that one popular estimator of the Allan variance, the maximal overlap estimator, can be interpreted in terms of a version of the DWT now widely referred to as the maximal overlap DWT (MODWT). In particular, when the MODWT is based on the Haar wavelet, the variance of the resulting wavelet coefficients (the wavelet variance) is identical to the Allan variance when the latter is multiplied by one-half. The theory behind the wavelet variance can thus deepen our understanding of the Allan variance. In this paper, we review basic wavelet variance theory with an emphasis on the Haar-based wavelet variance and its connection to the Allan variance. We then note that estimation theory for the wavelet variance offers a means of constructing asymptotically correct confidence intervals (CIs) for the Allan variance without reverting to the common practice of specifying a power-law noise type a priori. We also review recent work on specialized estimators of the wavelet variance that are of interest when some observations are missing (gappy data) or in the presence of contamination (rogue observations or outliers). It is a simple matter to adapt these estimators to become estimators of the Allan variance. Finally, we note that wavelet variances based upon wavelets other than the Haar offer interesting generalizations of the Allan variance. PMID:26529757
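The Haar-Allan connection noted above is easy to check numerically. The sketch below assumes the unit-scale MODWT Haar coefficient for fractional-frequency data y takes the form w_t = (y_t - y_{t-1})/2; with that convention the wavelet variance equals exactly half the Allan variance.

```python
import numpy as np

def allan_variance(y):
    """Two-sample (Allan) variance of fractional-frequency data at the base tau."""
    d = np.diff(y)
    return 0.5 * np.mean(d ** 2)

def haar_modwt_wavelet_variance(y):
    """Unit-scale MODWT Haar wavelet variance: mean square of the
    coefficients w_t = (y_t - y_{t-1}) / 2."""
    w = np.diff(y) / 2.0
    return np.mean(w ** 2)
```

The relation holds term by term, since each squared Haar coefficient is one quarter of the squared first difference while each Allan summand is one half of it.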
Estimating the Modified Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, Charles
1995-01-01
A paper at the 1992 FCS showed how to express the modified Allan variance (mvar) in terms of the third difference of the cumulative sum of time residuals. Although this reformulated definition was presented merely as a computational trick for simplifying the calculation of mvar estimates, it has since turned out to be a powerful theoretical tool for deriving the statistical quality of those estimates in terms of their equivalent degrees of freedom (edf), defined for an estimator V by edf(V) = 2(E[V])^2 / var(V). Confidence intervals for mvar can then be constructed from levels of the appropriate chi-squared distribution.
Some Uniqueness Results for PARAFAC2.
ERIC Educational Resources Information Center
ten Berge, Jos M. F.; Kiers, Henk A. L.
1996-01-01
Some uniqueness properties are presented for the PARAFAC2 model for covariance matrices, focusing on uniqueness in the rank two case of PARAFAC2. PARAFAC2 is shown to be usually unique with four matrices, but not unique with three unless a certain additional assumption is introduced. (SLD)
Variance analysis. Part I, Extending flexible budget variance analysis to acuity.
Finkler, S A
1991-01-01
The author reviews the concepts of flexible budget variance analysis, including the price, quantity, and volume variances generated by that technique. He also introduces the concept of acuity variance and provides direction on how such a variance measure can be calculated. Part II in this two-part series on variance analysis will look at how personal computers can be useful in the variance analysis process. PMID:1870002
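The price, quantity, and volume variances reviewed here follow standard flexible-budget conventions. The sketch below uses one common set of sign conventions (the article's exact definitions may differ); the three components sum to the total variance between actual cost and the static budget.

```python
def flexible_budget_variances(std_price, std_qty_per_unit, budget_volume,
                              actual_price, actual_qty, actual_volume):
    """Classic flexible-budget decomposition (illustrative conventions):
    price variance    = (actual_price - std_price) * actual_qty
    quantity variance = (actual_qty - std_qty_per_unit * actual_volume) * std_price
    volume variance   = (actual_volume - budget_volume) * std_qty_per_unit * std_price
    Positive values are unfavorable (cost exceeds the standard)."""
    price_var = (actual_price - std_price) * actual_qty
    qty_var = (actual_qty - std_qty_per_unit * actual_volume) * std_price
    vol_var = (actual_volume - budget_volume) * std_qty_per_unit * std_price
    return price_var, qty_var, vol_var
```

An acuity variance, as introduced in the article, would further split the quantity variance by replacing the single standard quantity per unit with acuity-weighted standards, a refinement not shown in this sketch.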
40 CFR 52.2183 - Variance provision.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions in Chapter 74:26:01:31.01 of the South Dakota Air...
40 CFR 52.2183 - Variance provision.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions in Chapter 74:26:01:31.01 of the South Dakota Air...
40 CFR 52.2183 - Variance provision.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions in Chapter 74:26:01:31.01 of the South Dakota Air...
Speed Variance and Its Influence on Accidents.
ERIC Educational Resources Information Center
Garber, Nicholas J.; Gadirau, Ravi
A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) RADIOLOGICAL HEALTH PERFORMANCE STANDARDS FOR ELECTRONIC PRODUCTS: GENERAL General Provisions § 1010.4 Variances. (a) Criteria for variances. (1) Upon application by...
40 CFR 52.2183 - Variance provision.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 4 2010-07-01 2010-07-01 false Variance provision. 52.2183 Section 52...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions in Chapter 74:26:01:31.01 of the South Dakota Air...
40 CFR 142.41 - Variance request.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Variance request. 142.41 Section 142...) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of...
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application..., practices, means, methods, operations, or processes used or proposed to be used by the contractor; and...
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application..., practices, means, methods, operations, or processes used or proposed to be used by the contractor; and...
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 4 2013-01-01 2013-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application..., practices, means, methods, operations, or processes used or proposed to be used by the contractor; and...
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application..., practices, means, methods, operations, or processes used or proposed to be used by the contractor; and...
40 CFR 52.2183 - Variance provision.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 4 2011-07-01 2011-07-01 false Variance provision. 52.2183 Section 52.2183 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions...
40 CFR 142.43 - Disposition of a variance request.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) Compliance (including increments of progress) by the public water system with each contaminant level... control measures as the Administrator may require for each contaminant covered by the variance. (d) The... the Administrator. (f) The proposed schedule for implementation of additional interim control...
Simulation testing of unbiasedness of variance estimators
Link, W.A.
1993-01-01
In this article I address the evaluation of estimators of variance for parameter estimates. Given an unbiased estimator X of a parameter 0, and an estimator V of the variance of X, how does one test (via simulation) whether V is an unbiased estimator of the variance of X? The derivation of the test statistic illustrates the need for care in substituting consistent estimators for unknown parameters.
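The simulation test described in this abstract can be sketched as follows; the choice of X as a sample mean, V = s²/n, and the exponential data model are assumptions for illustration, not the article's setup. The standard error of the comparison must account for the sampling noise in both quantities, which is the kind of care the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(1)

# Is V an unbiased estimator of Var(X)?  Here X is the sample mean of
# n observations and V = s^2 / n, simulated over many replicates.
reps, n = 20_000, 25
data = rng.exponential(scale=2.0, size=(reps, n))

x = data.mean(axis=1)             # replicate parameter estimates
v = data.var(axis=1, ddof=1) / n  # their variance estimates

# Compare mean(V) with the empirical variance of X.  The standard error
# of the difference includes noise from BOTH estimated quantities.
diff = v.mean() - x.var(ddof=1)
se = np.sqrt(v.var(ddof=1) / reps             # noise in mean(V)
             + 2 * x.var(ddof=1)**2 / reps)   # noise in var(X), X approx. normal
z = diff / se
print(abs(z) < 4)  # no evidence of bias
```

Dropping the second term under the square root, i.e., treating the empirical variance of X as if it were known exactly, inflates the test statistic and can falsely reject unbiasedness.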
Code of Federal Regulations, 2014 CFR
2014-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2010 CFR
2010-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2011 CFR
2011-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2013 CFR
2013-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2012 CFR
2012-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Sensor/Actuator Selection for the Constrained Variance Control Problem
NASA Technical Reports Server (NTRS)
Delorenzo, M. L.; Skelton, R. E.
1985-01-01
The problem of designing a linear controller for systems subject to inequality variance constraints is considered. A quadratic penalty function approach is used to yield a linear controller. Both the weights in the quadratic penalty function and the locations of sensors and actuators are selected by successive approximations to obtain an optimal design which satisfies the input/output variance constraints. The method is applied to NASA's 64-meter Hoop-Column Space Antenna for satellite communications. In addition to the solution for the control law, the main feature of these results is the systematic determination of actuator design requirements which allow the given input/output performance constraints to be satisfied.
Components of genetic variance for plant survival and vigor of apple trees.
Watkins, R; Spangelo, L P
1970-01-01
The additive and non-additive variance components were estimated from progenies derived from two samples of parents (representing a northern continental type climate) for five factors relating to plant survival and two composites of the factors. It was found that additive variance made up 90 and 100%, 91 and 100%, 91 and 100%, 100 and 100%, 82 and 59%, 91 and 100%, and 90 and 100% of the total genetic variance for leafing-out date, leafing-out percent, tip injury, stem damage, root damage, a shoot composite, and a shoot-root composite for the two samples respectively. A third sample had 100% additive variance for plant height while, in contrast, a sample of rootstocks, differing from each other in their ability to dwarf grafted scions, had approximately 50-70% additive variance for plant height. It was shown that breeding progress for both winter survival and plant height could be achieved by exploiting the additive variance, the total genetic variance, or (where progenies were the selection unit rather than individuals) by progeny selection. By exploiting the additive variance, it should be possible to improve plant survival and change plant height in each of several successive generations. It is predicted that (with the exception of selection for vigor in a population having a range of dwarfing abilities) potential parents could be efficiently screened phenotypically and so obviate the need for genotypic evaluation. A total of 9180 progeny trees were involved in the analyses considered in this paper. PMID:24435802
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Variances. 654.402 Section 654.402 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR SPECIAL RESPONSIBILITIES OF THE EMPLOYMENT SERVICE SYSTEM Housing for Agricultural Workers Purpose and Applicability § 654.402 Variances. (a) An employer may apply for a...
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Variances. 654.402 Section 654.402 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR SPECIAL RESPONSIBILITIES OF THE EMPLOYMENT SERVICE SYSTEM Housing for Agricultural Workers Purpose and Applicability § 654.402 Variances....
40 CFR 142.41 - Variance request.
Code of Federal Regulations, 2012 CFR
2012-07-01
... nature and duration of variance requested. (b) Relevant analytical results of water quality sampling of... relevant to ability to comply. (3) Analytical results of raw water quality relevant to the variance request... request made under § 142.40(b), a statement that the system will perform monitoring and other...
40 CFR 142.41 - Variance request.
Code of Federal Regulations, 2013 CFR
2013-07-01
... nature and duration of variance requested. (b) Relevant analytical results of water quality sampling of... relevant to ability to comply. (3) Analytical results of raw water quality relevant to the variance request... request made under § 142.40(b), a statement that the system will perform monitoring and other...
40 CFR 142.41 - Variance request.
Code of Federal Regulations, 2014 CFR
2014-07-01
... nature and duration of variance requested. (b) Relevant analytical results of water quality sampling of... relevant to ability to comply. (3) Analytical results of raw water quality relevant to the variance request... request made under § 142.40(b), a statement that the system will perform monitoring and other...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) RADIOLOGICAL HEALTH... and Radiological Health, Food and Drug Administration, may grant a variance from one or...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) RADIOLOGICAL HEALTH... and Radiological Health, Food and Drug Administration, may grant a variance from one or...
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) RADIOLOGICAL HEALTH... and Radiological Health, Food and Drug Administration, may grant a variance from one or...
On Some Representations of Sample Variance
ERIC Educational Resources Information Center
Joarder, Anwar H.
2002-01-01
The usual formula for variance, which depends on rounding off the sample mean, lacks precision, especially when computer programs are used for the calculation. The well-known simplification of the total sums of squares does not always provide a benefit. Since the variance of two observations is easily calculated without the use of a sample mean, and the…
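The mean-free representation alluded to in this abstract rests on the pairwise-difference identity s² = Σ_{i<j}(x_i − x_j)² / (n(n−1)), which never forms the sample mean. A short numerical check (with made-up data):

```python
import numpy as np
from itertools import combinations

# Pairwise-difference identity for the sample variance:
#   s^2 = sum over pairs (x_i - x_j)^2 / (n * (n - 1))
x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
n = len(x)

pairwise = sum((a - b) ** 2 for a, b in combinations(x, 2)) / (n * (n - 1))
print(np.isclose(pairwise, x.var(ddof=1)))  # True
```

Because no intermediate mean is subtracted, the pairwise form avoids the rounding-of-the-mean issue the abstract raises, at the cost of O(n²) operations.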
Code of Federal Regulations, 2010 CFR
2010-01-01
... Procedures § 1021.343 Variances. (a) Emergency actions. DOE may take an action without observing all provisions of this part or the CEQ Regulations, in accordance with 40 CFR 1506.11, in emergency situations... 10 Energy 4 2010-01-01 2010-01-01 false Variances. 1021.343 Section 1021.343 Energy DEPARTMENT...
Code of Federal Regulations, 2010 CFR
2010-04-01
... 18 Conservation of Power and Water Resources 2 2010-04-01 2010-04-01 false Variances. 1304.408 Section 1304.408 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY APPROVAL OF... § 1304.408 Variances. The Vice President or the designee thereof is authorized, following...
Measurement of Allan variance and phase noise at fractions of a millihertz
NASA Technical Reports Server (NTRS)
Conroy, Bruce L.; Le, Duc
1990-01-01
Although the measurement of Allan variance of oscillators is well documented, there is a need for a simplified system for finding the degradation of phase noise and Allan variance step-by-step through a system. This article describes an instrumentation system for simultaneous measurement of additive phase noise and degradation in Allan variance through a transmitter system. Also included are measurements of a 20-kW X-band transmitter showing the effect of adding a pass tube regulator.
Portfolio optimization with mean-variance model
NASA Astrophysics Data System (ADS)
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization. The mean-variance model is an optimization model that aims to minimize the portfolio risk, which is the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consists of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition of the stocks is different. Moreover, investors can get the return at the minimum level of risk with the constructed optimal mean-variance portfolio.
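The mean-variance model described here minimizes w'Σw subject to w'μ equalling the target return and the weights summing to one. A minimal sketch with invented three-asset data (not the study's 20 FBMKLCI stocks) solves the Lagrangian first-order conditions as one linear system, allowing short sales:

```python
import numpy as np

# Markowitz mean-variance sketch: minimize w' Sigma w
# subject to w' mu = target and sum(w) = 1.
mu = np.array([0.002, 0.004, 0.003])            # weekly mean returns (illustrative)
Sigma = np.array([[4.0, 1.2, 0.8],
                  [1.2, 5.0, 1.0],
                  [0.8, 1.0, 3.0]]) * 1e-4      # covariance of weekly returns
target = 0.003

# KKT system: [2*Sigma  mu  1; mu' 0 0; 1' 0 0] [w; lam; gam] = [0; target; 1]
n = len(mu)
A = np.zeros((n + 2, n + 2))
A[:n, :n] = 2 * Sigma
A[:n, n], A[n, :n] = mu, mu
A[:n, n + 1], A[n + 1, :n] = 1.0, 1.0
b = np.concatenate([np.zeros(n), [target, 1.0]])

w = np.linalg.solve(A, b)[:n]
print(w.round(3), float(w @ mu))
```

Real applications (including, presumably, the study's) add no-short-sale bounds, which turns the problem into a quadratic program rather than a linear solve.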
Component Processes in Reading: Shared and Unique Variance in Serial and Isolated Naming Speed
ERIC Educational Resources Information Center
Logan, Jessica A. R.; Schatschneider, Christopher
2014-01-01
Reading ability is comprised of several component processes. In particular, the connection between the visual and verbal systems has been demonstrated to play an important role in the reading process. The present study provides a review of the existing literature on the visual verbal connection as measured by two tasks, rapid serial naming and…
ERIC Educational Resources Information Center
Brotheridge, Celeste M.; Power, Jacqueline L.
2008-01-01
Purpose: This study seeks to examine the extent to which the use of career center services results in the significant incremental prediction of career outcomes beyond its established predictors. Design/methodology/approach: The authors survey the clients of a public agency's career center and use hierarchical multiple regressions in order to…
Portfolio optimization using median-variance approach
NASA Astrophysics Data System (ADS)
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, and this is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve the portfolio optimization. This approach caters for both normal and non-normal data distributions. With this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, consisting of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach produces a lower risk for each level of return as compared to the mean-variance approach.
Automatic variance analysis of multistage care pathways.
Li, Xiang; Liu, Haifeng; Zhang, Shilei; Mei, Jing; Xie, Guotong; Yu, Yiqin; Li, Jing; Lakshmanan, Geetika T
2014-01-01
A care pathway (CP) is a standardized process that consists of multiple care stages, clinical activities and their relations, aimed at ensuring and enhancing the quality of care. However, actual care may deviate from the planned CP, and analysis of these deviations can help clinicians refine the CP and reduce medical errors. In this paper, we propose a CP variance analysis method to automatically identify the deviations between actual patient traces in electronic medical records (EMR) and a multistage CP. As the care stage information is usually unavailable in EMR, we first align every trace with the CP using a hidden Markov model. From the aligned traces, we report three types of deviations for every care stage: additional activities, absent activities and violated constraints, which are identified by using the techniques of temporal logic and binomial tests. The method has been applied to a CP for the management of congestive heart failure and real world EMR, providing meaningful evidence for the further improvement of care quality. PMID:25160280
On Studying Common Factor Variance in Multiple-Component Measuring Instruments
ERIC Educational Resources Information Center
Raykov, Tenko; Pohl, Steffi
2013-01-01
A method for examining common factor variance in multiple-component measuring instruments is outlined. The procedure is based on an application of the latent variable modeling methodology and is concerned with evaluating observed variance explained by a global factor and by one or more additional component-specific factors. The approach furnishes…
Heterogeneity of variances for carcass traits by percentage Brahman inheritance.
Crews, D H; Franke, D E
1998-07-01
Heterogeneity of carcass trait variances due to level of Brahman inheritance was investigated using records from straightbred and crossbred steers produced from 1970 to 1988 (n = 1,530). Angus, Brahman, Charolais, and Hereford sires were mated to straightbred and crossbred cows to produce straightbred, F1, back-cross, three-breed cross, and two-, three-, and four-breed rotational crossbred steers in four non-overlapping generations. At weaning (mean age = 220 d), steers were randomly assigned within breed group directly to the feedlot for 200 d, or to a backgrounding and stocker phase before feeding. Stocker steers were fed from 70 to 100 d in generations 1 and 2 and from 60 to 120 d in generations 3 and 4. Carcass traits included hot carcass weight, subcutaneous fat thickness and longissimus muscle area at the 12-13th rib interface, carcass weight-adjusted longissimus muscle area, USDA yield grade, estimated total lean yield, marbling score, and Warner-Bratzler shear force. Steers were classified as either high Brahman (50 to 100% Brahman), moderate Brahman (25 to 49% Brahman), or low Brahman (0 to 24% Brahman) inheritance. Two types of animal models were fit with regard to level of Brahman inheritance. One model assumed similar variances between pairs of Brahman inheritance groups, and the second model assumed different variances between pairs of Brahman inheritance groups. Fixed sources of variation in both models included direct and maternal additive and nonadditive breed effects, year of birth, and slaughter age. Variances were estimated using derivative free REML procedures. Likelihood ratio tests were used to compare models. The model accounting for heterogeneous variances had a greater likelihood (P < .001) than the model assuming homogeneous variances for hot carcass weight, longissimus muscle area, weight-adjusted longissimus muscle area, total lean yield, and Warner-Bratzler shear force, indicating improved fit with percentage Brahman inheritance
Code of Federal Regulations, 2011 CFR
2011-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2014 CFR
2014-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2010 CFR
2010-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2013 CFR
2013-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2012 CFR
2012-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Reducing variance in batch partitioning measurements
Mariner, Paul E.
2010-08-11
The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure Kd values (e.g., ASTM D 4646 and EPA 402-R-99-004A) do not explain how to evaluate measurement uncertainty nor how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjustment of the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
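The half-partitioned design point can be reproduced with a small Monte Carlo sketch. The noise model below (a constant absolute error on each measured concentration, with invented magnitudes) is an assumption for illustration, not the abstract's simulation setup; under it, the relative spread of Kd = (C0 − C)/C · V/m is smallest when roughly half the sorbate partitions to the sorbent.

```python
import numpy as np

rng = np.random.default_rng(2)

# Kd = (C0 - C)/C * V/m with a constant absolute concentration error
# (illustrative noise model and parameter values).
C0, V, m = 100.0, 0.04, 1e-3   # initial conc., solution volume (L), sorbent mass (kg)
sigma, trials = 1.0, 100_000   # absolute conc. error, MC replicates

def kd_rel_spread(f_sorbed):
    C = C0 * (1 - f_sorbed)                     # true final concentration
    c0 = C0 + sigma * rng.standard_normal(trials)
    c = C + sigma * rng.standard_normal(trials)
    kd = (c0 - c) / c * V / m
    return kd.std() / kd.mean()

spreads = {f: kd_rel_spread(f) for f in (0.05, 0.5, 0.95)}
print(min(spreads, key=spreads.get))  # 0.5
```

Intuitively, at low sorbed fractions Kd is a small difference of two noisy numbers, while at high sorbed fractions the final concentration itself is barely measurable; the mid-range design balances the two error sources.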
Variance anisotropy in compressible 3-D MHD
NASA Astrophysics Data System (ADS)
Oughton, S.; Matthaeus, W. H.; Wan, Minping; Parashar, Tulasi
2016-06-01
We employ spectral method numerical simulations to examine the dynamical development of anisotropy of the variance, or polarization, of the magnetic and velocity field in compressible magnetohydrodynamic (MHD) turbulence. Both variance anisotropy and spectral anisotropy emerge under the influence of a large-scale mean magnetic field B0; these are distinct effects, although sometimes related. Here we examine the appearance of variance parallel to B0, when starting from a highly anisotropic state. The discussion is based on a turbulence theoretic approach rather than a wave perspective. We find that parallel variance emerges over several characteristic nonlinear times, often attaining a quasi-steady level that depends on plasma beta. Consistency with solar wind observations seems to occur when the initial state is dominated by quasi-two-dimensional fluctuations.
Another Line for the Analysis of Variance
ERIC Educational Resources Information Center
Brown, Bruce L.; Harshbarger, Thad R.
1976-01-01
A test is developed for hypotheses about the grand mean in the analysis of variance, using the known relationship between the t distribution and the F distribution with 1 df (degree of freedom) for the numerator. (Author/RC)
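The known relationship this abstract builds on is that a t-distributed statistic with v degrees of freedom, when squared, follows an F distribution with 1 and v degrees of freedom, so two-sided t critical values square to F critical values. A quick numerical check (degrees of freedom and level chosen arbitrarily):

```python
from scipy import stats

# If T ~ t(v), then T^2 ~ F(1, v): critical values must agree.
v, alpha = 12, 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df=v)
f_crit = stats.f.ppf(1 - alpha, dfn=1, dfd=v)
print(abs(t_crit**2 - f_crit) < 1e-6)  # True
```

This identity is what lets a one-degree-of-freedom F test about the grand mean be carried out (or inverted) as an ordinary two-sided t test.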
Nonorthogonal Analysis of Variance Programs: An Evaluation.
ERIC Educational Resources Information Center
Hosking, James D.; Hamer, Robert M.
1979-01-01
Six computer programs for four methods of nonorthogonal analysis of variance are compared for capabilities, accuracy, cost, transportability, quality of documentation, associated computational capabilities, and ease of use: OSIRIS; SAS; SPSS; MANOVA; BMDP2V; and MULTIVARIANCE. (CTM)
Automated variance reduction for Monte Carlo shielding analyses with MCNP
NASA Astrophysics Data System (ADS)
Radulescu, Georgeta
Variance reduction techniques are employed in Monte Carlo analyses to increase the number of particles in the phase-space region of interest and thereby lower the variance of statistical estimation. Variance reduction parameters are required to perform Monte Carlo calculations. It is well known that adjoint solutions, even approximate ones, are excellent biasing functions that can significantly increase the efficiency of a Monte Carlo calculation. In this study, an automated method of generating Monte Carlo variance reduction parameters, and of implementing the source energy biasing and the weight window technique in MCNP shielding calculations has been developed. The method is based on the approach used in the SAS4 module of the SCALE code system, which derives the biasing parameters from an adjoint one-dimensional Discrete Ordinates calculation. Unlike SAS4, which determines the radial and axial dose rates of a spent fuel cask in separate calculations, the present method provides energy and spatial biasing parameters for the entire system that optimize the simulation of particle transport towards all external surfaces of a spent fuel cask. The energy and spatial biasing parameters are synthesized from the adjoint fluxes of three one-dimensional Discrete Ordinates adjoint calculations. Additionally, the present method accommodates multiple source regions, such as the photon sources in light-water reactor spent nuclear fuel assemblies, in one calculation. With this automated method, detailed and accurate dose rate maps for photons, neutrons, and secondary photons outside spent fuel casks or other containers can be efficiently determined with minimal effort.
Variational bayesian method of estimating variance components.
Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi
2016-07-01
We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and low population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian method and Gibbs sampling were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances of the variational Bayesian method were lower than those of the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling. PMID:26877207
Analysis and application of minimum variance discrete time system identification
NASA Technical Reports Server (NTRS)
Kaufman, H.; Kotob, S.
1975-01-01
An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
GR uniqueness and deformations
NASA Astrophysics Data System (ADS)
Krasnov, Kirill
2015-10-01
In the metric formulation, gravitons are described by the parity-symmetric S_+^2 ⊗ S_-^2 representation of the Lorentz group. General Relativity is then the unique theory of interacting gravitons with second-order field equations. We show that if a chiral S_+^3 ⊗ S_- representation is used instead, the uniqueness is lost, and there is an infinite-parametric family of theories of interacting gravitons with second-order field equations. We use the language of graviton scattering amplitudes, and show how the uniqueness of GR is avoided using simple dimensional analysis. The resulting gravity theories, all distinct from GR, are parity asymmetric, but share the MHV amplitudes of GR. They have new all-same-helicity graviton scattering amplitudes at every graviton order. The amplitudes with at least one graviton of opposite helicity continue to be determinable by the BCFW recursion.
Schumpe, Birga Mareen; Erb, Hans-Peter
2015-01-01
A defining force in the shaping of human identity is a person's need to feel special and different from others. Psychologists term this motivation Need for Uniqueness (NfU). There are manifold ways to establish feelings of uniqueness, e.g., by showing unusual consumption behaviour or by not conforming to majority views. The NfU can be seen as a stable personality trait, that is, individuals differ in their dispositional need to feel unique. The NfU is also influenced by situational factors and social environments. The cultural context is one important social setting shaping the NfU. This article aims to illuminate the NfU from a social psychological perspective. PMID:25942772
Functional Analysis of Variance for Association Studies
Vsevolozhskaya, Olga A.; Zaykin, Dmitri V.; Greenwood, Mark C.; Wei, Changshuai; Lu, Qing
2014-01-01
While progress has been made in identifying common genetic variants associated with human diseases, for most common complex diseases the identified genetic variants account for only a small proportion of heritability. Challenges remain in finding additional unknown genetic variants predisposing to complex diseases. With the advance of next-generation sequencing technologies, sequencing studies have become commonplace in genetic research. The ongoing exome-sequencing and whole-genome-sequencing studies generate a massive amount of sequencing variants and allow researchers to comprehensively investigate their role in human diseases. The discovery of new disease-associated variants can be enhanced by utilizing powerful and computationally efficient statistical methods. In this paper, we propose a functional analysis of variance (FANOVA) method for testing an association of sequence variants in a genomic region with a qualitative trait. The FANOVA has a number of advantages: (1) it tests for a joint effect of gene variants, including both common and rare; (2) it fully utilizes linkage disequilibrium and genetic position information; and (3) it allows for either protective or risk-increasing causal variants. Through simulations, we show that FANOVA outperforms two popular methods, SKAT and a previously proposed method based on functional linear models (FLM), especially if the sample size of a study is small and/or the sequence variants have low to moderate effects. We conduct an empirical study by applying the three methods (FANOVA, SKAT and FLM) to sequencing data from the Dallas Heart Study. While SKAT and FLM respectively detected ANGPTL4 and ANGPTL3 as associated with obesity, FANOVA was able to identify both genes as associated with obesity. PMID:25244256
Creativity and technical innovation: spatial ability's unique role.
Kell, Harrison J; Lubinski, David; Benbow, Camilla P; Steiger, James H
2013-09-01
In the late 1970s, 563 intellectually talented 13-year-olds (identified by the SAT as in the top 0.5% of ability) were assessed on spatial ability. More than 30 years later, the present study evaluated whether spatial ability provided incremental validity (beyond the SAT's mathematical and verbal reasoning subtests) for differentially predicting which of these individuals had patents and three classes of refereed publications. A two-step discriminant-function analysis revealed that the SAT subtests jointly accounted for 10.8% of the variance among these outcomes (p < .01); when spatial ability was added, an additional 7.6% was accounted for, a statistically significant increase (p < .01). The findings indicate that spatial ability has a unique role in the development of creativity, beyond the roles played by the abilities traditionally measured in educational selection, counseling, and industrial-organizational psychology. Spatial ability plays a key and unique role in structuring many important psychological phenomena and should be examined more broadly across the applied and basic psychological sciences. PMID:23846718
Discrimination of frequency variance for tonal sequences
Byrne, Andrew J.; Viemeister, Neal F.; Stellmack, Mark A.
2014-01-01
Real-world auditory stimuli are highly variable across occurrences and sources. The present study examined the sensitivity of human listeners to differences in global stimulus variability. In a two-interval, forced-choice task, variance discrimination was measured using sequences of five 100-ms tone pulses. The frequency of each pulse was sampled randomly from a distribution that was Gaussian in logarithmic frequency. In the non-signal interval, the sampled distribution had a variance of σ²_STAN, while in the signal interval the variance of the sequence was σ²_SIG (with σ²_SIG > σ²_STAN). The listener's task was to choose the interval with the larger variance. To constrain possible decision strategies, the mean frequency of the sampling distribution of each interval was chosen randomly for each presentation. Psychometric functions were measured for various values of σ²_STAN. Although performance was remarkably similar across listeners, overall performance was poorer than that of an ideal observer (IO) which perfectly compares interval variances. Like the IO, however, listeners exhibited Weber's-Law behavior: a constant ratio of (σ²_SIG − σ²_STAN) to σ²_STAN yielded similar performance. A model which degraded the IO with a frequency-resolution noise and a computational noise provided a reasonable fit to the data. PMID:25480064
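The ideal-observer comparison described in this abstract can be sketched in a small simulation. This is an illustration, not the authors' code; the roving range and variance values are invented, and the observer simply compares sample variances of the two intervals.

```python
import numpy as np

rng = np.random.default_rng(0)

def trial(var_stan, var_sig, n_tones=5):
    # One 2IFC trial: the ideal observer picks the interval whose
    # five-tone log-frequency sample has the larger sample variance.
    # The mean log frequency of each interval is roved independently.
    stan = rng.normal(rng.uniform(2.5, 3.5), np.sqrt(var_stan), n_tones)
    sig = rng.normal(rng.uniform(2.5, 3.5), np.sqrt(var_sig), n_tones)
    return np.var(sig, ddof=1) > np.var(stan, ddof=1)

def percent_correct(var_stan, ratio, n_trials=5000):
    # ratio = (sigma^2_SIG - sigma^2_STAN) / sigma^2_STAN
    var_sig = var_stan * (1.0 + ratio)
    return sum(trial(var_stan, var_sig) for _ in range(n_trials)) / n_trials

# Weber's-Law-like behavior: equal variance *ratios* give similar performance
p_small = percent_correct(0.01, ratio=2.0)
p_large = percent_correct(0.04, ratio=2.0)
```

Because sample variances scale with the true variance, the ideal observer's performance depends only on the variance ratio, which is the Weber's-Law behavior noted in the abstract.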
Relational mate value: consensus and uniqueness in romantic evaluations.
Eastwick, Paul W; Hunt, Lucy L
2014-05-01
Classic evolutionary and social exchange perspectives suggest that some people have more mate value than others because they possess desirable traits (e.g., attractiveness, status) that are intrinsic to the individual. This article broadens mate value in 2 ways to incorporate relational perspectives. First, close relationships research suggests an alternative measure of mate value: whether someone can provide a high quality relationship. Second, person perception research suggests that both trait-based and relationship quality measures of mate value should contain a mixture of target variance (i.e., consensus about targets, the classic conceptualization) and relationship variance (i.e., unique ratings of targets). In Study 1, participants described their personal conceptions of mate value and revealed themes consistent with classic and relational approaches. Study 2 used a social relations model blocked design to assess target and relationship variances in participants' romantic evaluations of opposite-sex classmates at the beginning and end of the semester. In Study 3, a one-with-many design documented target and relationship variances among long-term opposite-sex acquaintances. Results generally revealed more relationship variance than target variance; participants' romantic evaluations were more likely to be unique to a particular person rather than consensual. Furthermore, the relative dominance of relationship to target variance was stronger for relational measures of mate value (i.e., relationship quality projections) than classic trait-based measures (i.e., attractiveness, resources). Finally, consensus decreased as participants got to know one another better, and long-term acquaintances in Study 3 revealed enormous amounts of relationship variance. Implications for the evolutionary, close relationships, and person-perception literatures are discussed. PMID:24611897
Retief, François Pieter; Cilliers, Louise
2011-09-01
Akhenaten was a unique pharaoh in more ways than one. He initiated a major socio-religious revolution that had vast consequences for his country, and possessed a strikingly abnormal physiognomy that was of note in his time and has interested historians up to the present era. In this study, we attempt to identify the developmental disorder responsible for his eunuchoid appearance. PMID:21920162
ERIC Educational Resources Information Center
Goble, Don
2009-01-01
This article describes the many learning opportunities that broadcast technology students at Ladue Horton Watkins High School in St. Louis, Missouri, experience because of their unique access to technology and methods of learning. Through scaffolding, stepladder techniques, and trial by fire, students learn to produce multiple television programs,…
Cross-bispectrum computation and variance estimation
NASA Technical Reports Server (NTRS)
Lii, K. S.; Helland, K. N.
1981-01-01
A method for the estimation of cross-bispectra of discrete real time series is developed. The asymptotic variance properties of the bispectrum are reviewed, and a method for the direct estimation of bispectral variance is given. The symmetry properties are described which minimize the computations necessary to obtain a complete estimate of the cross-bispectrum in the right-half-plane. A procedure is given for computing the cross-bispectrum by subdividing the domain into rectangular averaging regions which help reduce the variance of the estimates and allow easy application of the symmetry relationships to minimize the computational effort. As an example of the procedure, the cross-bispectrum of a numerically generated, exponentially distributed time series is computed and compared with theory.
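A bare-bones segment-averaged estimate of a cross-bispectrum can be sketched as follows. This is an illustrative NumPy sketch of one common direct definition, B(k1, k2) = E[X(k1) Y(k2) Z*(k1+k2)], not the paper's procedure: it omits windowing, the rectangular averaging regions, the symmetry-based reduction to the right-half-plane, and the variance estimate.

```python
import numpy as np

def cross_bispectrum(x, y, z, seg_len=128):
    """Average X(k1)*Y(k2)*conj(Z(k1+k2)) over non-overlapping segments."""
    n_seg = len(x) // seg_len
    k = np.arange(seg_len // 2)
    acc = np.zeros((seg_len // 2, seg_len // 2), dtype=complex)
    for s in range(n_seg):
        sl = slice(s * seg_len, (s + 1) * seg_len)
        X, Y, Z = np.fft.fft(x[sl]), np.fft.fft(y[sl]), np.fft.fft(z[sl])
        acc += X[k][:, None] * Y[k][None, :] * np.conj(Z[(k[:, None] + k[None, :]) % seg_len])
    return acc / n_seg

rng = np.random.default_rng(0)
x, y, z = (rng.normal(size=1024) for _ in range(3))
B = cross_bispectrum(x, y, z)
```

Averaging over more segments reduces the variance of the estimate, which is the motivation for the subdivision scheme described in the abstract.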
Inhomogeneity-induced variance of cosmological parameters
NASA Astrophysics Data System (ADS)
Wiegand, A.; Schwarz, D. J.
2012-02-01
Context. Modern cosmology relies on the assumption of large-scale isotropy and homogeneity of the Universe. However, locally the Universe is inhomogeneous and anisotropic. This raises the question of how local measurements (at the ~10² Mpc scale) can be used to determine the global cosmological parameters (defined at the ~10⁴ Mpc scale). Aims: We connect the questions of cosmological backreaction, cosmic averaging and the estimation of cosmological parameters and show how they relate to the problem of cosmic variance. Methods: We used Buchert's averaging formalism and determined a set of locally averaged cosmological parameters in the context of the flat Λ cold dark matter model. We calculated their ensemble means (i.e. their global value) and variances (i.e. their cosmic variance). We applied our results to typical survey geometries and focused on the study of the effects of local fluctuations of the curvature parameter. Results: We show that in the context of standard cosmology at large scales (larger than the homogeneity scale and in the linear regime), the question of cosmological backreaction and averaging can be reformulated as the question of cosmic variance. The cosmic variance is found to be highest in the curvature parameter. We propose to use the observed variance of cosmological parameters to measure the growth factor. Conclusions: Cosmological backreaction and averaging are real effects that have been measured already for a long time, e.g. by the fluctuations of the matter density contrast averaged over spheres of a certain radius. Backreaction and averaging effects from scales in the linear regime, as considered in this work, are shown to be important for the precise measurement of cosmological parameters.
Large-scale magnetic variances near the South Solar Pole
NASA Technical Reports Server (NTRS)
Jokipii, J. R.; Kota, J.; Smith, E.; Horbury, T.; Giacalone, J.
1995-01-01
We summarize recent Ulysses observations of the variances over large temporal scales in the interplanetary magnetic field components and their increase as Ulysses approached the South Solar Pole. A model of these fluctuations is shown to provide a very good fit to the observed amplitude and temporal variation of the fluctuations. The model also predicts that the transport of cosmic rays in the heliosphere will be significantly altered by this level of fluctuations. Beyond altering the inward diffusion and drift access of cosmic rays over the solar poles, we find that the magnetic fluctuations also imply a large latitudinal diffusion, caused primarily by the associated field-line random walk.
ERIC Educational Resources Information Center
Yetkiner, Zeynep Ebrar
2009-01-01
Commonality analysis is a method of partitioning variance to determine the predictive ability unique to each predictor (or predictor set) and common to two or more of the predictors (or predictor sets). The purposes of the present paper are to (a) explain commonality analysis in a multiple regression context as an alternative for middle grades…
Wave propagation analysis using the variance matrix.
Sharma, Richa; Ivan, J Solomon; Narayanamurthy, C S
2014-10-01
The propagation of a coherent laser wave-field through a pseudo-random phase plate is studied using the variance matrix estimated from Shack-Hartmann wavefront sensor data. The uncertainty principle is used as a tool in discriminating the data obtained from the Shack-Hartmann wavefront sensor. Quantities of physical interest, such as the twist parameter and the symplectic eigenvalues, are estimated from the wavefront sensor measurements. A distance measure between two variance matrices is introduced and used to estimate the spatial asymmetry of a wave-field in the experiment. The estimated quantities are then used to compare a distorted wave-field with its undistorted counterpart. PMID:25401243
Variance in binary stellar population synthesis
NASA Astrophysics Data System (ADS)
Breivik, Katelyn; Larson, Shane L.
2016-03-01
In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.
Variance Reduction Using Nonreversible Langevin Samplers
NASA Astrophysics Data System (ADS)
Duncan, A. B.; Lelièvre, T.; Pavliotis, G. A.
2016-05-01
A standard approach to computing expectations with respect to a given target measure is to introduce an overdamped Langevin equation which is reversible with respect to the target distribution, and to approximate the expectation by a time-averaging estimator. As has been noted in recent papers [30, 37, 61, 72], introducing an appropriately chosen nonreversible component to the dynamics is beneficial, both in terms of reducing the asymptotic variance and of speeding up convergence to the target distribution. In this paper we present a detailed study of the dependence of the asymptotic variance on the deviation from reversibility. Our theoretical findings are supported by numerical simulations.
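The benefit of a nonreversible component can be illustrated on a toy Gaussian target. This is a sketch, not the paper's analysis: the target, the antisymmetric perturbation δJ, and all parameter values are invented for illustration, and the comparison is purely empirical (variance of the time-averaging estimator across independent replicates).

```python
import numpy as np

rng = np.random.default_rng(1)
J = np.array([[0.0, 1.0], [-1.0, 0.0]])  # antisymmetric matrix

def time_average_x1(delta, n_rep=200, T=50.0, dt=0.01):
    """Euler-Maruyama for dX = -(I + delta*J) grad V dt + sqrt(2) dW with
    V(x) = |x|^2/2 (standard Gaussian target). The antisymmetric part
    delta*J leaves the target invariant but breaks reversibility.
    Returns the time-averaged estimate of E[x1] for each replicate."""
    n_steps = int(T / dt)
    B = -(np.eye(2) + delta * J)  # linear drift matrix
    x = np.zeros((n_rep, 2))
    acc = np.zeros(n_rep)
    for _ in range(n_steps):
        x = x + dt * x @ B.T + np.sqrt(2 * dt) * rng.standard_normal((n_rep, 2))
        acc += x[:, 0]
    return acc / n_steps

# variance of the estimator across replicates, with and without the
# nonreversible perturbation
var_rev = np.var(time_average_x1(delta=0.0))
var_nonrev = np.var(time_average_x1(delta=2.0))
```

For this linear example the nonreversible dynamics decorrelates the chain faster, so the time-averaging estimator has a visibly smaller variance, consistent with the effect the abstract describes.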
A new variance-based global sensitivity analysis technique
NASA Astrophysics Data System (ADS)
Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen
2013-11-01
A new set of variance-based sensitivity indices, called W-indices, is proposed. Similar to Sobol's indices, both main and total effect indices are defined. The W-main effect indices measure the average reduction of model output variance when the ranges of a set of inputs are reduced, and the total effect indices quantify the average residual variance when the ranges of the remaining inputs are reduced. Geometrical interpretations show that the W-indices gather the full information of the variance ratio function, whereas Sobol's indices reflect only the marginal information. Then the double-loop-repeated-set Monte Carlo (MC) procedure (denoted DLRS MC), the double-loop-single-set MC procedure (denoted DLSS MC) and the model emulation procedure are introduced for estimating the W-indices. It is shown that the DLRS MC procedure is suitable for computing all the W-indices despite its high computational cost. The DLSS MC procedure is computationally efficient; however, it is only applicable for computing low-order indices. Model emulation is able to estimate all the W-indices with low computational cost as long as the model behavior is correctly captured by the emulator. The Ishigami function, a modified Sobol's function and two engineering models are utilized for comparing the W- and Sobol's indices and verifying the efficiency and convergence of the three numerical methods. Results show that, even for an additive model, the W-total effect index of one input may be significantly larger than its W-main effect index. This indicates that there may exist interaction effects among the inputs of an additive model when their distribution ranges are reduced.
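The W-index estimators themselves are not reproduced here, but the Sobol' main-effect indices they are compared against can be estimated for the Ishigami test function with a standard pick-freeze Monte Carlo scheme. A minimal sketch (sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

def ishigami(x, a=7.0, b=0.1):
    # Ishigami test function, inputs uniform on [-pi, pi]^3
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

n = 100_000
A = rng.uniform(-np.pi, np.pi, (n, 3))
B = rng.uniform(-np.pi, np.pi, (n, 3))
yA, yB = ishigami(A), ishigami(B)
var_y = np.var(np.concatenate([yA, yB]))

S = []
for i in range(3):
    ABi = A.copy()
    ABi[:, i] = B[:, i]  # A with column i taken from B ("pick-freeze")
    yABi = ishigami(ABi)
    # standard first-order (main effect) pick-freeze estimator
    S.append(np.mean(yB * (yABi - yA)) / var_y)
```

For a = 7, b = 0.1 the analytic main-effect indices are S1 ≈ 0.314, S2 ≈ 0.442, S3 = 0, so the Monte Carlo estimates should land close to those values.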
A Simple Algorithm for Approximating Confidence on the Modified Allan Variance and the Time Variance
NASA Technical Reports Server (NTRS)
Weiss, Marc A.; Greenhall, Charles A.
1996-01-01
An approximating algorithm for computing the equivalent degrees of freedom of the Modified Allan Variance and its square root, the Modified Allan Deviation (MVAR and MDEV), and of the Time Variance and Time Deviation (TVAR and TDEV) is presented, along with an algorithm for approximating the inverse chi-square distribution.
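Once an equivalent number of degrees of freedom (edf) is available, a confidence interval on the variance follows from the usual chi-square approximation. A minimal sketch of that final step (the edf-approximating algorithm itself is not reproduced here; SciPy's chi-square quantile function stands in for the paper's inverse chi-square approximation):

```python
from scipy.stats import chi2

def variance_confidence_interval(sample_var, edf, conf=0.68):
    """Two-sided confidence interval for a variance whose estimator is
    approximated as distributed like sample_var * chi2(edf) / edf,
    as is done for Allan-family variances with equivalent dof."""
    lo = edf * sample_var / chi2.ppf(0.5 + conf / 2, edf)
    hi = edf * sample_var / chi2.ppf(0.5 - conf / 2, edf)
    return lo, hi
```

For example, `variance_confidence_interval(1.0, 10.0)` brackets the point estimate 1.0, with the interval widening as the edf shrinks.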
Testing Variances in Psychological and Educational Research.
ERIC Educational Resources Information Center
Ramsey, Philip H.
1994-01-01
A review of the literature indicates that the two best procedures for testing variances are one that was proposed by O'Brien (1981) and another that was proposed by Brown and Forsythe (1974). An examination of these procedures for a variety of populations confirms their robustness and indicates how optimal power can usually be obtained. (SLD)
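The Brown-Forsythe (1974) procedure mentioned above is Levene's test computed on absolute deviations from the group medians, which SciPy exposes directly. A minimal sketch on simulated groups (sample sizes and variances are invented):

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(3)
g1 = rng.normal(0, 1.0, 60)  # standard deviation 1
g2 = rng.normal(0, 3.0, 60)  # standard deviation 3

# Brown-Forsythe test: Levene's test with median centering,
# which makes it robust to non-normality.
stat, p = levene(g1, g2, center='median')
```

With a ninefold variance ratio the test rejects equality of variances comfortably; `center='mean'` would give the original Levene test instead.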
Code of Federal Regulations, 2010 CFR
2010-01-01
... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Variance Reduction for a Discrete Velocity Gas
NASA Astrophysics Data System (ADS)
Morris, A. B.; Varghese, P. L.; Goldstein, D. B.
2011-05-01
We extend a variance reduction technique developed by Baker and Hadjiconstantinou [1] to a discrete velocity gas. In our previous work, the collision integral was evaluated by importance sampling of collision partners [2]. Significant computational effort may be wasted by evaluating the collision integral in regions where the flow is in equilibrium. In the current approach, substantial computational savings are obtained by only solving for the deviations from equilibrium. In the near continuum regime, the deviations from equilibrium are small and low noise evaluation of the collision integral can be achieved with very coarse statistical sampling. Spatially homogeneous relaxation of the Bobylev-Krook-Wu distribution [3,4] was used as a test case to verify that the method predicts the correct evolution of a highly non-equilibrium distribution to equilibrium. When variance reduction is not used, the noise causes the entropy to undershoot, but the method with variance reduction matches the analytic curve for the same number of collisions. We then extend the work to travelling shock waves and compare the accuracy and computational savings of the variance reduction method to DSMC over Mach numbers ranging from 1.2 to 10.
Multiple Comparison Procedures when Population Variances Differ.
ERIC Educational Resources Information Center
Olejnik, Stephen; Lee, JaeShin
A review of the literature on multiple comparison procedures suggests several alternative approaches for comparing means when population variances differ. These include: (1) the approach of P. A. Games and J. F. Howell (1976); (2) C. W. Dunnett's C confidence interval (1980); and (3) Dunnett's T3 solution (1980). These procedures control the…
Variance Anisotropy of Solar Wind fluctuations
NASA Astrophysics Data System (ADS)
Oughton, S.; Matthaeus, W. H.; Wan, M.; Osman, K.
2013-12-01
Solar wind observations at MHD scales indicate that the energy associated with velocity and magnetic field fluctuations transverse to the mean magnetic field is typically much larger than that associated with parallel fluctuations [e.g., 1]. This is often referred to as variance anisotropy. Various explanations for it have been suggested, including that the fluctuations are predominantly shear Alfven waves [1] and that turbulent dynamics leads to such states [e.g., 2]. Here we investigate the origin and strength of such variance anisotropies, using spectral method simulations of the compressible (polytropic) 3D MHD equations. We report on results from runs with initial conditions that are either (i) broadband turbulence or (ii) fluctuations polarized in the same sense as shear Alfven waves. The dependence of the variance anisotropy on the plasma beta and Mach number is examined [3], along with the timescale for any variance anisotropy to develop. Implications for solar wind fluctuations will be discussed. References: [1] Belcher, J. W. and Davis Jr., L. (1971), J. Geophys. Res., 76, 3534. [2] Matthaeus, W. H., Ghosh, S., Oughton, S. and Roberts, D. A. (1996), J. Geophys. Res., 101, 7619. [3] Smith, C. W., B. J. Vasquez and K. Hamilton (2006), J. Geophys. Res., 111, A09111.
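The quantity itself is simple: the ratio of fluctuation variance transverse to the mean field to the variance along it. A toy illustration with synthetic fluctuations (the amplitudes are invented; real analyses would rotate measured field vectors into mean-field-aligned coordinates first):

```python
import numpy as np

rng = np.random.default_rng(7)
# synthetic field fluctuations about a mean field along x:
# small parallel amplitude, larger transverse amplitudes
db = rng.normal(scale=[0.1, 0.3, 0.3], size=(10_000, 3))

var_parallel = np.var(db[:, 0])                       # along the mean field
var_transverse = np.var(db[:, 1]) + np.var(db[:, 2])  # the two transverse components
anisotropy = var_transverse / var_parallel            # >> 1: transverse-dominated
```

Values of this ratio well above unity correspond to the transverse-dominated state the observations show.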
Comparing the Variances of Two Dependent Groups.
ERIC Educational Resources Information Center
Wilcox, Rand R.
1990-01-01
Recently, C. E. McCulloch (1987) suggested a modification of the Morgan-Pitman test for comparing the variances of two dependent groups. This paper demonstrates that there are situations where the procedure is not robust. A subsample approach, similar to the Box-Scheffe test, and the Sandvik-Olsson procedure are also assessed. (TJH)
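The Morgan-Pitman test discussed above reduces to a simple correlation: two dependent groups have equal variances exactly when the pairwise sums and differences are uncorrelated. A minimal sketch on simulated dependent data (the dependence structure and scales are invented):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
n = 200
base = rng.normal(size=n)                  # shared component creates dependence
x = base + rng.normal(scale=0.5, size=n)   # group 1
y = base + rng.normal(scale=1.5, size=n)   # group 2, larger variance

# Cov(x+y, x-y) = Var(x) - Var(y), so equal variances <=> zero correlation
r, p = pearsonr(x + y, x - y)
```

Since Var(y) exceeds Var(x) here, the correlation is negative and the test rejects. McCulloch's modification (and the robustness issues the abstract raises) concern how this correlation is tested under non-normality.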
Formative Use of Intuitive Analysis of Variance
ERIC Educational Resources Information Center
Trumpower, David L.
2013-01-01
Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, students' IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In…
78 FR 14122 - Revocation of Permanent Variances
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-04
... OSHA's scaffolds standards for construction (77 FR 46948). Today's notice revoking the variances takes... Safety and Health Act of 1970 (OSH Act; 29 U.S.C. 651, 655) in 1971 (see 36 FR 7340). Paragraphs (a)(4..., construction, and use of scaffolds (61 FR 46026). In the preamble to the final rule, OSHA stated that it...
7 CFR 205.290 - Temporary variances.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Temporary variances. 205.290 Section 205.290 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) ORGANIC FOODS PRODUCTION ACT PROVISIONS NATIONAL ORGANIC PROGRAM...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 18 Conservation of Power and Water Resources 2 2012-04-01 2012-04-01 false Variances. 1304.408 Section 1304.408 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY APPROVAL OF CONSTRUCTION IN THE TENNESSEE RIVER SYSTEM AND REGULATION OF STRUCTURES AND OTHER ALTERATIONS...
Genetic and environmental heterogeneity of residual variance of weight traits in Nellore beef cattle
2012-01-01
Background Many studies have provided evidence of the existence of genetic heterogeneity of environmental variance, suggesting that it could be exploited to improve robustness and uniformity of livestock by selection. However, little is known about the perspectives of such a selection strategy in beef cattle. Methods A two-step approach was applied to study the genetic heterogeneity of residual variance of weight gain from birth to weaning and of long-yearling weight in a Nellore beef cattle population. First, an animal model was fitted to the data and second, the influence of additive and environmental effects on the residual variance of these traits was investigated with different models, in which the log squared estimated residuals for each phenotypic record were analyzed using the restricted maximum likelihood method. Monte Carlo simulation was performed to assess the reliability of variance component estimates from the second step and the accuracy of estimated breeding values for residual variation. Results The results suggest that both genetic and environmental factors have an effect on the residual variance of weight gain from birth to weaning and of long-yearling weight in Nellore beef cattle and that uniformity of these traits could be improved by selecting for lower residual variance, when considering a large amount of information to predict genetic merit for this criterion. Simulations suggested that using the two-step approach would lead to biased estimates of variance components, such that more adequate methods are needed to study the genetic heterogeneity of residual variance in beef cattle. PMID:22672564
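The core of the two-step approach can be sketched on simulated heteroscedastic data. This illustrates only the log-squared-residuals idea with ordinary least squares, not the authors' animal-model / REML implementation; the data-generating model is invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
x = rng.normal(size=n)
# heteroscedastic data: residual standard deviation grows with x
y = 2.0 + 1.5 * x + rng.normal(size=n) * np.exp(0.5 * x)

# Step 1: fit the mean model and keep the residuals
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Step 2: regress the log squared residuals on the same covariates
# to study how the residual variance depends on them
gamma, *_ = np.linalg.lstsq(X, np.log(resid**2 + 1e-12), rcond=None)
```

A positive slope in `gamma` flags covariate-dependent residual variance, which is the signal the second step of the paper's approach looks for (there, with genetic effects among the covariates).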
The Column Density Variance-M_s Relationship
NASA Astrophysics Data System (ADS)
Burkhart, Blakesley; Lazarian, A.
2012-08-01
Although there is a wealth of column density tracers for both the molecular and diffuse interstellar medium, there are few observational studies investigating the relationship between the density variance (σ²) and the sonic Mach number (M_s). This is in part due to the fact that the σ²-M_s relationship is derived, via MHD simulations, for the three-dimensional (3D) density variance only, which is not a direct observable. We investigate the utility of a 2D column density σ²_{Σ/Σ₀}-M_s relationship using solenoidally driven isothermal MHD simulations and find that the best fit follows closely the form of the 3D density σ²_{ρ/ρ₀}-M_s trend but includes a scaling parameter A such that σ²_{ln(Σ/Σ₀)} = A × ln(1 + b²M_s²), where A = 0.11 and b = 1/3. This relation is consistent with the observational data reported for the Taurus and IC 5146 molecular clouds with b = 0.5 and A = 0.16, and b = 0.5 and A = 0.12, respectively. These results open up the possibility of using the 2D column density values of σ² for investigations of the relation between the sonic Mach number and the probability distribution function (PDF) variance in addition to existing PDF sonic Mach number relations.
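The fitted relation σ²_{ln(Σ/Σ₀)} = A ln(1 + b² M_s²) is straightforward to evaluate; a quick sketch using the simulation-fit values quoted above (A = 0.11, b = 1/3):

```python
import numpy as np

def column_density_variance(mach_s, A=0.11, b=1.0 / 3.0):
    """Variance of log column density from the fitted relation
    sigma^2_{ln(Sigma/Sigma_0)} = A * ln(1 + b^2 * M_s^2)."""
    return A * np.log(1.0 + b**2 * mach_s**2)
```

The relation vanishes at M_s = 0 and grows monotonically with the sonic Mach number, so in principle an observed 2D column-density variance can be inverted for M_s.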
R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization
Dazard, Jean-Eudes; Xu, Hua; Rao, J. Sunil
2015-01-01
We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited to the difficult problems posed by high-dimensional multivariate datasets (the p ≫ n paradigm), such as ‘omics’-type data, in which the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) a normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real ‘omics’ test datasets, (v) a computationally efficient implementation using C interfacing and an option for parallel computing, and (vi) a manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR (‘Mean-Variance Regularization’), downloadable from CRAN. PMID:26819572
Cosmic variance in inflation with two light scalars
NASA Astrophysics Data System (ADS)
Bonga, Béatrice; Brahma, Suddhasattwa; Deutsch, Anne-Sylvie; Shandera, Sarah
2016-05-01
We examine the squeezed limit of the bispectrum when a light scalar with arbitrary non-derivative self-interactions is coupled to the inflaton. We find that when the hidden sector scalar is sufficiently light (m ≲ 0.1H), the coupling between long and short wavelength modes from the series of higher order correlation functions (from arbitrary order contact diagrams) causes the statistics of the fluctuations to vary in sub-volumes. This means that observations of primordial non-Gaussianity cannot be used to uniquely reconstruct the potential of the hidden field. However, the local bispectrum induced by mode-coupling from these diagrams always has the same squeezed limit, so the field's locally determined mass is not affected by this cosmic variance.
Abel, David L.
2011-01-01
Is life physicochemically unique? No. Is life unique? Yes. Life manifests innumerable formalisms that cannot be generated or explained by physicodynamics alone. Life pursues thousands of biofunctional goals, not the least of which is staying alive. Neither physicodynamics, nor evolution, pursue goals. Life is largely directed by linear digital programming and by the Prescriptive Information (PI) instantiated particularly into physicodynamically indeterminate nucleotide sequencing. Epigenomic controls only compound the sophistication of these formalisms. Life employs representationalism through the use of symbol systems. Life manifests autonomy, homeostasis far from equilibrium in the harshest of environments, positive and negative feedback mechanisms, prevention and correction of its own errors, and organization of its components into Sustained Functional Systems (SFS). Chance and necessity—heat agitation and the cause-and-effect determinism of nature’s orderliness—cannot spawn formalisms such as mathematics, language, symbol systems, coding, decoding, logic, organization (not to be confused with mere self-ordering), integration of circuits, computational success, and the pursuit of functionality. All of these characteristics of life are formal, not physical. PMID:25382119
42 CFR 456.525 - Request for renewal of variance.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 4 2010-10-01 2010-10-01 false Request for renewal of variance. 456.525 Section..., and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time Requirements § 456.525 Request for renewal of variance. (a) The agency must submit a request for renewal of...
10 CFR 851.32 - Action on variance requests.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Action on variance requests. 851.32 Section 851.32 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.32 Action on variance requests. (a... approval of a variance application, the Chief Health, Safety and Security Officer must forward to the...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Variances. 50-204.1a... and Application § 50-204.1a Variances. (a) Variances from standards in this part may be granted in the same circumstances in which variances may be granted under sections 6(b)(6)(A) or 6(d) of the...
21 CFR 898.14 - Exemptions and variances.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Exemptions and variances. 898.14 Section 898.14... variances. (a) A request for an exemption or variance shall be submitted in the form of a petition under... with the device; and (4) Other information justifying the exemption or variance. (b) An exemption...
10 CFR 851.30 - Consideration of variances.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Consideration of variances. 851.30 Section 851.30 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.30 Consideration of variances. (a) Variances shall be granted by the Under Secretary after considering the recommendation of the Chief...
42 CFR 456.521 - Conditions for granting variance requests.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 4 2010-10-01 2010-10-01 false Conditions for granting variance requests. 456.521..., and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time Requirements § 456.521 Conditions for granting variance requests. (a) Except as described under paragraph...
Mixed emotions: Sensitivity to facial variance in a crowd of faces.
Haberman, Jason; Lee, Pegan; Whitney, David
2015-01-01
The visual system automatically represents summary information from crowds of faces, such as the average expression. This is a useful heuristic insofar as it provides critical information about the state of the world, not simply information about the state of one individual. However, the average alone is not sufficient for making decisions about how to respond to a crowd. The variance or heterogeneity of the crowd--the mixture of emotions--conveys information about the reliability of the average, essential for determining whether the average can be trusted. Despite its importance, the representation of variance within a crowd of faces has yet to be examined. This is addressed here in three experiments. In the first experiment, observers viewed a sample set of faces that varied in emotion, and then adjusted a subsequent set to match the variance of the sample set. To isolate variance as the summary statistic of interest, the average emotion of both sets was random. Results suggested that observers had information regarding crowd variance. The second experiment verified that this was indeed a uniquely high-level phenomenon, as observers were unable to derive the variance of an inverted set of faces as precisely as an upright set of faces. The third experiment replicated and extended the first two experiments using the method of constant stimuli. Together, these results show that the visual system is sensitive to emergent information about the emotional heterogeneity, or ambivalence, in crowds of faces. PMID:26676106
Analysis of variance of microarray data.
Ayroles, Julien F; Gibson, Greg
2006-01-01
Analysis of variance (ANOVA) is an approach used to identify differentially expressed genes in complex experimental designs. It is based on testing for the significance of the magnitude of effect of two or more treatments taking into account the variance within and between treatment classes. ANOVA is a highly flexible analytical approach that allows investigators to simultaneously assess the contributions of multiple factors to gene expression variation, including technical (dye, batch) effects and biological (sex, genotype, drug, time) ones, as well as interactions between factors. This chapter provides an overview of the theory of linear mixed-model analysis and the sequence of steps involved in fitting gene-specific models and discusses essential features of experimental design. Commercial and open-source software for performing ANOVA is widely available. PMID:16939792
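The core computation the abstract describes (comparing between-treatment to within-treatment variance, gene by gene) can be sketched with standard tools. The data below are simulated, and the design is a simplified one-way layout rather than the multi-factor mixed models the chapter covers:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy expression matrix: 100 genes x 12 arrays in 3 treatment groups of 4.
n_genes, n_per_group = 100, 4
groups = [rng.normal(0.0, 1.0, (n_genes, n_per_group)) for _ in range(3)]
groups[0][:10] += 3.0  # first 10 genes are differentially expressed in group 1

# Gene-wise one-way ANOVA: F-ratio of between- to within-group variance.
f_stat, p_val = stats.f_oneway(*groups, axis=1)

n_hits = int((p_val < 0.01).sum())
print(f"{n_hits} genes significant at p < 0.01")
```

In a real microarray analysis the per-gene p-values would then be adjusted for multiple testing (e.g. false discovery rate control) before genes are declared differentially expressed.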
PHD filtering with localised target number variance
NASA Astrophysics Data System (ADS)
Delande, Emmanuel; Houssineau, Jérémie; Clark, Daniel
2013-05-01
Mahler's Probability Hypothesis Density (PHD) filter, proposed in 2000, addresses the challenges of the multiple-target detection and tracking problem by propagating a mean density of the targets in any region of the state space. However, when retrieving some local evidence on target presence becomes a critical component of a larger process - e.g. for sensor management purposes - the local target number alone is insufficient unless some confidence in the estimate of the number of targets can be provided as well. In this paper, we propose a first implementation of a PHD filter that also includes an estimate of the localised variance in the target number following each update step; we then illustrate the advantage of the PHD filter with variance on simulated data from a multiple-target scenario.
Applications of non-parametric statistics and analysis of variance on sample variances
NASA Technical Reports Server (NTRS)
Myers, R. H.
1981-01-01
Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to offer recommendations as to when each method is applicable, and to compare the methods, when possible, with the usual normal-theory procedures available for the Gaussian analog. It is important here to point out the hypotheses being tested, the assumptions being made, and the limitations of the nonparametric procedures. The appropriateness of performing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.
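The normality concern raised above is exactly where normal-theory and robust procedures for comparing variances diverge. A small sketch on simulated heavy-tailed data contrasts Bartlett's normal-theory test with Levene's more robust alternative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Three groups with equal variance but heavy tails (Student t, 3 df),
# the kind of departure from normality that worries the survey above.
groups = [stats.t.rvs(df=3, size=50, random_state=rng) for _ in range(3)]

# Bartlett's test relies on normality; Levene's median-centered variant
# (Brown-Forsythe) is the robust, near-nonparametric alternative.
b_stat, b_p = stats.bartlett(*groups)
l_stat, l_p = stats.levene(*groups, center='median')
print(f"Bartlett p = {b_p:.3f}, Levene p = {l_p:.3f}")
```

Under heavy tails, Bartlett's test tends to reject equal variances far too often, which is the practical reason the robust variant is usually preferred for checking the homogeneous-variance assumption.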
Analysis and application of minimum variance discrete time system identification
NASA Technical Reports Server (NTRS)
Kotob, S.; Kaufman, H.
1976-01-01
An on-line minimum variance parameter identifier was developed which embodies both accuracy and computational efficiency. The new formulation resulted in a linear estimation problem with both additive and multiplicative noise. The resulting filter is shown to utilize both the covariance of the parameter vector itself and the covariance of the error in identification. It is proven that the identification filter is mean-square convergent and mean-square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
Analysis and application of minimum variance discrete linear system identification
NASA Technical Reports Server (NTRS)
Kotob, S.; Kaufman, H.
1977-01-01
An on-line minimum variance (MV) parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise (AMN). The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean-square convergent and mean-square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
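A minimal sketch of the idea of an on-line parameter identifier that exploits the covariance of its own estimate, using textbook recursive least squares on a simulated ARX model. This is a simplified analog only; the paper's filter additionally handles multiplicative noise:

```python
import numpy as np

rng = np.random.default_rng(2)

# ARX model y[k] = a*y[k-1] + b*u[k-1] + e[k] with unknown (a, b).
theta_true = np.array([0.8, -0.4])

theta = np.zeros(2)            # parameter estimate
P = 100.0 * np.eye(2)          # covariance of the estimation error
y_prev, u_prev = 0.0, 0.0

for k in range(1000):
    phi = np.array([y_prev, u_prev])          # regressor vector
    y_k = theta_true @ phi + 0.05 * rng.normal()
    K = P @ phi / (1.0 + phi @ P @ phi)       # gain from the covariance
    theta = theta + K * (y_k - phi @ theta)   # innovation update
    P = P - np.outer(K, phi) @ P              # covariance update
    y_prev, u_prev = y_k, rng.normal()        # persistently exciting input

print(theta)  # converges toward theta_true
```

The gain is large while the parameter covariance P is large and shrinks as the estimate becomes trustworthy, which is what makes the recursion both accurate and cheap per step.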
Analysis of variance based on fuzzy observations
NASA Astrophysics Data System (ADS)
Nourbakhsh, M.; Mashinchi, M.; Parchami, A.
2013-04-01
Analysis of variance (ANOVA) is an important method in exploratory and confirmatory data analysis. The simplest type of ANOVA is one-way ANOVA for comparison among means of several populations. In this article, we extend one-way ANOVA to a case where observed data are fuzzy observations rather than real numbers. Two real-data examples are given to show the performance of this method.
The Theory of Variances in Equilibrium Reconstruction
Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren
2008-01-14
The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.
Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans
NASA Astrophysics Data System (ADS)
Raju, C.; Vidya, R.
2016-06-01
In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1,c2) are studied. It is assumed that inspection follows a rejection-rectification scheme. Procedures for designing MDS-1(c1,c2) sampling plans with minimum variance of OQ and TI are developed. A procedure for obtaining a plan for a designated upper limit for the variance of the OQ (VOQL) is outlined.
Hypothesis exploration with visualization of variance
2014-01-01
Background The Consortium for Neuropsychiatric Phenomics (CNP) at UCLA was an investigation into the biological bases of traits such as memory and response inhibition phenotypes—to explore whether they are linked to syndromes including ADHD, Bipolar disorder, and Schizophrenia. An aim of the consortium was to move from traditional categorical approaches for psychiatric syndromes towards more quantitative approaches based on large-scale analysis of the space of human variation. It represented an application of phenomics—wide-scale, systematic study of phenotypes—to neuropsychiatry research. Results This paper reports on a system for exploration of hypotheses in data obtained from the LA2K, LA3C, and LA5C studies in CNP. ViVA is a system for exploratory data analysis using novel mathematical models and methods for visualization of variance. An example of these methods is called VISOVA, a combination of visualization and analysis of variance, with the flavor of exploration associated with ANOVA in biomedical hypothesis generation. It permits visual identification of phenotype profiles—patterns of values across phenotypes—that characterize groups. Visualization enables screening and refinement of hypotheses about variance structure of sets of phenotypes. Conclusions The ViVA system was designed for exploration of neuropsychiatric hypotheses by interdisciplinary teams. Automated visualization in ViVA supports ‘natural selection’ on a pool of hypotheses, and permits deeper understanding of the statistical architecture of the data. Large-scale perspective of this kind could lead to better neuropsychiatric diagnostics. PMID:25097666
Directional variance analysis of annual rings
NASA Astrophysics Data System (ADS)
Kumpulainen, P.; Marjanen, K.
2010-07-01
The wood quality measurement methods are of increasing importance in the wood industry. The goal is to produce more high quality products with higher marketing value than is produced today. One of the key factors for increasing the market value is to provide better measurements for increased information to support the decisions made later in the product chain. Strength and stiffness are important properties of the wood. They are related to mean annual ring width and its deviation. These indicators can be estimated from images taken from the log ends by two-dimensional power spectrum analysis. The spectrum analysis has been used successfully for images of pine. However, the annual rings in birch, for example, are less distinguishable and the basic spectrum analysis method does not give reliable results. A novel method for local log end variance analysis based on the Radon transform is proposed. The directions and the positions of the annual rings can be estimated from local minimum and maximum variance estimates. Applying the spectrum analysis on the maximum local variance estimate instead of the original image produces a more reliable estimate of the annual ring width. The proposed method is not limited to log end analysis only. It is usable in other two-dimensional random signal and texture analysis tasks.
Irreversible Langevin samplers and variance reduction: a large deviations approach
NASA Astrophysics Data System (ADS)
Rey-Bellet, Luc; Spiliopoulos, Konstantinos
2015-07-01
In order to sample from a given target distribution (often of Gibbs type), the Markov chain Monte Carlo method consists of constructing an ergodic Markov process whose invariant measure is the target distribution. By sampling the Markov process one can then compute, approximately, expectations of observables with respect to the target distribution. Often the Markov processes used in practice are time-reversible (i.e. they satisfy detailed balance), but our main goal here is to assess and quantify how the addition of a non-reversible part to the process can be used to improve the sampling properties. We focus on the diffusion setting (overdamped Langevin equations) where the drift consists of a gradient vector field as well as another drift which breaks the reversibility of the process but is chosen to preserve the Gibbs measure. In this paper we use the large deviation rate function for the empirical measure as a tool to analyze the speed of convergence to the invariant measure. We show that the addition of an irreversible drift leads to a larger rate function and it strictly improves the speed of convergence of ergodic averages for (generic smooth) observables. We also deduce from this result that the asymptotic variance decreases under the addition of the irreversible drift, and we give an explicit characterization of the observables whose variance is not reduced, in terms of a nonlinear Poisson equation. Our theoretical results are illustrated and supplemented by numerical simulations.
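The central claim, that a measure-preserving but reversibility-breaking drift reduces the variance of ergodic averages, can be illustrated numerically. This is a toy Euler-Maruyama sketch on a 2D Gaussian target, not the authors' analysis:

```python
import numpy as np

rng = np.random.default_rng(3)

# Target: standard 2D Gaussian, V(x) = |x|^2/2, Gibbs measure exp(-V).
# The drift -grad V = -x is reversible; adding gamma*J*x with J
# antisymmetric leaves exp(-V) invariant but breaks detailed balance.
J = np.array([[0.0, -1.0], [1.0, 0.0]])

def time_average(gamma, n_steps=10_000, dt=0.01):
    """Ergodic average of f(x) = x_1 along one Euler-Maruyama path."""
    x = np.zeros(2)
    total = 0.0
    for _ in range(n_steps):
        drift = -x + gamma * (J @ x)
        x = x + drift * dt + np.sqrt(2.0 * dt) * rng.normal(size=2)
        total += x[0]
    return total / n_steps

means_rev = [time_average(gamma=0.0) for _ in range(30)]
means_irr = [time_average(gamma=2.0) for _ in range(30)]
# The irreversible sampler's ergodic averages fluctuate less.
print(np.var(means_rev), np.var(means_irr))
```

Both samplers target the same Gaussian (so both averages are near zero), but the spread of the time averages across independent runs, an empirical proxy for the asymptotic variance, is visibly smaller with the rotational drift.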
Estimation of Noise-Free Variance to Measure Heterogeneity
Winkler, Tilo; Melo, Marcos F. Vidal; Degani-Costa, Luiza H.; Harris, R. Scott; Correia, John A.; Musch, Guido; Venegas, Jose G.
2015-01-01
Variance is a statistical parameter used to characterize heterogeneity or variability in data sets. However, measurements commonly include noise, as random errors superimposed on the actual value, which may substantially increase the variance compared to a noise-free data set. Our aim was to develop and validate a method to estimate noise-free spatial heterogeneity of pulmonary perfusion using dynamic positron emission tomography (PET) scans. On theoretical grounds, we demonstrate a linear relationship between the total variance of a data set derived from averages of n multiple measurements, and the reciprocal of n. Using multiple measurements with varying n yields estimates of the linear relationship including the noise-free variance as the constant parameter. In PET images, n is proportional to the number of registered decay events, and the variance of the image is typically normalized by the square of its mean value yielding a coefficient of variation squared (CV2). The method was evaluated with a Jaszczak phantom as reference spatial heterogeneity (CVr2) for comparison with our estimate of noise-free or ‘true’ heterogeneity (CVt2). We found that CVt2 was only 5.4% higher than CVr2. Additional evaluations were conducted on 38 PET scans of pulmonary perfusion using 13NN-saline injection. The mean CVt2 was 0.10 (range: 0.03–0.30), while the mean CV2 including noise was 0.24 (range: 0.10–0.59). CVt2 was on average 41.5% of the CV2 measured including noise (range: 17.8–71.2%). The reproducibility of CVt2 was evaluated using three repeated PET scans from five subjects. Individual CVt2 were within 16% of each subject's mean and paired t-tests revealed no difference among the results from the three consecutive PET scans. In conclusion, our method provides reliable noise-free estimates of CVt2 in PET scans, and may be useful for similar statistical problems in experimental data. PMID:25906374
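The linear relationship between total variance and 1/n that the method exploits can be demonstrated on synthetic data; the numbers below are simulated, not PET measurements:

```python
import numpy as np

rng = np.random.default_rng(4)

# True per-location values (the noise-free heterogeneity we want back).
signal = rng.normal(10.0, 2.0, size=5000)
true_var = signal.var()
noise_sd = 3.0

def variance_of_n_averages(n):
    """Variance of a data set formed by averaging n noisy replicates."""
    noise = rng.normal(0.0, noise_sd, size=(n, signal.size)).mean(axis=0)
    return (signal + noise).var()

ns = np.array([1, 2, 4, 8, 16, 32])
total_var = np.array([variance_of_n_averages(n) for n in ns])

# Total variance is linear in 1/n: var_total = var_true + var_noise / n,
# so the intercept of a straight-line fit recovers the noise-free variance.
slope, intercept = np.polyfit(1.0 / ns, total_var, 1)
print(intercept, true_var)
```

The intercept (the variance extrapolated to n → ∞, i.e. infinitely many averaged replicates) estimates the noise-free variance, while the slope estimates the noise variance itself.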
FMRI group analysis combining effect estimates and their variances.
Chen, Gang; Saad, Ziad S; Nath, Audrey R; Beauchamp, Michael S; Cox, Robert W
2012-03-01
Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an approach that does not make such strong assumptions, and present a computationally efficient frequentist approach to FMRI group analysis, which we term mixed-effects multilevel analysis (MEMA), that incorporates both the variability across subjects and the precision estimate of each effect of interest from individual subject analyses. On average, the more accurate tests result in higher statistical power, especially when conventional variance assumptions do not hold, or in the presence of outliers. In addition, various heterogeneity measures are available with MEMA that may assist the investigator in further improving the modeling. Our method allows group effect t-tests and comparisons among conditions and among groups. In addition, it has the capability to incorporate subject-specific covariates such as age, IQ, or behavioral data. Simulations were performed to illustrate power comparisons and the capability of controlling type I errors among various significance testing methods, and the results indicated that the testing statistic we adopted struck a good balance between power gain and type I error control. Our approach is instantiated in an open-source, freely distributed program that may be used on any dataset stored in the Neuroimaging Informatics Technology Initiative (NIfTI) format. To date, the main impediment for more accurate testing that incorporates both within- and cross-subject variability has been the high computational cost. Our efficient implementation makes this approach practical.
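The idea of combining per-subject effect estimates with their variances can be sketched with a standard random-effects (inverse-variance) combination. This illustrates the general principle at a single voxel, not the MEMA implementation, and the numbers are hypothetical:

```python
import numpy as np

# Per-subject effect estimates and their within-subject variances at one
# voxel (hypothetical numbers for 8 subjects).
beta = np.array([1.2, 0.4, 2.5, 0.9, 1.6, 2.8, 0.3, 1.4])
var_within = np.array([0.10, 0.05, 0.40, 0.08, 0.12, 0.90, 0.06, 0.15])

# Method-of-moments (DerSimonian-Laird) estimate of the cross-subject
# variance component tau^2 from the heterogeneity statistic Q.
w = 1.0 / var_within
mu_fixed = np.sum(w * beta) / np.sum(w)
Q = np.sum(w * (beta - mu_fixed) ** 2)
df = len(beta) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Group effect: each subject weighted by total precision, combining the
# within-subject variance with the cross-subject component.
w_star = 1.0 / (var_within + tau2)
mu = np.sum(w_star * beta) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
print(f"group effect = {mu:.3f} +/- {se:.3f}, tau^2 = {tau2:.3f}")
```

In contrast, the conventional approach criticized in the abstract would simply take the unweighted mean of `beta` and its standard error, discarding `var_within` entirely.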
Clarke, Peter; Varghese, Philip; Goldstein, David
2014-12-09
We extend a variance reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. The method is verified through comparisons to Direct Simulation Monte Carlo computations and the computational workload per time step is investigated for the variance reduced method.
Influence of genetic variance on sodium sensitivity of blood pressure.
Luft, F C; Miller, J Z; Weinberger, M H; Grim, C E; Daugherty, S A; Christian, J C
1987-02-01
To examine the effect of genetic variance on blood pressure, sodium homeostasis, and its regulatory determinants, we studied 37 pairs of monozygotic twins and 18 pairs of dizygotic twins under conditions of volume expansion and contraction. We found that, in addition to blood pressure and body size, sodium excretion in response to provocative maneuvers, glomerular filtration rate, the renin-angiotensin system, and the sympathetic nervous system are influenced by genetic variance. To elucidate the interaction of genetic factors and an environmental influence, namely, salt intake, we restricted dietary sodium in 44 families of twin children. In addition to a modest decrease in blood pressure, we found heterogeneous responses in blood pressure indicative of sodium sensitivity and resistance which were normally distributed. Strong parent-offspring resemblances were found in baseline blood pressures which persisted when adjustments were made for age and weight. Further, mother-offspring resemblances were observed in the change in blood pressure with sodium restriction. We conclude that the control of sodium homeostasis is heritable and that the change in blood pressure with sodium restriction is familial as well. These data speak to the interaction between the genetic susceptibility to hypertension and environmental influences which may result in its expression. PMID:3553721
Visual SLAM Using Variance Grid Maps
NASA Technical Reports Server (NTRS)
Howard, Andrew B.; Marks, Tim K.
2011-01-01
An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
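The elevation variance map at the heart of this representation can be illustrated with a per-cell running variance (Welford's online algorithm). This sketches only the map data structure, not the particle filter or visual odometry; the class and its interface are hypothetical:

```python
import numpy as np

class ElevationVarianceGrid:
    """Per-cell running mean/variance of elevation samples (Welford's method).

    Illustrative of the map representation only; Gamma-SLAM itself maintains
    such a grid per particle inside a Rao-Blackwellized particle filter.
    """
    def __init__(self, shape):
        self.n = np.zeros(shape)      # samples seen per cell
        self.mean = np.zeros(shape)   # running mean elevation per cell
        self.m2 = np.zeros(shape)     # running sum of squared deviations

    def add(self, row, col, elevation):
        self.n[row, col] += 1
        delta = elevation - self.mean[row, col]
        self.mean[row, col] += delta / self.n[row, col]
        self.m2[row, col] += delta * (elevation - self.mean[row, col])

    def variance(self, row, col):
        n = self.n[row, col]
        return self.m2[row, col] / (n - 1) if n > 1 else 0.0

grid = ElevationVarianceGrid((10, 10))
for z in [1.0, 1.2, 0.8, 1.1]:   # stereo elevation samples landing in one cell
    grid.add(3, 4, z)
print(grid.variance(3, 4))
```

Storing the variance rather than a single elevation per cell lets rough terrain (high variance) be distinguished from smooth terrain at the same height, which is what makes the map useful for matching on irregularly structured ground.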
Probability of the residual wavefront variance of an adaptive optics system and its application.
Huang, Jian; Liu, Chao; Deng, Ke; Yao, Zhousi; Xian, Hao; Li, Xinyang
2016-02-01
For performance evaluation of an adaptive optics (AO) system, the probability of the system residual wavefront variance can provide more information than the wavefront variance average. By studying the Zernike coefficients of an AO system residual wavefront, we derived the exact expressions for the probability density functions of the wavefront variance and the Strehl ratio, for instantaneous and long-term exposures owing to the insufficient control loop bandwidth of the AO system. Our calculations agree with the residual wavefront data of a closed loop AO system. Using these functions, we investigated the relationship between the AO system bandwidth and the distribution of the residual wavefront variance. Additionally, we analyzed the availability of an AO system for evaluating the AO performance. These results will assist in the design and probabilistic analysis of AO systems. PMID:26906850
An Empirical Temperature Variance Source Model in Heated Jets
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Bridges, James
2012-01-01
An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.
Estimators for variance components in structured stair nesting models
NASA Astrophysics Data System (ADS)
Monteiro, Sandra; Fonseca, Miguel; Carvalho, Francisco
2016-06-01
The purpose of this paper is to present the estimation of the components of variance in structured stair nesting models. The relationship between the canonical variance components and the original ones will be very important in obtaining those estimators.
What motivates nonconformity? Uniqueness seeking blocks majority influence.
Imhoff, Roland; Erb, Hans-Peter
2009-03-01
A high need for uniqueness undermines majority influence. Need for uniqueness (a) arises in psychological states in which individuals feel indistinguishable from others and (b) motivates compensatory acts to reestablish a sense of uniqueness. Three studies demonstrate that a striving for uniqueness motivates individuals to resist majority influence. In Study 1, the need for uniqueness was measured, and it was found that individuals high in need for uniqueness yielded less to majority influence than those low in need for uniqueness. In Study 2, participants who received personality feedback undermining their feeling of uniqueness agreed less with a majority (vs. minority) position. Study 3 replicated this effect and additionally demonstrated the motivational nature of the assumed mechanism: An alternative means that allowed participants to regain a feeling of uniqueness canceled out the effect of high need for uniqueness on majority influence. PMID:19098256
Latent Variable Models of Need for Uniqueness.
Tepper, K; Hoyle, R H
1996-10-01
The theory of uniqueness has been invoked to explain attitudinal and behavioral nonconformity with respect to peer-group, social-cultural, and statistical norms, as well as the development of a distinctive view of self via seeking novelty goods, adopting new products, acquiring scarce commodities, and amassing material possessions. Present research endeavors in psychology and consumer behavior are inhibited by uncertainty regarding the psychometric properties of the Need for Uniqueness Scale, the primary instrument for measuring individual differences in uniqueness motivation. In an important step toward facilitating research on uniqueness motivation, we used confirmatory factor analysis to evaluate three a priori latent variable models of responses to the Need for Uniqueness Scale. Among the a priori models, an oblique three-factor model best accounted for commonality among items. Exploratory factor analysis followed by estimation of unrestricted three- and four-factor models revealed that a model with a complex pattern of loadings on four modestly correlated factors may best explain the latent structure of the Need for Uniqueness Scale. Additional analyses evaluated the associations among the three a priori factors and an array of individual differences. Results of those analyses indicated the need to distinguish among facets of the uniqueness motive in behavioral research. PMID:26788594
40 CFR 124.62 - Decision on variances.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 21 2010-07-01 2010-07-01 false Decision on variances. 124.62 Section... FOR DECISIONMAKING Specific Procedures Applicable to NPDES Permits § 124.62 Decision on variances... following variances (subject to EPA objection under § 123.44 for State permits): (1) Extensions under...
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Can I get a variance? 59.509 Section 59... Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a) Any... its reasonable control may apply in writing to the Administrator for a temporary variance....
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Variances and exceptions. 27... CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws may provide for variances and exceptions. (b) Bylaws adopted pursuant to these standards shall...
20 CFR 901.40 - Proof; variance; amendment of pleadings.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Proof; variance; amendment of pleadings. 901... Suspension or Termination of Enrollment § 901.40 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in a pleading and the evidence adduced in support of the pleading,...
31 CFR 10.67 - Proof; variance; amendment of pleadings.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Proof; variance; amendment of... BEFORE THE INTERNAL REVENUE SERVICE Rules Applicable to Disciplinary Proceedings § 10.67 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in pleadings and the...
7 CFR 718.105 - Tolerances, variances, and adjustments.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 7 2010-01-01 2010-01-01 false Tolerances, variances, and adjustments. 718.105... APPLICABLE TO MULTIPLE PROGRAMS Determination of Acreage and Compliance § 718.105 Tolerances, variances, and... marketing quota crop allotment. (d) An administrative variance is applicable to all allotment crop...
40 CFR 52.1390 - Missoula variance provision.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 4 2010-07-01 2010-07-01 false Missoula variance provision. 52.1390... (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) Montana § 52.1390 Missoula variance provision. The Missoula City-County Air Pollution Control Program's Chapter X, Variances, which was...
29 CFR 1905.5 - Effect of variances.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 5 2010-07-01 2010-07-01 false Effect of variances. 1905.5 Section 1905.5 Labor... RULES OF PRACTICE FOR VARIANCES, LIMITATIONS, VARIATIONS, TOLERANCES, AND EXEMPTIONS UNDER THE WILLIAMS-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT OF 1970 General § 1905.5 Effect of variances. All...
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Variances for unusual operations. 190... Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified in § 190.10 may be exceeded if: (a) The regulatory agency has granted a variance based upon...
40 CFR 124.64 - Appeals of variances.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 21 2010-07-01 2010-07-01 false Appeals of variances. 124.64 Section... FOR DECISIONMAKING Specific Procedures Applicable to NPDES Permits § 124.64 Appeals of variances. (a) When a State issues a permit on which EPA has made a variance decision, separate appeals of the...
31 CFR 8.59 - Proof; variance; amendment of pleadings.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Proof; variance; amendment of... BEFORE THE BUREAU OF ALCOHOL, TOBACCO AND FIREARMS Disciplinary Proceedings § 8.59 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in a pleading, the...
36 CFR 30.5 - Variances, exceptions, and use permits.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Variances, exceptions, and... UNIT § 30.5 Variances, exceptions, and use permits. (a) Zoning ordinances or amendments thereto, for... Recreation Area may provide for the granting of variances and exceptions. (b) Zoning ordinances or...
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions. (a) Variances or exemptions from certain provisions...
29 CFR 1905.5 - Effect of variances.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 29 Labor 5 2014-07-01 2014-07-01 false Effect of variances. 1905.5 Section 1905.5 Labor...-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT OF 1970 General § 1905.5 Effect of variances. All variances granted pursuant to this part shall have only future effect. In his discretion, the Assistant...
Recognition by variance: learning rules for spatiotemporal patterns.
Barak, Omri; Tsodyks, Misha
2006-10-01
Recognizing specific spatiotemporal patterns of activity, which take place at timescales much larger than the synaptic transmission and membrane time constants, is a demand placed on the nervous system, exemplified, for instance, by auditory processing. We consider the total synaptic input that a single readout neuron receives on presentation of spatiotemporal spiking input patterns. Relying on the monotonic relation between the mean and the variance of a neuron's input current and its spiking output, we derive learning rules that increase the variance of the input current evoked by learned patterns relative to that obtained from random background patterns. We demonstrate that the model can successfully recognize a large number of patterns and exhibits a slow deterioration in performance with increasing number of learned patterns. In addition, robustness to time warping of the input patterns is revealed to be an emergent property of the model. Using a leaky integrate-and-fire realization of the readout neuron, we demonstrate that the above results also apply when considering spiking output. PMID:16907629
Argentine Population Genetic Structure: Large Variance in Amerindian Contribution
Seldin, Michael F.; Tian, Chao; Shigeta, Russell; Scherbarth, Hugo R.; Silva, Gabriel; Belmont, John W.; Kittles, Rick; Gamron, Susana; Allevi, Alberto; Palatnik, Simon A.; Alvarellos, Alejandro; Paira, Sergio; Caprarulo, Cesar; Guillerón, Carolina; Catoggio, Luis J.; Prigione, Cristina; Berbotto, Guillermo A.; García, Mercedes A.; Perandones, Carlos E.; Pons-Estel, Bernardo A.; Alarcon-Riquelme, Marta E.
2011-01-01
Argentine population genetic structure was examined using a set of 78 ancestry informative markers (AIMs) to assess the contributions of European, Amerindian, and African ancestry in 94 individual members of this population. Using the Bayesian clustering algorithm STRUCTURE, the mean European contribution was 78%, the Amerindian contribution was 19.4%, and the African contribution was 2.5%. Similar results were found using the weighted least mean squares method: European, 80.2%; Amerindian, 18.1%; and African, 1.7%. Consistent with previous studies, the current results showed very few individuals (four of 94) with greater than 10% African admixture. Notably, when individual admixture was examined, the Amerindian and European admixture showed a very large variance, and individual Amerindian contribution ranged from 1.5 to 84.5% in the 94 individual Argentine subjects. These results indicate that admixture must be considered when clinical epidemiology or case control genetic analyses are studied in this population. Moreover, the current study provides a set of informative SNPs that can be used to ascertain or control for this potentially hidden stratification. In addition, the large variance in admixture proportions in individual Argentine subjects shown by this study suggests that this population is appropriate for future admixture mapping studies. PMID:17177183
The Parabolic Variance (PVAR): A Wavelet Variance Based on the Least-Square Fit.
Vernotte, Francois; Lenczner, Michel; Bourgeois, Pierre-Yves; Rubiola, Enrico
2016-04-01
This paper introduces the parabolic variance (PVAR), a wavelet variance similar to the Allan variance (AVAR), based on the linear regression (LR) of phase data. The companion article arXiv:1506.05009 [physics.ins-det] details the Ω frequency counter, which implements the LR estimate. The PVAR combines the advantages of AVAR and modified AVAR (MVAR). PVAR is good for long-term analysis because the wavelet spans over 2τ, the same as the AVAR wavelet, and good for short-term analysis because the response to white and flicker PM is 1/τ³ and 1/τ², the same as the MVAR. After setting the theoretical framework, we study the degrees of freedom and the confidence interval for the most common noise types. Then, we focus on the detection of a weak noise process at the transition (or corner) where a faster process rolls off. This new perspective raises the question of which variance detects the weak process with the shortest data record. Our simulations show that PVAR is a fortunate tradeoff. PVAR is superior to MVAR in all cases, exhibits the best ability to divide between fast noise phenomena (up to flicker FM), and is almost as good as AVAR for the detection of random walk and drift. PMID:26571523
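The abstract contrasts PVAR with the classic Allan variance. As a point of reference, here is a minimal sketch of the non-overlapping AVAR computed from fractional-frequency data; the function name and interface are our own illustration, not from the paper:

```python
def avar(y, m):
    """Non-overlapping Allan variance of fractional-frequency data y
    at averaging factor m (averaging time tau = m * tau0).

    Averages y over adjacent blocks of m samples, then takes half the
    mean squared difference of consecutive block averages.
    """
    # Block averages over non-overlapping windows of length m.
    k = len(y) // m
    means = [sum(y[i * m:(i + 1) * m]) / m for i in range(k)]
    diffs = [(b - a) ** 2 for a, b in zip(means, means[1:])]
    return sum(diffs) / (2 * len(diffs))
```

For white frequency noise AVAR falls off as 1/τ; the paper's point is that PVAR keeps AVAR's long-term behaviour while matching MVAR's steeper 1/τ³ and 1/τ² response to white and flicker phase noise.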
Dynamics of mean-variance-skewness of cumulative crop yield impact temporal yield variance
Technology Transfer Automated Retrieval System (TEKTRAN)
Production risk associated with cropping systems influences farmers’ decisions to adopt a new management practice or a production system. Cumulative yield (CY), temporal yield variance (TYV) and coefficient of variation (CV) were used to assess the risk associated with adopting combinations of new m...
The variance of the adjusted Rand index.
Steinley, Douglas; Brusco, Michael J; Hubert, Lawrence
2016-06-01
For 30 years, the adjusted Rand index has been the preferred method for comparing 2 partitions (e.g., clusterings) of a set of observations. Although the index is widely used, little is known about its variability. Herein, the variance of the adjusted Rand index (Hubert & Arabie, 1985) is provided and its properties are explored. It is shown that a normal approximation is appropriate across a wide range of sample sizes and varying numbers of clusters. Further, it is shown that confidence intervals based on the normal distribution have desirable levels of coverage and accuracy. Finally, the first power analysis evaluating the ability to detect differences between two different adjusted Rand indices is provided. PMID:26881693
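For context, the adjusted Rand index itself is computed from the contingency table of the two partitions via the Hubert-Arabie formula; a minimal sketch (the function name is ours, and the degenerate case where both partitions are a single cluster is not handled):

```python
from math import comb
from collections import Counter

def adjusted_rand_index(a, b):
    """Adjusted Rand index of two partitions given as label lists."""
    n = len(a)
    # Pair counts within each contingency-table cell, row, and column.
    sum_nij = sum(comb(c, 2) for c in Counter(zip(a, b)).values())
    sum_a = sum(comb(c, 2) for c in Counter(a).values())
    sum_b = sum(comb(c, 2) for c in Counter(b).values())
    expected = sum_a * sum_b / comb(n, 2)   # chance-expected agreement
    max_index = (sum_a + sum_b) / 2
    return (sum_nij - expected) / (max_index - expected)
```

Identical partitions score 1 regardless of how labels are named; chance-level agreement scores near 0, which is the correction the "adjusted" in the name refers to.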
Motion Detection Using Mean Normalized Temporal Variance
Chan, C W
2003-08-04
Scene-Based Wave Front Sensing uses the correlation between successive wavelets to determine the phase aberrations which cause the blurring of digital images. Adaptive Optics technology uses that information to control deformable mirrors to correct for the phase aberrations, making the image clearer. The correlation between temporal subimages gives tip-tilt information. If these images do not have identical image content, tip-tilt estimations may be incorrect. Motion detection is necessary to help avoid errors initiated by dynamic subimage content. With a finite number of pixels per subaperture, most conventional motion detection algorithms fall apart on our subimages. Despite this fact, motion detection based on the normalized variance of individual pixels proved to be effective.
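A minimal sketch of the per-pixel normalized temporal variance the abstract describes, averaged over a subimage. The function name, the variance-over-mean normalization, and the idea of thresholding the score are our reading of the abstract, not the report's exact formulation:

```python
from statistics import mean, pvariance

def mean_normalized_temporal_variance(frames):
    """Mean over pixels of (temporal variance / temporal mean).

    frames: a time series of equally sized 2-D intensity arrays
    (lists of rows). Pixels with zero mean are skipped to avoid
    division by zero.
    """
    rows, cols = len(frames[0]), len(frames[0][0])
    scores = []
    for r in range(rows):
        for c in range(cols):
            series = [frame[r][c] for frame in frames]
            mu = mean(series)
            if mu != 0:
                # pvariance accepts the precomputed mean as its 2nd arg.
                scores.append(pvariance(series, mu) / mu)
    return mean(scores) if scores else 0.0
```

A static subimage scores 0; motion raises the score, so comparing against an empirically chosen threshold can flag subimages with dynamic content.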
Calculating bone-lead measurement variance.
Todd, A C
2000-01-01
The technique of (109)Cd-based X-ray fluorescence (XRF) measurements of lead in bone is well established. A paper by some XRF researchers [Gordon CL, et al. The Reproducibility of (109)Cd-based X-ray Fluorescence Measurements of Bone Lead. Environ Health Perspect 102:690-694 (1994)] presented the currently practiced method for calculating the variance of an in vivo measurement once a calibration line has been established. This paper corrects typographical errors in the method published by those authors; presents a crude estimate of the measurement error that can be acquired without computational peak fitting programs; and draws attention to the measurement error attributable to covariance, an important feature in the construct of the currently accepted method that is flawed under certain circumstances. PMID:10811562
Variance-based interaction index measuring heteroscedasticity
NASA Astrophysics Data System (ADS)
Ito, Keiichi; Couckuyt, Ivo; Poles, Silvia; Dhaene, Tom
2016-06-01
This work is motivated by the need to deal with models with high-dimensional input spaces of real variables. One way to tackle high-dimensional problems is to identify interaction or non-interaction among input parameters. We propose a new variance-based sensitivity interaction index that can detect and quantify interactions among the input variables of mathematical functions and computer simulations. The computation is very similar to first-order sensitivity indices by Sobol'. The proposed interaction index can quantify the relative importance of input variables in interaction. Furthermore, detection of non-interaction for screening can be done with as few as 4n + 2 function evaluations, where n is the number of input variables. Using the interaction indices based on heteroscedasticity, the original function may be decomposed into a set of lower dimensional functions which may then be analyzed separately.
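The index above is computed "very similarly to first-order sensitivity indices by Sobol'". As background, first-order Sobol' indices can be evaluated exactly for a function on a small discrete product grid; this sketches that standard decomposition (not the authors' heteroscedasticity-based index), with names of our own choosing:

```python
from itertools import product
from statistics import mean, pvariance

def first_order_sobol(f, grids):
    """Exact first-order Sobol' indices of f over a discrete product grid.

    S_i = Var_{x_i}( E[f | x_i] ) / Var(f), computed by enumeration.
    grids: one list of values per input variable.
    """
    points = list(product(*grids))
    total = pvariance([f(*p) for p in points])  # total output variance
    indices = []
    for i, g in enumerate(grids):
        # Conditional mean of f for each fixed value of variable i.
        cond_means = [mean(f(*p) for p in points if p[i] == v) for v in g]
        indices.append(pvariance(cond_means) / total)
    return indices
```

For the additive f(x, y) = x + 2y the indices sum to 1 (no interaction); for f(x, y) = x*y they sum to 2/3, and the shortfall 1 - S1 - S2 is the interaction share that the paper's index aims to detect.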
Individualized Additional Instruction for Calculus
ERIC Educational Resources Information Center
Takata, Ken
2010-01-01
College students enrolling in the calculus sequence have a wide variance in their preparation and abilities, yet they are usually taught from the same lecture. We describe another pedagogical model of Individualized Additional Instruction (IAI) that assesses each student frequently and prescribes further instruction and homework based on the…
Event Segmentation Ability Uniquely Predicts Event Memory
Sargent, Jesse Q.; Zacks, Jeffrey M.; Hambrick, David Z.; Zacks, Rose T.; Kurby, Christopher A.; Bailey, Heather R.; Eisenberg, Michelle L.; Beck, Taylor M.
2013-01-01
Memory for everyday events plays a central role in tasks of daily living, autobiographical memory, and planning. Event memory depends in part on segmenting ongoing activity into meaningful units. This study examined the relationship between event segmentation and memory in a lifespan sample to answer the following question: Is the ability to segment activity into meaningful events a unique predictor of subsequent memory, or is the relationship between event perception and memory accounted for by general cognitive abilities? Two hundred and eight adults ranging from 20 to 79 years old segmented movies of everyday events and attempted to remember the events afterwards. They also completed psychometric ability tests and tests measuring script knowledge for everyday events. Event segmentation and script knowledge both explained unique variance in event memory above and beyond the psychometric measures, and did so as strongly in older as in younger adults. These results suggest that event segmentation is a basic cognitive mechanism, important for memory across the lifespan. PMID:23942350
Methods to estimate the between-study variance and its uncertainty in meta-analysis.
Veroniki, Areti Angeliki; Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian P T; Langan, Dean; Salanti, Georgia
2016-03-01
Meta-analyses are typically used to estimate the overall/mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance, has been long challenged. Our aim is to identify known methods for estimation of the between-study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between-study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and for continuous data the restricted maximum likelihood estimator are better alternatives to estimate the between-study variance. Based on the scenarios and results presented in the published studies, we recommend the Q-profile method and the alternative approach based on a 'generalised Cochran between-study variance statistic' to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence-based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. PMID:26332144
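The DerSimonian and Laird estimator discussed above is a method-of-moments formula; a minimal sketch of it (the function name is ours):

```python
def dersimonian_laird_tau2(effects, variances):
    """DerSimonian-Laird moment estimator of between-study variance tau^2.

    effects:   observed study effect sizes y_i
    variances: their within-study variances v_i
    """
    w = [1.0 / v for v in variances]               # inverse-variance weights
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi * wi for wi in w) / sw
    k = len(effects)
    return max(0.0, (q - (k - 1)) / c)             # truncated at zero
```

The truncation at zero is one reason the estimator is challenged: with few studies the untruncated value is often negative, so the estimate piles up at 0 even when real heterogeneity exists.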
Is nonmedical prescription opiate use a unique form of illicit drug use?
Catalano, Richard F; White, Helene R; Fleming, Charles B; Haggerty, Kevin P
2011-01-01
Nonmedical prescription opiate (NMPO) use is of great concern because of its high addiction potential, cognitive impairment effects, and other adverse consequences (e.g., hormonal and immune system effects, hyperalgesia and overdose). Due to the combination of drugs used by those who are NMPO users, it is difficult to isolate the negative effects of NMPO use from the effects of other legal and illicit drugs. Based on a stage model of substance use, this study tested whether NMPO use represents a unique form of illicit drug use among emerging adults and whether there are unique consequences of early NMPO use. We used longitudinal data from 912 emerging adults from the Raising Healthy Children study who were interviewed at least annually from the first or second grade through age 21. The findings indicated that almost all NMPO users have also used marijuana and a large majority has also used other drugs, such as cocaine and ecstasy. In addition, more frequent users of NMPOs are also more frequent users of other drugs. Except for violent behavior, NMPO use explained little unique variance in negative outcomes of use (e.g., drug use disorder, mood disorder, nonproductive behavior, poor health, and property crime) beyond that explained by other illicit drug use. Future studies examining the predictors or consequences of NMPO use and nonmedical use of other prescription drugs need to consider use within the context of other drug use. PMID:20864261
Estimating discharge measurement uncertainty using the interpolated variance estimator
Cohn, T.; Kiang, J.; Mason, R., Jr.
2012-01-01
Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
Modularity, comparative cognition and human uniqueness.
Shettleworth, Sara J
2012-10-01
Darwin's claim 'that the difference in mind between man and the higher animals … is certainly one of degree and not of kind' is at the core of the comparative study of cognition. Recent research provides unprecedented support for Darwin's claim as well as new reasons to question it, stimulating new theories of human cognitive uniqueness. This article compares and evaluates approaches to such theories. Some prominent theories propose sweeping domain-general characterizations of the difference in cognitive capabilities and/or mechanisms between adult humans and other animals. Dual-process theories for some cognitive domains propose that adult human cognition shares simple basic processes with that of other animals while additionally including slower-developing and more explicit uniquely human processes. These theories are consistent with a modular account of cognition and the 'core knowledge' account of children's cognitive development. A complementary proposal is that human infants have unique social and/or cognitive adaptations for uniquely human learning. A view of human cognitive architecture as a mosaic of unique and species-general modular and domain-general processes together with a focus on uniquely human developmental mechanisms is consistent with modern evolutionary-developmental biology and suggests new questions for comparative research. PMID:22927578
Respiratory infections unique to Asia.
Tsang, Kenneth W; File, Thomas M
2008-11-01
Asia is a highly heterogeneous region with vastly different cultures, social constitutions and populations affected by a wide spectrum of respiratory diseases caused by tropical pathogens. Asian patients with community-acquired pneumonia differ from their Western counterparts in microbiological aetiology, in particular the prominence of Gram-negative organisms, Mycobacterium tuberculosis, Burkholderia pseudomallei and Staphylococcus aureus. In addition, differences in socioeconomic and health-care infrastructures limit the usefulness of Western management guidelines for pneumonia in Asia. Emerging infectious diseases such as severe acute respiratory syndrome and avian influenza infection remain close concerns for practising respirologists in Asia. Specific infections such as melioidosis, dengue haemorrhagic fever, scrub typhus, leptospirosis, salmonellosis, penicilliosis marneffei, malaria, amoebiasis, paragonimiasis, strongyloidiasis, gnathostomiasis, trichinellosis, schistosomiasis and echinococcosis occur commonly in Asia and manifest with a prominent respiratory component. Pulmonary eosinophilia, endemic in parts of Asia, can occur with a wide range of tropical infections. Tropical eosinophilia is believed to be a hypersensitivity reaction to degenerating microfilariae trapped in the lungs. This article addresses the key respiratory issues in these infections unique to Asia and highlights the important diagnostic and management issues faced by practising respirologists. PMID:18945321
Explanatory Variance in Maximal Oxygen Uptake
Robert McComb, Jacalyn J.; Roh, Daesung; Williams, James S.
2006-01-01
The purpose of this study was to develop a prediction equation that could be used to estimate maximal oxygen uptake (VO2max) from a submaximal water running protocol. Thirty-two volunteers (n = 19 males, n = 13 females), ages 18-24 years, underwent the following testing procedures: (a) a 7-site skin fold assessment; (b) a land VO2max running treadmill test; and (c) a 6 min water running test. For the water running submaximal protocol, the participants were fitted with an Aqua Jogger Classic Uni-Sex Belt and a Polar Heart Rate Monitor; the participants' head, shoulders, hips and feet were vertically aligned, using a modified running/bicycle motion. A regression model was used to predict VO2max. The criterion variable, VO2max, was measured using open-circuit calorimetry utilizing the Bruce Treadmill Protocol. Predictor variables included in the model were percent body fat (%BF), height, weight, gender, and heart rate following a 6 min water running protocol. Percent body fat accounted for 76% (r = -0.87, SEE = 3.27) of the variance in VO2max. No other variables significantly contributed to the explained variance in VO2max. The equation for the estimation of VO2max is as follows: VO2max (ml·kg⁻¹·min⁻¹) = 56.14 - 0.92 (%BF). Key points: body fat is an important predictor of VO2max; individuals with low skill level in water running may shorten their stride length to avoid the onset of fatigue at higher workloads, so the net oxygen cost of the exercise cannot be controlled in inexperienced water runners at fatiguing workloads; experiments using water running protocols to predict VO2max should use individuals trained in the mechanics of water running; and a submaximal water running protocol for such trained individuals is needed in the research literature, given the popularity of water running in rehabilitative exercise and training programs. PMID:24260003
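The reported regression reduces to a one-line estimate; a minimal sketch using the coefficients from the abstract (the function name is ours):

```python
def estimate_vo2max(percent_body_fat):
    """Estimated VO2max in ml/kg/min from the study's regression:
    VO2max = 56.14 - 0.92 * (% body fat), with SEE = 3.27.
    """
    return 56.14 - 0.92 * percent_body_fat
```

As the abstract cautions, the equation was derived from participants trained in water running mechanics, so it should not be applied outside that population.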
Cahyadi, Muhammad; Park, Hee-Bok; Seo, Dong-Won; Jin, Shil; Choi, Nuri; Heo, Kang-Nyeong; Kang, Bo-Seok; Jo, Cheorun; Lee, Jun-Heon
2016-01-01
Quantitative trait locus (QTL) is a particular region of the genome containing one or more genes associated with economically important quantitative traits. This study was conducted to identify QTL regions for body weight and growth traits in purebred Korean native chicken (KNC). F1 samples (n = 595) were genotyped using 127 microsatellite markers and 8 single nucleotide polymorphisms that covered 2,616.1 centimorgans (cM) of map length for 26 autosomal linkage groups. Body weight traits were measured every 2 weeks from hatch to 20 weeks of age. Weight of half carcass was also collected together with growth rate. A multipoint variance component linkage approach was used to identify QTLs for the body weight traits. Two significant QTLs for growth were identified on chicken chromosome 3 (GGA3) for growth from 16 to 18 weeks (logarithm of the odds [LOD] = 3.24, nominal p value = 0.0001) and GGA4 for growth from 6 to 8 weeks (LOD = 2.88, nominal p value = 0.0003). Additionally, one significant QTL and three suggestive QTLs were detected for body weight traits in KNC; a significant QTL for body weight at 4 weeks (LOD = 2.52, nominal p value = 0.0007) and a suggestive QTL for 8 weeks (LOD = 1.96, nominal p value = 0.0027) were detected on GGA4; QTLs were also detected for two different body weight traits: body weight at 16 weeks on GGA3 and body weight at 18 weeks on GGA19. Additionally, two suggestive QTLs for carcass weight were detected at 0 and 70 cM on GGA19. In conclusion, the current study identified several significant and suggestive QTLs that affect growth related traits in a unique resource pedigree in purebred KNC. This information will contribute to improving the body weight traits in native chicken breeds, especially for the Asian native chicken breeds. PMID:26732327
The genetic and environmental roots of variance in negativity toward foreign nationals.
Kandler, Christian; Lewis, Gary J; Feldhaus, Lea Henrike; Riemann, Rainer
2015-03-01
This study quantified genetic and environmental roots of variance in prejudice and discriminatory intent toward foreign nationals and examined potential mediators of these genetic influences: right-wing authoritarianism (RWA), social dominance orientation (SDO), and narrow-sense xenophobia (NSX). In line with the dual process motivational (DPM) model, we predicted that the two basic attitudinal and motivational orientations-RWA and SDO-would account for variance in out-group prejudice and discrimination. In line with other theories, we expected that NSX as an affective component would explain additional variance in out-group prejudice and discriminatory intent. Data from 1,397 individuals (incl. twins as well as their spouses) were analyzed. Univariate analyses of twins' and spouses' data yielded genetic (incl. contributions of assortative mating) and multiple environmental sources (i.e., social homogamy, spouse-specific, and individual-specific effects) of variance in negativity toward strangers. Multivariate analyses suggested an extension to the DPM model by including NSX in addition to RWA and SDO as predictor of prejudice and discrimination. RWA and NSX primarily mediated the genetic influences on the variance in prejudice and discriminatory intent toward foreign nationals. In sum, the findings provide the basis of a behavioral genetic framework integrating different scientific disciplines for the study of negativity toward out-groups. PMID:25534512
Variance component estimates for alternative litter size traits in swine.
Putz, A M; Tiezzi, F; Maltecca, C; Gray, K A; Knauer, M T
2015-11-01
Litter size at d 5 (LS5) has been shown to be an effective trait to increase total number born (TNB) while simultaneously decreasing preweaning mortality. The objective of this study was to determine the optimal litter size day for selection (i.e., other than d 5). Traits included TNB, number born alive (NBA), litter size at d 2, 5, 10, 30 (LS2, LS5, LS10, LS30, respectively), litter size at weaning (LSW), number weaned (NW), piglet mortality at d 30 (MortD30), and average piglet birth weight (BirthWt). Litter size traits were assigned to biological litters and treated as a trait of the sow. In contrast, NW was the number of piglets weaned by the nurse dam. Bivariate animal models included farm, year-season, and parity as fixed effects. Number born alive was fit as a covariate for BirthWt. Random effects included additive genetics and the permanent environment of the sow. Variance components were plotted for TNB, NBA, and LS2 to LS30 using univariate animal models to determine how variances changed over time. Additive genetic variance was minimized at d 7 in Large White and at d 14 in Landrace pigs. Total phenotypic variance for litter size traits decreased over the first 10 d and then stabilized. Heritability estimates increased between TNB and LS30. Genetic correlations between TNB, NBA, and LS2 to LS29 with LS30 plateaued within the first 10 d. A genetic correlation with LS30 of 0.95 was reached at d 4 for Large White and at d 8 for Landrace pigs. Heritability estimates ranged from 0.07 to 0.13 for litter size traits and MortD30. Birth weight had an h² of 0.24 and 0.26 for Large White and Landrace pigs, respectively. Genetic correlations among LS30, LSW, and NW ranged from 0.97 to 1.00. In the Large White breed, genetic correlations between MortD30 with TNB and LS30 were 0.23 and -0.64, respectively. These correlations were 0.10 and -0.61 in the Landrace breed. A high genetic correlation of 0.98 and 0.97 was observed between LS10 and NW for Large White and
Cyclostationary analysis with logarithmic variance stabilisation
NASA Astrophysics Data System (ADS)
Borghesani, Pietro; Shahriar, Md Rifat
2016-03-01
Second order cyclostationary (CS2) components in vibration or acoustic emission signals are typical symptoms of a wide variety of faults in rotating and alternating mechanical systems. The square envelope spectrum (SES), obtained via Hilbert transform of the original signal, is at the basis of the most common indicators used for detection of CS2 components. It has been shown that the SES is equivalent to an autocorrelation of the signal's discrete Fourier transform, and that CS2 components are a cause of high correlations in the frequency domain of the signal, thus resulting in peaks in the SES. Statistical tests have been proposed to determine if peaks in the SES are likely to belong to a normal variability in the signal or if they are proper symptoms of CS2 components. Despite the need for automated fault recognition and the theoretical soundness of these tests, this approach to machine diagnostics has been mostly neglected in industrial applications. In fact, in a series of experimental applications, even with proper pre-whitening steps, it has been found that healthy machines might produce high spectral correlations and therefore result in a highly biased SES distribution which might cause a series of false positives. In this paper a new envelope spectrum is defined, with the theoretical intent of rendering the hypothesis test variance-free. This newly proposed indicator will prove unbiased in case of multiple CS2 sources of spectral correlation, thus reducing the risk of false alarms.
2012-09-01
This final rule adopts the standard for a national unique health plan identifier (HPID) and establishes requirements for the implementation of the HPID. In addition, it adopts a data element that will serve as an other entity identifier (OEID), or an identifier for entities that are not health plans, health care providers, or individuals, but that need to be identified in standard transactions. This final rule also specifies the circumstances under which an organization covered health care provider must require certain noncovered individual health care providers who are prescribers to obtain and disclose a National Provider Identifier (NPI). Lastly, this final rule changes the compliance date for the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) for diagnosis coding, including the Official ICD-10-CM Guidelines for Coding and Reporting, and the International Classification of Diseases, 10th Revision, Procedure Coding System (ICD-10-PCS) for inpatient hospital procedure coding, including the Official ICD-10-PCS Guidelines for Coding and Reporting, from October 1, 2013 to October 1, 2014. PMID:22950146
Exploring Unique Roles for Psychologists
ERIC Educational Resources Information Center
Ahmed, Mohiuddin; Boisvert, Charles M.
2005-01-01
This paper presents comments on "Psychological Treatments" by D. H. Barlow. Barlow highlighted unique roles that psychologists can play in mental health service delivery by providing psychological treatments--treatments that psychologists would be uniquely qualified to design and deliver. In support of Barlow's position, the authors draw from…
ERIC Educational Resources Information Center
Shipman, Barbara A.
2013-01-01
This article analyzes four questions on the meaning of uniqueness that have contrasting answers in common language versus mathematical language. The investigations stem from a scenario in which students interpreted uniqueness according to a definition from standard English, that is, different from the mathematical meaning, in defining an injective…
CYP1B1: a unique gene with unique characteristics.
Faiq, Muneeb A; Dada, Rima; Sharma, Reetika; Saluja, Daman; Dada, Tanuj
2014-01-01
CYP1B1, a recently described dioxin inducible oxidoreductase, is a member of the cytochrome P450 superfamily involved in the metabolism of estradiol, retinol, benzo[a]pyrene, tamoxifen, melatonin, sterols etc. It plays important roles in numerous physiological processes and is expressed at mRNA level in many tissues and anatomical compartments. CYP1B1 has been implicated in scores of disorders. Analyses of the recent studies suggest that CYP1B1 can serve as a universal/ideal cancer marker and a candidate gene for predictive diagnosis. There is a plethora of literature available about certain aspects of CYP1B1 that have not been interpreted, discussed and philosophized upon. The present analysis examines CYP1B1 as a peculiar gene with certain distinctive characteristics like the uniqueness in its chromosomal location, gene structure and organization, involvement in developmentally important disorders, tissue-specific not only expression but also splicing, potential as a universal cancer marker due to its involvement in key aspects of cellular metabolism, use in diagnosis and predictive diagnosis of various diseases and the importance and function of CYP1B1 mRNA in addition to the regular translation. CYP1B1 is also very difficult to express in heterologous expression systems, which has hampered its functional studies. Here we review and analyze these exceptional and startling characteristics of CYP1B1 with inputs from our own experiences in order to get a better insight into its molecular biology in health and disease. This may help to further understand the etiopathomechanistic aspects of CYP1B1 mediated diseases paving way for better research strategies and improved clinical management. PMID:25658124
Food additives are substances that become part of a food product when they are added during the processing or making of that food. "Direct" food additives are often added during processing to: Add nutrients ...
Estimating the encounter rate variance in distance sampling
Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.
2009-01-01
The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
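A design-based encounter rate variance estimator of the kind examined here can be sketched in a few lines. The form below (often labelled "R2" in the distance-sampling literature) treats transect lines as a random sample; it is an illustration, not the authors' full comparison:

```python
def encounter_rate_var(n_per_line, l_per_line):
    """Design-based estimator of var(n/L), where n_i detections are made
    on transect line i of length l_i, L = sum(l_i), n = sum(n_i):
        var(n/L) = K / (L^2 (K-1)) * sum_i l_i^2 (n_i/l_i - n/L)^2."""
    K = len(n_per_line)
    L = sum(l_per_line)
    n = sum(n_per_line)
    rate = n / L
    s = sum(l ** 2 * (ni / l - rate) ** 2
            for ni, l in zip(n_per_line, l_per_line))
    return K * s / (L ** 2 * (K - 1))
```

Under a systematic design with strong spatial trends this kind of estimator is positively biased; the poststratification remedy mentioned above amounts to applying it within strata of neighbouring lines.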
A Note on Noncentrality Parameters for Contrast Tests in a One-Way Analysis of Variance
ERIC Educational Resources Information Center
Liu, Xiaofeng Steven
2010-01-01
The noncentrality parameter for a contrast test in a one-way analysis of variance is based on the dot product of 2 vectors whose geometric meaning in a Euclidian space offers mnemonic hints about its constituents. Additionally, the noncentrality parameters for a set of orthogonal contrasts sum up to the noncentrality parameter for the omnibus "F"…
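The dot-product structure described above can be made concrete. The following is a minimal sketch of the standard one-way ANOVA formula for a single contrast, not the article's geometric treatment:

```python
def contrast_noncentrality(means, c, n, sigma2):
    """Noncentrality parameter for one contrast in a one-way ANOVA:
        lambda = (sum_j c_j mu_j)^2 / (sigma^2 * sum_j c_j^2 / n_j),
    where means are the group means mu_j, c the contrast coefficients,
    n the group sizes, and sigma2 the common error variance."""
    psi = sum(cj * mj for cj, mj in zip(c, means))
    return psi ** 2 / (sigma2 * sum(cj ** 2 / nj for cj, nj in zip(c, n)))
```

Consistent with the note's point, for a balanced design the values for a complete set of orthogonal contrasts sum to the omnibus noncentrality n * sum_j (mu_j - mu_bar)^2 / sigma^2.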
ERIC Educational Resources Information Center
Nordstokke, David W.; Zumbo, Bruno D.; Cairns, Sharon L.; Saklofske, Donald H.
2011-01-01
Many assessment and evaluation studies use statistical hypothesis tests, such as the independent samples t test or analysis of variance, to test the equality of two or more means for gender, age groups, cultures or language group comparisons. In addition, some, but far fewer, studies compare variability across these same groups or research…
Spencer, Michael
1974-01-01
Food additives are discussed from the food technology point of view. The reasons for their use are summarized: (1) to protect food from chemical and microbiological attack; (2) to even out seasonal supplies; (3) to improve their eating quality; (4) to improve their nutritional value. The various types of food additives are considered, e.g. colours, flavours, emulsifiers, bread and flour additives, preservatives, and nutritional additives. The paper concludes with consideration of those circumstances in which the use of additives is (a) justified and (b) unjustified. PMID:4467857
Variance analysis. Part II, The use of computers.
Finkler, S A
1991-09-01
This is the second in a two-part series on variance analysis. In the first article (JONA, July/August 1991), the author discussed flexible budgeting, including the calculation of price, quantity, volume, and acuity variances. In this second article, the author focuses on the use of computers by nurse managers to aid in the process of calculating, understanding, and justifying variances. PMID:1919788
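The calculations a nurse manager would automate are simple enough to sketch. This hedged example uses textbook flexible-budget definitions (sign convention: positive = unfavorable), which may differ in detail from the formulation in the series:

```python
def flexible_budget_variances(actual_price, actual_qty, budget_price, budget_qty):
    """Textbook two-way decomposition of a flexible-budget cost variance:
    rate (price) variance, efficiency (quantity) variance, and their sum,
    the total variance actual cost - budgeted cost."""
    price_var = (actual_price - budget_price) * actual_qty
    qty_var = (actual_qty - budget_qty) * budget_price
    total_var = actual_price * actual_qty - budget_price * budget_qty
    return price_var, qty_var, total_var
```

By construction the price and quantity components sum exactly to the total variance, which is what lets a spreadsheet or program attribute an unfavorable result to rates versus usage.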
Estimates of variances due to direct and maternal effects for growth traits of Romanov sheep.
María, G A; Boldman, K G; Van Vleck, L D
1993-04-01
Records of growth traits of 2,086 Romanov lambs were used to estimate variance components for an animal model and genetic correlations between growth traits. Traits analyzed were birth weight (BWT), weaning weight (WW), 90-d weight (W90), and daily gain for the periods birth to weaning (DG1) and weaning to 90 d (DG2). Weaning was at approximately 40 d. Variance components were estimated using restricted maximum likelihood with an animal model including fixed effects for year × season, sex, rearing type, and litter size and random effects for the direct genetic effect of the animal (with relative variance h2), the maternal genetic effect (with relative variance m2), the permanent environmental effect (with relative variance c2), and random residual effect. Genetic correlations were estimated for a model with the same fixed effects and only additive genetic effects. Estimates of the variances of random effects, h2, m2, and c2, respectively, as a proportion of phenotypic variance were .04, .22, .10 (BWT); .34, .25, .0 (WW); .09, .01, .07 (W90); .26, .17, .02 (DG1); and .15, .01, .03 (DG2). Estimates of genetic correlations were .12 (BWT with WW); .24 (BWT with W90); .48 (WW with W90); .69 (DG1 with DG2); -.01 (BWT with DG1); .05 (BWT with DG2); .59 (WW with DG1); .47 (WW with DG2); .67 (W90 with DG1); and .98 (W90 with DG2). Results suggest that selection should be effective for WW, DG1, and DG2 but less effective for BWT and W90. An important maternal effect was observed for BWT, WW, and DG1. The estimates of genetic correlations showed no genetic antagonisms among the traits. PMID:8478286
Decomposing genomic variance using information from GWA, GWE and eQTL analysis.
Ehsani, A; Janss, L; Pomp, D; Sørensen, P
2016-04-01
A commonly used procedure in genome-wide association (GWA), genome-wide expression (GWE) and expression quantitative trait locus (eQTL) analyses is based on a bottom-up experimental approach that attempts to individually associate molecular variants with complex traits. Top-down modeling of the entire set of genomic data and partitioning of the overall variance into subcomponents may provide further insight into the genetic basis of complex traits. To test this approach, we performed a whole-genome variance components analysis and partitioned the genomic variance using information from GWA, GWE and eQTL analyses of growth-related traits in a mouse F2 population. We characterized the mouse trait genetic architecture by ordering single nucleotide polymorphisms (SNPs) based on their P-values and studying the areas under the curve (AUCs). The observed traits were found to have a genomic variance profile that differed significantly from that expected of a trait under an infinitesimal model. This was particularly true for body weight and body fat, for which the AUCs were much higher than that of glucose. In addition, SNPs with a high degree of trait-specific regulatory potential (SNPs associated with a subset of transcripts that significantly associated with a specific trait) explained a larger proportion of the genomic variance than did SNPs with high overall regulatory potential (SNPs associated with transcripts using traditional eQTL analysis). We introduced AUC measures of genomic variance profiles that can be used to quantify the relative importance of SNPs as well as the degree of deviation of a trait's inheritance from an infinitesimal model. The shape of the curve aids global understanding of traits: the steeper the left-hand side of the curve, the fewer the SNPs controlling most of the phenotypic variance. PMID:26678352
Lande, Russell; Porcher, Emmanuelle
2015-01-01
We analyze two models of the maintenance of quantitative genetic variance in a mixed-mating system of self-fertilization and outcrossing. In both models purely additive genetic variance is maintained by mutation and recombination under stabilizing selection on the phenotype of one or more quantitative characters. The Gaussian allele model (GAM) involves a finite number of unlinked loci in an infinitely large population, with a normal distribution of allelic effects at each locus within lineages selfed for τ consecutive generations since their last outcross. The infinitesimal model for partial selfing (IMS) involves an infinite number of loci in a large but finite population, with a normal distribution of breeding values in lineages of selfing age τ. In both models a stable equilibrium genetic variance exists, the outcrossed equilibrium, nearly equal to that under random mating, for all selfing rates, r, up to a critical value, r̂, the purging threshold, which approximately equals the mean fitness under random mating relative to that under complete selfing. In the GAM a second stable equilibrium, the purged equilibrium, exists for any positive selfing rate, with genetic variance less than or equal to that under pure selfing; as r increases above r̂ the outcrossed equilibrium collapses sharply to the purged equilibrium genetic variance. In the IMS a single stable equilibrium genetic variance exists at each selfing rate; as r increases above r̂ the equilibrium genetic variance drops sharply and then declines gradually to that maintained under complete selfing. The implications for evolution of selfing rates, and for adaptive evolution and persistence of predominantly selfing species, provide a theoretical basis for the classical view of Stebbins that predominant selfing constitutes an “evolutionary dead end.” PMID:25969460
Network Structure and Biased Variance Estimation in Respondent Driven Sampling
Verdery, Ashton M.; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J.
2015-01-01
This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927
Multiperiod Mean-Variance Portfolio Optimization via Market Cloning
Ankirchner, Stefan; Dermoune, Azzouz
2011-08-15
The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean-variance problem.
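The clone idea, replacing unknown moments by empirical counterparts over independent copies of the market, can be illustrated in a single-period setting. This is a hedged sketch with made-up numbers, not the paper's dynamic-programming construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# M independent market "clones": i.i.d. draws of one-period asset returns.
# All distributional parameters below are illustrative assumptions.
M, d = 5000, 3
mu_true = np.array([0.05, 0.03, 0.01])
cov_true = np.diag([0.04, 0.02, 0.01])
clones = rng.multivariate_normal(mu_true, cov_true, size=M)

# The unknown portfolio mean and variance are replaced by empirical counterparts
mu_hat = clones.mean(axis=0)
cov_hat = np.cov(clones, rowvar=False)

# Maximize  w . mu_hat - gamma * (w . Cov_hat . w):  closed-form optimum
gamma = 2.0
w = np.linalg.solve(2 * gamma * cov_hat, mu_hat)
```

As the number of clones M grows, the empirical moments converge to the true ones and the weights converge to the true mean-variance optimum, which mirrors the limiting argument in the abstract.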
Broyles, R W; Lay, C M
1982-12-01
This paper examines an unfavorable cost variance in an institution which employs multiple resources to provide stay specific and ancillary services to patients presenting multiple diagnoses. It partitions the difference between actual and expected costs into components that are the responsibility of an identifiable individual or group of individuals. The analysis demonstrates that the components comprising an unfavorable cost variance are attributable to factor prices, the use of real resources, the mix of patients, and the composition of care provided by the institution. In addition, the interactive effects of these factors are also identified. PMID:7183731
A NEW VARIANCE ESTIMATOR FOR PARAMETERS OF SEMI-PARAMETRIC GENERALIZED ADDITIVE MODELS. (R829213)
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
On discrete stochastic processes with long-lasting time dependence in the variance
NASA Astrophysics Data System (ADS)
Queirós, S. M. D.
2008-11-01
In this manuscript, we analytically and numerically study the statistical properties of a heteroskedastic process based on the celebrated ARCH generator of random variables, whose variance is defined by a memory of qm-exponential form (e_{qm=1}^x = e^x). Specifically, we inspect the autocorrelation function of the squared random variables as well as the kurtosis. In addition, by numerical procedures, we infer the stationary probability density function of both the heteroskedastic random variables and the variance, the multiscaling properties, the first-passage time distribution, and the dependence degree. Finally, we introduce an asymmetric-variance version of the model that enables us to reproduce the so-called leverage effect in financial markets.
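The ARCH generator that this process builds on can be simulated in a few lines. The sketch below is the plain ARCH(1) case with Gaussian innovations; the paper's qm-exponential memory kernel is not reproduced here:

```python
import random

def simulate_arch1(a, b, T, seed=1):
    """Plain ARCH(1): sigma_t^2 = a + b * x_{t-1}^2,  x_t = sigma_t * eps_t,
    with i.i.d. standard normal innovations eps_t. Stationary variance is
    a / (1 - b) for 0 <= b < 1."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(T):
        sigma2 = a + b * x * x
        x = sigma2 ** 0.5 * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

xs = simulate_arch1(a=1.0, b=0.4, T=50000)
```

Even this simplest member of the family shows the properties the abstract studies: the raw series is uncorrelated, but the squared series has positive autocorrelation (volatility clustering) and the marginal distribution has excess kurtosis.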
NASA Technical Reports Server (NTRS)
Longman, Richard W.; Bergmann, Martin; Juang, Jer-Nan
1988-01-01
For the ERA system identification algorithm, perturbation methods are used to develop expressions for the variance and bias of the identified modal parameters. Based on the statistics of the measurement noise, the variance results serve as confidence criteria by indicating how likely the true parameters are to lie within any chosen interval about their identified values. This replaces the use of expensive and time-consuming Monte Carlo computer runs to obtain similar information. The bias estimates help guide the ERA user in choosing which data points to use and how much data to use to obtain the best results, trading off bias against scatter. Also, when the uncertainty in the bias is sufficiently small, the bias information can be used to correct the ERA results. In addition, expressions for the variance and bias of the singular values serve as tools to help the ERA user decide the proper modal order.
Bogaerts, Louisa; Siegelman, Noam; Frost, Ram
2016-08-01
What determines individuals' efficacy in detecting regularities in visual statistical learning? Our theoretical starting point assumes that the variance in performance of statistical learning (SL) can be split into the variance related to efficiency in encoding representations within a modality and the variance related to the relative computational efficiency of detecting the distributional properties of the encoded representations. Using a novel methodology, we dissociated encoding from higher-order learning factors, by independently manipulating exposure duration and transitional probabilities in a stream of visual shapes. Our results show that the encoding of shapes and the retrieving of their transitional probabilities are not independent and additive processes, but interact to jointly determine SL performance. The theoretical implications of these findings for a mechanistic explanation of SL are discussed. PMID:26743060
Image denoising via Bayesian estimation of local variance with Maxwell density prior
NASA Astrophysics Data System (ADS)
Kittisuwan, Pichid
2015-10-01
The need for efficient image denoising methods has grown with the massive production of digital images and movies of all kinds. The distortion of images by additive white Gaussian noise (AWGN) is common during its processing and transmission. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. Indeed, one of the cruxes of the Bayesian image denoising algorithms is to estimate the local variance of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate local observed variance with Maxwell density prior for local observed variance and Gaussian distribution for noisy wavelet coefficients. Evidently, our selection of prior distribution is motivated by analytical and computational tractability. The experimental results show that the proposed method yields good denoising results.
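The MAP step can be illustrated schematically. The paper derives an analytic estimator; the brute-force grid search below only shows the ingredients (Maxwell prior on the local signal standard deviation, zero-mean Gaussian likelihood for the noisy coefficients) and is an assumption-laden stand-in:

```python
import math

def map_local_std(block, sigma_n, alpha, grid=None):
    """Grid-search MAP estimate of the local signal std sigma, assuming a
    Maxwell prior p(sigma) ~ sigma^2 * exp(-sigma^2 / (2 alpha^2)) and a
    Gaussian likelihood y_i ~ N(0, sigma^2 + sigma_n^2) for the noisy
    coefficients in `block` (sigma_n = known noise std)."""
    if grid is None:
        grid = [0.01 * k for k in range(1, 1001)]  # 0.01 .. 10.0
    def log_post(s):
        v = s * s + sigma_n * sigma_n
        ll = sum(-0.5 * math.log(2 * math.pi * v) - y * y / (2 * v)
                 for y in block)
        lp = 2 * math.log(s) - s * s / (2 * alpha * alpha)
        return ll + lp
    return max(grid, key=log_post)
```

The estimated local standard deviation then parameterizes the shrinkage rule applied to the noisy coefficients; large-magnitude blocks receive larger variance estimates and hence less shrinkage.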
Uniqueness of the momentum map
NASA Astrophysics Data System (ADS)
Esposito, Chiara; Nest, Ryszard
2016-08-01
We give a detailed discussion of the existence and uniqueness of the momentum map associated to Poisson Lie actions, which was defined by Lu. We introduce a weaker notion of momentum map, called the infinitesimal momentum map, which is defined on one-forms, and we analyze its integrability to Lu's momentum map. Finally, the uniqueness of Lu's momentum map is studied by describing, explicitly, the tangent space to the space of momentum maps.
The Placenta Harbors a Unique Microbiome
Aagaard, Kjersti; Ma, Jun; Antony, Kathleen M.; Ganu, Radhika; Petrosino, Joseph; Versalovic, James
2016-01-01
Humans and their microbiomes have coevolved as a physiologic community composed of distinct body site niches with metabolic and antigenic diversity. The placental microbiome has not been robustly interrogated, despite recent demonstrations of intracellular bacteria with diverse metabolic and immune regulatory functions. A population-based cohort of placental specimens collected under sterile conditions from 320 subjects with extensive clinical data was established for comparative 16S ribosomal DNA–based and whole-genome shotgun (WGS) metagenomic studies. Identified taxa and their gene carriage patterns were compared to other human body site niches, including the oral, skin, airway (nasal), vaginal, and gut microbiomes from nonpregnant controls. We characterized a unique placental microbiome niche, composed of nonpathogenic commensal microbiota from the Firmicutes, Tenericutes, Proteobacteria, Bacteroidetes, and Fusobacteria phyla. In aggregate, the placental microbiome profiles were most akin (Bray-Curtis dissimilarity <0.3) to the human oral microbiome. 16S-based operational taxonomic unit analyses revealed associations of the placental microbiome with a remote history of antenatal infection (permutational multivariate analysis of variance, P = 0.006), such as urinary tract infection in the first trimester, as well as with preterm birth <37 weeks (P = 0.001). PMID:24848255
ERIC Educational Resources Information Center
Castellanos-Ryan, Natalie; Conrod, Patricia J.
2011-01-01
Externalising behaviours such as substance misuse (SM) and conduct disorder (CD) symptoms frequently co-occur in adolescence. While disinhibited personality traits have been consistently linked to externalising behaviours, there is evidence that these traits may relate differentially to SM and CD. The current study aimed to assess whether this was the…
Uniqueness of place: uniqueness of models. The FLEX modelling approach
NASA Astrophysics Data System (ADS)
Fenicia, F.; Savenije, H. H. G.; Wrede, S.; Schoups, G.; Pfister, L.
2009-04-01
The current practice in hydrological modelling is to make use of model structures that are fixed and defined a priori. However, for a model to reflect uniqueness of place while maintaining parsimony, its architecture must be flexible. We have developed a new approach for the development and testing of hydrological models, named the FLEX approach. This approach allows the formulation of alternative model structures that vary in configuration and complexity, and uses an objective method for testing and comparing model performance. We have tested this approach on three headwater catchments in Luxembourg with marked differences in hydrological response, for which we generated 15 alternative model structures. Each of the three catchments is best represented by a different model architecture. Our results clearly show that uniqueness of place necessarily leads to uniqueness of models.
7 CFR 718.105 - Tolerances, variances, and adjustments.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 7 2014-01-01 2014-01-01 false Tolerances, variances, and adjustments. 718.105 Section 718.105 Agriculture Regulations of the Department of Agriculture (Continued) FARM SERVICE AGENCY... APPLICABLE TO MULTIPLE PROGRAMS Determination of Acreage and Compliance § 718.105 Tolerances, variances,...
7 CFR 718.105 - Tolerances, variances, and adjustments.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 7 2012-01-01 2012-01-01 false Tolerances, variances, and adjustments. 718.105 Section 718.105 Agriculture Regulations of the Department of Agriculture (Continued) FARM SERVICE AGENCY... APPLICABLE TO MULTIPLE PROGRAMS Determination of Acreage and Compliance § 718.105 Tolerances, variances,...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 36 Parks, Forests, and Public Property 1 2013-07-01 2013-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 36 Parks, Forests, and Public Property 1 2014-07-01 2014-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 36 Parks, Forests, and Public Property 1 2012-07-01 2012-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
Variances and Covariances of Kendall's Tau and Their Estimation.
ERIC Educational Resources Information Center
Cliff, Norman; Charlin, Ventura
1991-01-01
Variance formulas of H. E. Daniels and M. G. Kendall (1947) are generalized to allow for the presence of ties and variance of the sample tau correlation. Applications of these generalized formulas are discussed and illustrated using data from a 1965 study of contraceptive use in 15 developing countries. (SLD)
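The special case these generalizations start from is compact enough to state in code. Below is a hedged sketch of the no-ties null result of Daniels and Kendall, alongside tau itself; the tie-aware generalizations of the article are longer and not reproduced:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a (no tie correction) for equal-length sequences:
    (concordant - discordant pairs) / total pairs."""
    n = len(x)
    s = sum((1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else
             -1 if (x[i] - x[j]) * (y[i] - y[j]) < 0 else 0)
            for i, j in combinations(range(n), 2))
    return 2 * s / (n * (n - 1))

def tau_null_variance(n):
    """Variance of tau under independence with no ties
    (Daniels & Kendall, 1947): 2(2n + 5) / (9 n (n - 1))."""
    return 2 * (2 * n + 5) / (9 * n * (n - 1))
```

Dividing an observed tau by the square root of this null variance gives the familiar normal-approximation test of independence; the generalized formulas extend the variance to tied data and to the sampling variance of tau itself.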
Characterizing the evolution of genetic variance using genetic covariance tensors.
Hine, Emma; Chenoweth, Stephen F; Rundle, Howard D; Blows, Mark W
2009-06-12
Determining how genetic variance changes under selection in natural populations has proved to be a very resilient problem in evolutionary genetics. In the same way that understanding the availability of genetic variance within populations requires the simultaneous consideration of genetic variance in sets of functionally related traits, determining how genetic variance changes under selection in natural populations will require ascertaining how genetic variance-covariance (G) matrices evolve. Here, we develop a geometric framework using higher-order tensors, which enables the empirical characterization of how G matrices have diverged among populations. We then show how divergence among populations in genetic covariance structure can be associated with divergence in selection acting on those traits using key equations from evolutionary theory. Using estimates of G matrices of eight male sexually selected traits from nine geographical populations of Drosophila serrata, we show that much of the divergence in genetic variance occurred in a single trait combination, a conclusion that could not have been reached by examining variation among the individual elements of the nine G matrices. Divergence in G was primarily in the direction of the major axes of genetic variance within populations, suggesting that genetic drift may be a major cause of divergence in genetic variance among these populations. PMID:19414471
40 CFR 52.1390 - Missoula variance provision.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 4 2014-07-01 2014-07-01 false Missoula variance provision. 52.1390 Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) Montana § 52.1390 Missoula variance provision. The Missoula City-County...
29 CFR 1904.38 - Variances from the recordkeeping rule.
Code of Federal Regulations, 2010 CFR
2010-07-01
... process your variance petition. (i) The Assistant Secretary will offer your employees and their authorized... the facts or conduct that may warrant revocation of your variance; and (ii) Provide you, your employees, and authorized employee representatives with an opportunity to participate in the...
Productive Failure in Learning the Concept of Variance
ERIC Educational Resources Information Center
Kapur, Manu
2012-01-01
In a study with ninth-grade mathematics students on learning the concept of variance, students experienced either direct instruction (DI) or productive failure (PF), wherein they were first asked to generate a quantitative index for variance without any guidance before receiving DI on the concept. Whereas DI students relied only on the canonical…
10 CFR 52.93 - Exemptions and variances.
Code of Federal Regulations, 2010 CFR
2010-01-01
... CFR 52.7, and that the special circumstances outweigh any decrease in safety that may result from the... 10 Energy 2 2010-01-01 2010-01-01 false Exemptions and variances. 52.93 Section 52.93 Energy... Combined Licenses § 52.93 Exemptions and variances. (a) Applicants for a combined license under...
Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances
ERIC Educational Resources Information Center
Jan, Show-Li; Shieh, Gwowen
2014-01-01
The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…
A Study of Variance Estimation Methods. Working Paper Series.
ERIC Educational Resources Information Center
Zhang, Fan; Weng, Stanley; Salvucci, Sameena; Hu, Ming-xiu
This working paper contains reports of five studies of variance estimation methods. The first, An Empirical Study of Poststratified Estimator, by Fan Zhang uses data from the National Household Education Survey to illustrate use of poststratified estimation. The second paper, BRR Variance Estimation Using BPLX Hadamard Procedure, by Stanley Weng…
21 CFR 821.2 - Exemptions and variances.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Exemptions and variances. 821.2 Section 821.2 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE TRACKING REQUIREMENTS General Provisions § 821.2 Exemptions and variances. (a) A manufacturer, importer, or distributor...
40 CFR 142.40 - Requirements for a variance.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Section 142.40 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator... one or more variances to any public water system within a State that does not have primary...
40 CFR 142.43 - Disposition of a variance request.
Code of Federal Regulations, 2010 CFR
2010-07-01
....43 Section 142.43 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the... variance may be terminated at any time upon a finding that the nature of the raw water source is such...
An efficient method to evaluate energy variances for extrapolation methods
NASA Astrophysics Data System (ADS)
Puddu, G.
2012-08-01
The energy variance extrapolation method consists of relating the approximate energies in many-body calculations to the corresponding energy variances and inferring eigenvalues by extrapolating to zero variance. The method needs a fast evaluation of the energy variances. For many-body methods that expand the nuclear wavefunctions in terms of deformed Slater determinants, the best available method for the evaluation of energy variances scales with the sixth power of the number of single-particle states. We propose a new method which depends on the number of single-particle orbits and the number of particles rather than the number of single-particle states. We discuss as an example the case of 4He using the chiral N3LO interaction in a basis consisting of up to 184 single-particle states.
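The zero-variance extrapolation described above can be sketched in a few lines. The energies and variances below are invented illustrative numbers, and a simple linear (first-order) fit is assumed rather than the authors' specific extrapolation form.

```python
import numpy as np

# Hypothetical data: approximate energies (MeV) and their energy variances
# from a sequence of increasingly refined many-body wavefunctions.
variances = np.array([4.0, 2.5, 1.2, 0.6, 0.3])
energies = np.array([-24.0, -25.5, -26.8, -27.4, -27.7])

# First-order extrapolation: fit E ~ a + b*sigma^2 and read off E at zero variance.
b, a = np.polyfit(variances, energies, 1)
e_extrapolated = a  # estimate of the exact eigenvalue

print(f"extrapolated energy at zero variance: {e_extrapolated:.2f} MeV")
```

In practice a quadratic term in the variance is often included; the fit degree is the only thing that changes.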
NASA Astrophysics Data System (ADS)
García-Pareja, S.; Vilches, M.; Lallena, A. M.
2007-09-01
The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators used in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and permits, in addition, investigation of the "hot" regions of the accelerator, information that is essential for developing a source model for this therapy tool.
Utility functions predict variance and skewness risk preferences in monkeys
Genest, Wilfried; Stauffer, William R.; Schultz, Wolfram
2016-01-01
Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals’ preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively skewed gambles over symmetrical and negatively skewed ones in a strongly transitive fashion. Thus, the utility functions predicted the animals’ preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys’ choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences. PMID:27402743
Variance After-Effects Distort Risk Perception in Humans.
Payzan-LeNestour, Elise; Balleine, Bernard W; Berrada, Tony; Pearson, Joel
2016-06-01
In many contexts, decision-making requires an accurate representation of outcome variance, otherwise known as "risk" in economics. Conventional economic theory assumes this representation to be perfect, thereby focusing on risk preferences rather than risk perception per se [1-3] (but see [4]). However, humans often misrepresent their physical environment. Perhaps the most striking of such misrepresentations are the many well-known sensory after-effects, which most commonly involve visual properties, such as color, contrast, size, and motion. For example, viewing downward motion of a waterfall induces the anomalous biased experience of upward motion during subsequent viewing of static rocks to the side [5]. Given that after-effects are pervasive, occurring across a wide range of time horizons [6] and stimulus dimensions (including properties such as face perception [7, 8], gender [9], and numerosity [10]), and that some evidence exists that neurons show adaptation to variance in the sole visual feature of motion [11], we were interested in assessing whether after-effects distort variance perception in humans. We found that perceived variance is decreased after prolonged exposure to high variance and increased after exposure to low variance within a number of different visual representations of variance. We demonstrate that these after-effects occur across very different visual representations of variance, suggesting that these effects are not sensory, but operate at a high (cognitive) level of information processing. These results suggest, therefore, that variance constitutes an independent cognitive property and that prolonged exposure to extreme variance distorts risk perception, a fundamental challenge for economic theory and practice. PMID:27161500
Berglund, F
1978-01-01
The use of additives to food fulfils many purposes, as shown by the index issued by the Codex Committee on Food Additives: Acids, bases and salts; Preservatives; Antioxidants and antioxidant synergists; Anticaking agents; Colours; Emulsifiers; Thickening agents; Flour-treatment agents; Extraction solvents; Carrier solvents; Flavours (synthetic); Flavour enhancers; Non-nutritive sweeteners; Processing aids; Enzyme preparations. Many additives occur naturally in foods, but this does not exclude toxicity at higher levels. Some food additives are nutrients, or even essential nutrients, e.g. NaCl. Examples are known of food additives causing toxicity in man even when used according to regulations, e.g. cobalt in beer. In other instances, poisoning has been due to carry-over, e.g. by nitrate in cheese whey when used as artificial feed for infants. Poisonings also occur as the result of the permitted substance being added at too high levels, by accident or carelessness, e.g. nitrite in fish. Finally, there are examples of hypersensitivity to food additives, e.g. to tartrazine and other food colours. The toxicological evaluation, based on animal feeding studies, may be complicated by impurities, e.g. orthotoluene-sulfonamide in saccharin; by transformation or disappearance of the additive in food processing or storage, e.g. bisulfite in raisins; by reaction products with food constituents, e.g. formation of ethylurethane from diethyl pyrocarbonate; and by metabolic transformation products, e.g. formation in the gut of cyclohexylamine from cyclamate. Metabolic end products may differ in experimental animals and in man: guanylic acid and inosinic acid are metabolized to allantoin in the rat but to uric acid in man. The magnitude of the safety margin in man of the Acceptable Daily Intake (ADI) is not identical to the "safety factor" used when calculating the ADI. The symptoms of Chinese Restaurant Syndrome, although not hazardous, furthermore illustrate that the whole ADI
Code of Federal Regulations, 2014 CFR
2014-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...
The liberal illusion of uniqueness.
Stern, Chadly; West, Tessa V; Schmitt, Peter G
2014-01-01
In two studies, we demonstrated that liberals underestimate their similarity to other liberals (i.e., display truly false uniqueness), whereas moderates and conservatives overestimate their similarity to other moderates and conservatives (i.e., display truly false consensus; Studies 1 and 2). We further demonstrated that a fundamental difference between liberals and conservatives in the motivation to feel unique explains this ideological distinction in the accuracy of estimating similarity (Study 2). Implications of the accuracy of consensus estimates for mobilizing liberal and conservative political movements are discussed. PMID:24247730
Meta-analysis of ratios of sample variances.
Prendergast, Luke A; Staudte, Robert G
2016-05-20
When conducting a meta-analysis of standardized mean differences (SMDs), it is common to use Cohen's d, or its variants, that require equal variances in the two arms of each study. While interpretation of these SMDs is simple, this alone should not be used as a justification for assuming equal variances. Until now, researchers have either used an F-test for each individual study or perhaps even conveniently ignored such tools altogether. In this paper, we propose a meta-analysis of ratios of sample variances to assess whether the equal-variances assumption is justified prior to a meta-analysis of SMDs. Quantile-quantile plots, an omnibus test for equal variances or an overall meta-estimate of the ratio of variances can all be used to formally justify the use of less common methods when evidence of unequal variances is found. The methods in this paper are simple to implement, and the validity of the approaches is reinforced by simulation studies and an application to a real data set. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27062644
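A minimal sketch of the idea of meta-analyzing variance ratios: per-study log ratios of sample variances are averaged and back-transformed. The two-arm data are simulated, and the simple unweighted average stands in for the paper's weighting and omnibus-testing machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw data for three two-arm studies (arm b is generated with
# a larger standard deviation, i.e., the equal-variances assumption fails).
studies = [(rng.normal(0, 1.0, 40), rng.normal(0, 1.5, 35)),
           (rng.normal(0, 1.0, 60), rng.normal(0, 1.4, 55)),
           (rng.normal(0, 1.0, 25), rng.normal(0, 1.6, 30))]

# Per-study log ratio of sample variances (ddof=1 for unbiased variances).
log_ratios = [np.log(np.var(a, ddof=1) / np.var(b, ddof=1)) for a, b in studies]

# Crude meta-estimate of the common variance ratio: back-transformed mean log ratio.
meta_ratio = np.exp(np.mean(log_ratios))
print(f"meta-estimate of variance ratio: {meta_ratio:.2f}")
```

A meta-ratio well below 1 here would flag unequal variances, motivating a separate-variances SMD rather than Cohen's d.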
A note on preliminary tests of equality of variances.
Zimmerman, Donald W
2004-05-01
Preliminary tests of equality of variances used before a test of location are no longer widely recommended by statisticians, although they persist in some textbooks and software packages. The present study extends the findings of previous studies and provides further reasons for discontinuing the use of preliminary tests. The study found Type I error rates of a two-stage procedure, consisting of a preliminary Levene test on samples of different sizes with unequal variances, followed by either a Student pooled-variances t test or a Welch separate-variances t test. Simulations disclosed that the two-stage procedure fails to protect the significance level and usually makes the situation worse. Earlier studies have shown that preliminary tests often adversely affect the size of the test, and also that the Welch test is superior to the t test when variances are unequal. The present simulations reveal that changes in Type I error rates are greater when sample sizes are smaller, when the difference in variances is slight rather than extreme, and when the significance level is more stringent. Furthermore, the validity of the Welch test deteriorates if it is used only on those occasions where a preliminary test indicates it is needed. Optimum protection is assured by using a separate-variances test unconditionally whenever sample sizes are unequal. PMID:15171807
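The Welch separate-variances test that the study recommends using unconditionally with unequal sample sizes can be sketched as follows (textbook formulas; the sample values are invented):

```python
import math

def welch_t(x, y):
    """Welch separate-variances t statistic and Welch-Satterthwaite df.

    A textbook sketch of the separate-variances test recommended when
    sample sizes are unequal; it does not pool the two variances.
    """
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)   # unbiased variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    se2 = vx / nx + vy / ny                          # separate-variances SE^2
    t = (mx - my) / math.sqrt(se2)
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

t, df = welch_t([5.1, 4.9, 5.3, 5.0, 5.2],
                [4.0, 4.4, 3.8, 4.6, 4.2, 4.1, 3.9, 4.3])
print(f"t = {t:.2f}, df = {df:.1f}")
```

Note that the degrees of freedom are estimated from the data (Welch-Satterthwaite) rather than fixed at nx + ny - 2 as in the pooled test.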
Variance Estimation for Myocardial Blood Flow by Dynamic PET.
Moody, Jonathan B; Murthy, Venkatesh L; Lee, Benjamin C; Corbett, James R; Ficaro, Edward P
2015-11-01
The estimation of myocardial blood flow (MBF) by (13)N-ammonia or (82)Rb dynamic PET typically relies on an empirically determined generalized Renkin-Crone equation to relate the kinetic parameter K1 to MBF. Because the Renkin-Crone equation defines MBF as an implicit function of K1, the MBF variance cannot be determined using standard error propagation techniques. To overcome this limitation, we derived novel analytical approximations that provide first- and second-order estimates of MBF variance in terms of the mean and variance of K1 and the Renkin-Crone parameters. The accuracy of the analytical expressions was validated by comparison with Monte Carlo simulations, and MBF variance was evaluated in clinical (82)Rb dynamic PET scans. For both (82)Rb and (13)N-ammonia, good agreement was observed between both (first- and second-order) analytical variance expressions and Monte Carlo simulations, with moderately better agreement for second-order estimates. The contribution of the Renkin-Crone relation to overall MBF uncertainty was found to be as high as 68% for (82)Rb and 35% for (13)N-ammonia. For clinical (82)Rb PET data, the conventional practice of neglecting the statistical uncertainty in the Renkin-Crone parameters resulted in underestimation of the coefficient of variation of global MBF and coronary flow reserve by 14-49%. Knowledge of MBF variance is essential for assessing the precision and reliability of MBF estimates. The form and statistical uncertainty in the empirical Renkin-Crone relation can make substantial contributions to the variance of MBF. The novel analytical variance expressions derived in this work enable direct estimation of MBF variance which includes this previously neglected contribution. PMID:25974932
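The first-order (delta-method) idea can be sketched generically: invert an assumed Renkin-Crone-type relation numerically and propagate var(K1) through the derivative. The parameter values and functional form below are illustrative placeholders, not the tracer-specific values from the paper.

```python
import math

# Illustrative generalized Renkin-Crone form: K1 = MBF * (1 - a*exp(-b/MBF)).
a, b = 0.77, 0.63

def k1_of_mbf(mbf):
    return mbf * (1.0 - a * math.exp(-b / mbf))

def mbf_of_k1(k1, lo=1e-3, hi=10.0, tol=1e-10):
    # Invert the implicit relation by bisection (K1 is increasing in MBF here).
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if k1_of_mbf(mid) < k1:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def mbf_variance(k1_mean, k1_var, h=1e-5):
    # First-order delta method: var(MBF) ~ (dMBF/dK1)^2 * var(K1),
    # with the derivative taken numerically through the inverted relation.
    mbf = mbf_of_k1(k1_mean)
    dmbf_dk1 = (mbf_of_k1(k1_mean + h) - mbf_of_k1(k1_mean - h)) / (2 * h)
    return mbf, dmbf_dk1 ** 2 * k1_var

mbf, var_mbf = mbf_variance(k1_mean=0.6, k1_var=0.01)
print(f"MBF = {mbf:.3f}, var(MBF) = {var_mbf:.4f}")
```

The paper's point is that this propagation should also include the statistical uncertainty in a and b themselves, which the simple sketch above neglects.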
Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation
NASA Technical Reports Server (NTRS)
Wu, Dong L.; Eckermann, Stephen D.
2008-01-01
The gravity wave (GW)-resolving capabilities of 118-GHz saturated thermal radiances acquired throughout the stratosphere by the Microwave Limb Sounder (MLS) on the Aura satellite are investigated and initial results presented. Because the saturated (optically thick) radiances resolve GW perturbations from a given altitude at different horizontal locations, variances are evaluated at 12 pressure altitudes between 21 and 51 km using the 40 saturated radiances found at the bottom of each limb scan. Forward modeling simulations show that these variances are controlled mostly by GWs with vertical wavelengths λz ≳ 5 km and horizontal along-track wavelengths λy ~ 100-200 km. The tilted cigar-shaped three-dimensional weighting functions yield highly selective responses to GWs of high intrinsic frequency that propagate toward the instrument. The latter property is used to infer the net meridional component of GW propagation by differencing the variances acquired from ascending (A) and descending (D) orbits. Because of improved vertical resolution and sensitivity, Aura MLS GW variances are 5-8 times larger than those from the Upper Atmosphere Research Satellite (UARS) MLS. Like UARS MLS variances, monthly-mean Aura MLS variances in January and July 2005 are enhanced when local background wind speeds are large, due largely to GW visibility effects. Zonal asymmetries in variance maps reveal enhanced GW activity at high latitudes due to forcing by flow over major mountain ranges and at tropical and subtropical latitudes due to enhanced deep convective generation as inferred from contemporaneous MLS cloud-ice data. At 21-28-km altitude (heights not measured by the UARS MLS), GW variance in the tropics is systematically enhanced and shows clear variations with the phase of the quasi-biennial oscillation, in general agreement with GW temperature variances derived from radiosonde, rocketsonde, and limb-scan vertical profiles.
Speckle-scale focusing in the diffusive regime with time reversal of variance-encoded light (TROVE)
NASA Astrophysics Data System (ADS)
Judkewitz, Benjamin; Wang, Ying Min; Horstmeyer, Roarke; Mathy, Alexandre; Yang, Changhuei
2013-04-01
Focusing of light in the diffusive regime inside scattering media has long been considered impossible. Recently, this limitation has been overcome with time reversal of ultrasound-encoded light (TRUE), but the resolution of this approach is fundamentally limited by the large number of optical modes within the ultrasound focus. Here, we introduce a new approach, time reversal of variance-encoded light (TROVE), which demixes these spatial modes by variance encoding to break the resolution barrier imposed by the ultrasound. By encoding individual spatial modes inside the scattering sample with unique variances, we effectively uncouple the system resolution from the size of the ultrasound focus. This enables us to demonstrate optical focusing and imaging with diffuse light at an unprecedented, speckle-scale lateral resolution of ~5 µm.
Uniquely identifying wheat plant structures
Technology Transfer Automated Retrieval System (TEKTRAN)
Uniquely naming wheat (Triticum aestivum L. em Thell) plant parts is useful for communicating plant development research and the effects of environmental stresses on normal wheat development. Over the past 30+ years, several naming systems have been proposed for wheat shoot, leaf, spike, spikelet, ...
Identity Foreclosure: A Unique Challenge
ERIC Educational Resources Information Center
Petitpas, Al
1978-01-01
Foreclosure occurs when individuals prematurely make a firm commitment to an occupation or an ideology. If the pressure of having an occupational identity can be eased, then it may be possible to establish an environment in which foreclosed students could move toward the consolidation of their unique identities. (Author)
COPD Unique to Older Adults
Aging & Health A to Z
This section provides information ... not a weakness or a normal part of aging. Most people feel better with ... help you can, so that your COPD does not prevent you from living your life ...
Time Variability of Quasars: the Structure Function Variance
NASA Astrophysics Data System (ADS)
MacLeod, C.; Ivezić, Ž.; de Vries, W.; Sesar, B.; Becker, A.
2008-12-01
Significant progress in the description of quasar variability has been recently made by employing SDSS and POSS data. Common to most studies is a fundamental assumption that photometric observations at two epochs for a large number of quasars will reveal the same statistical properties as well-sampled light curves for individual objects. We critically test this assumption using light curves for a sample of ~2,600 spectroscopically confirmed quasars observed about 50 times on average over 8 years by the SDSS stripe 82 survey. We find that the dependence of the mean structure function computed for individual quasars on luminosity, rest-frame wavelength and time is qualitatively and quantitatively similar to the behavior of the structure function derived from two-epoch observations of a much larger sample. We also reproduce the result that the variability properties of radio and X-ray selected subsamples are different. However, the scatter of the variability structure function for fixed values of luminosity, rest-frame wavelength and time is similar to the scatter induced by the variance of these quantities in the analyzed sample. Hence, our results suggest that, although the statistical properties of quasar variability inferred using two-epoch data capture some underlying physics, there is significant additional information that can be extracted from well-sampled light curves for individual objects.
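The structure function underlying such analyses is simple to compute from a light curve. A sketch with a simulated (pure-noise) light curve and invented lag bins; real analyses would use measured SDSS epochs and magnitudes:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical quasar light curve: magnitudes at irregular epochs (days).
t = np.sort(rng.uniform(0, 2900, 50))
mag = 19.0 + 0.2 * rng.standard_normal(50)

# First-order structure function: RMS magnitude difference vs. time lag.
dt = np.abs(t[:, None] - t[None, :])
dm = (mag[:, None] - mag[None, :]) ** 2
iu = np.triu_indices(50, k=1)                # count each epoch pair once
lags, sqdiff = dt[iu], dm[iu]

bins = np.array([0, 100, 300, 1000, 3000])   # lag bins in days
sf = np.array([np.sqrt(sqdiff[(lags >= lo) & (lags < hi)].mean())
               for lo, hi in zip(bins[:-1], bins[1:])])
print("SF per lag bin (mag):", np.round(sf, 3))
```

For uncorrelated noise, the structure function is flat at ~sqrt(2) times the per-epoch scatter; a rising SF with lag is the quasar-variability signature studied in the paper.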
Mesoscale Gravity Wave Variances from AMSU-A Radiances
NASA Technical Reports Server (NTRS)
Wu, Dong L.
2004-01-01
A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with the minimum detectable value as small as 0.1 K². Preliminary analyses with AMSU-A data show GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.
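The core of such a variance analysis, subtracting a known instrument noise variance from the detrended radiance variance, can be sketched with simulated data (the signal amplitude and noise level below are invented):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated radiance track: gravity-wave-like signal plus instrument noise.
n = 30
signal = 0.6 * np.sin(2 * np.pi * np.arange(n) / 10.0)   # ~0.18 K^2 variance
noise_sigma = 0.25                                        # known channel noise (K)
radiance = signal + noise_sigma * rng.standard_normal(n)

# Remove a linear background, then subtract the known noise variance,
# mirroring the idea of isolating GW-induced fluctuations.
trend = np.polyval(np.polyfit(np.arange(n), radiance, 1), np.arange(n))
detrended = radiance - trend
gw_variance = detrended.var() - noise_sigma ** 2
print(f"estimated GW variance: {gw_variance:.3f} K^2")
```

The quality of the noise-variance estimate sets the detection floor, which is why the paper emphasizes careful noise removal.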
Rudolf Keller
2004-08-10
In this project, a concept to improve the performance of aluminum production cells by introducing potlining additives was examined and tested. Boron oxide was added to cathode blocks, and titanium was dissolved in the metal pool; this resulted in the formation of titanium diboride and caused the molten aluminum to wet the carbonaceous cathode surface. Such wetting reportedly leads to operational improvements and extended cell life. In addition, boron oxide suppresses cyanide formation. This final report presents and discusses the results of this project. Substantial economic benefits for the practical implementation of the technology are projected, especially for modern cells with graphitized blocks. For example, with an energy savings of about 5% and an increase in pot life from 1500 to 2500 days, a cost savings of $0.023 per pound of aluminum produced is projected for a 200 kA pot.
Harrup, Mason K; Rollins, Harry W
2013-11-26
An additive comprising a phosphazene compound that has at least two reactive functional groups and at least one capping functional group bonded to phosphorus atoms of the phosphazene compound. One of the at least two reactive functional groups is configured to react with cellulose and the other of the at least two reactive functional groups is configured to react with a resin, such as an amine resin or a polycarboxylic acid resin. The at least one capping functional group is selected from the group consisting of a short chain ether group, an alkoxy group, or an aryloxy group. Also disclosed are an additive-resin admixture, a method of treating a wood product, and a wood product.
Some variance reduction methods for numerical stochastic homogenization.
Blanc, X; Le Bris, C; Legoll, F
2016-04-28
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. PMID:27002065
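One classical variance reduction technique of the kind surveyed, antithetic variates, can be sketched on a toy expectation. This is a generic illustration of the principle, not the homogenization-specific estimators studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy expectation E[f(G)] with G ~ N(0,1); f is monotone, so antithetic
# pairing (G, -G) induces negative correlation and cancels variance.
f = lambda g: np.exp(0.5 * g)

n = 100_000
g = rng.standard_normal(n)

plain = f(g)                       # standard Monte Carlo draws
antithetic = 0.5 * (f(g) + f(-g))  # paired antithetic estimator

print(f"plain variance:      {plain.var():.4f}")
print(f"antithetic variance: {antithetic.var():.4f}")
```

Both estimators are unbiased for E[f(G)], but the antithetic one needs far fewer samples for the same accuracy; in stochastic homogenization the analogous trick is applied to realizations of the random medium.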
RISK ANALYSIS, ANALYSIS OF VARIANCE: GETTING MORE FROM OUR DATA
Technology Transfer Automated Retrieval System (TEKTRAN)
Analysis of variance (ANOVA) and regression are common statistical techniques used to analyze agronomic experimental data and determine significant differences among yields due to treatments or other experimental factors. Risk analysis provides an alternate and complementary examination of the same...
40 CFR 142.42 - Consideration of a variance request.
Code of Federal Regulations, 2010 CFR
2010-07-01
... contaminant level required by the national primary drinking water regulations because of the nature of the raw... effectiveness of treatment methods for the contaminant for which the variance is requested. (2) Cost and...
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... subparts H, P, S, T, W, and Y of this part. ... total coliforms and E. coli and variances from any of the treatment technique requirements of subpart H... Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER...
Theorems on Positive Data: On the Uniqueness of NMF
Laurberg, Hans; Christensen, Mads Græsbøll; Plumbley, Mark D.; Hansen, Lars Kai; Jensen, Søren Holdt
2008-01-01
We investigate the conditions for which nonnegative matrix factorization (NMF) is unique and introduce several theorems which can determine whether the decomposition is in fact unique or not. The theorems are illustrated by several examples showing the use of the theorems and their limitations. We have shown that corruption of a unique NMF matrix by additive noise leads to a noisy estimation of the noise-free unique solution. Finally, we use a stochastic view of NMF to analyze which characterization of the underlying model will result in an NMF with small estimation errors. PMID:18497868
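For context, here is a standard NMF algorithm (Lee-Seung multiplicative updates) applied to an exactly factorizable matrix. The uniqueness theorems concern when such a recovered factorization is essentially unique, which this sketch does not itself test:

```python
import numpy as np

rng = np.random.default_rng(3)

# Build a nonnegative matrix with known rank-2 factorization V = W_true @ H_true.
W_true = rng.uniform(0.1, 1.0, (8, 2))
H_true = rng.uniform(0.1, 1.0, (2, 6))
V = W_true @ H_true

# Lee-Seung multiplicative updates for the Frobenius-norm NMF objective.
W = rng.uniform(0.1, 1.0, (8, 2))
H = rng.uniform(0.1, 1.0, (2, 6))
eps = 1e-12  # guards against division by zero
for _ in range(2000):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {err:.2e}")
```

Even with a near-perfect reconstruction, W and H may differ from W_true and H_true by permutation and scaling, and by more than that when the uniqueness conditions of the paper fail.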
A multicomb variance reduction scheme for Monte Carlo semiconductor simulators
Gray, M.G.; Booth, T.E.; Kwan, T.J.T.; Snell, C.M.
1998-04-01
The authors adapt a multicomb variance reduction technique used in neutral particle transport to Monte Carlo microelectronic device modeling. They implement the method in a two-dimensional (2-D) MOSFET device simulator and demonstrate its effectiveness in the study of hot electron effects. The simulations show that the statistical variance of hot electrons is significantly reduced with minimal computational cost. The method is efficient, versatile, and easy to implement in existing device simulators.
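Splitting and Russian roulette amount to a weight-based population control pass. A generic sketch with invented thresholds; a real device simulator would apply this inside the transport loop with importance-dependent thresholds:

```python
import random

random.seed(0)

W_THRESHOLD_HI = 2.0   # split particles above this weight
W_THRESHOLD_LO = 0.25  # roulette particles below this weight
SURVIVAL_W = 0.5       # weight assigned to roulette survivors

def adjust_population(particles):
    """One splitting / Russian-roulette pass over (weight, state) pairs."""
    out = []
    for w, state in particles:
        if w > W_THRESHOLD_HI:
            n = int(w // 1)                       # split into n lighter copies
            for _ in range(n):
                out.append((w / n, state))
        elif w < W_THRESHOLD_LO:
            if random.random() < w / SURVIVAL_W:  # survive with probability w/SURVIVAL_W
                out.append((SURVIVAL_W, state))
            # else killed; expected weight is conserved either way
        else:
            out.append((w, state))
    return out

pop = [(3.0, "hot"), (0.1, "cold"), (1.0, "mid"), (0.05, "cold")]
new_pop = adjust_population(pop)
print(len(new_pop), "particles, total weight",
      round(sum(w for w, _ in new_pop), 2))
```

Splitting multiplies particles in important ("hot") regions to cut variance there, while roulette removes low-weight particles cheaply; both leave the estimator unbiased in expectation.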
Not Available
1985-06-01
Consafe is now using a computer-aided design and drafting system in adapting its multipurpose support vessels (MSVS) to specific user requirements. The vessels are based on the concept of standard container modules adapted into living quarters, workshops, service units, and offices, with each application for a specific project demanding a unique mix. There is also the need for a constant refurbishment program as service conditions take their toll on the modules. The computer-aided design system is described.
Regional Variance in Novice Perceptions of Hurricanes
NASA Astrophysics Data System (ADS)
Arthurs, L.; Van Den Broeke, M.
2013-12-01
In order to assess novice understandings of hurricane formation prior to explicit instruction on the topic, a two-question open-ended survey was administered to 337 students enrolled in introductory college-level geoscience courses in Georgia (n=169) and Nebraska (n=168). Respondents explained in their own words how they think hurricanes form and sketched diagrams that complemented their textual descriptions. The authors developed and iteratively refined a coding rubric for the non-segmented data (whole response). Two raters independently applied this rubric to the entire data set with an initial inter-rater reliability of 71%, and of 100% after discussion of the initially mismatched codes. In addition, responses were segmented and analyzed for common content features. Textual and diagrammatic analyses of responses indicated a broad range of student ideas about hurricane formation, from more novice-like to more expert-like. These findings can assist the design of instructional materials, such as lecture tutorials, that address student misconceptions and facilitate conceptual learning.
On variance estimate for covariate adjustment by propensity score analysis.
Zou, Baiming; Zou, Fei; Shuster, Jonathan J; Tighe, Patrick J; Koch, Gary G; Zhou, Haibo
2016-09-10
Propensity score (PS) methods have been used extensively to adjust for confounding factors in the statistical analysis of observational data in comparative effectiveness research. There are four major PS-based adjustment approaches: PS matching, PS stratification, covariate adjustment by PS, and PS-based inverse probability weighting. Though covariate adjustment by PS is one of the most frequently used PS-based methods in clinical research, the conventional variance estimation of the treatment effects estimate under covariate adjustment by PS is biased. As Stampf et al. have shown, this bias in variance estimation is likely to lead to invalid statistical inference and could result in erroneous public health conclusions (e.g., food and drug safety and adverse events surveillance). To address this issue, we propose a two-stage analytic procedure to develop a valid variance estimator for the covariate adjustment by PS analysis strategy. We also carry out a simple empirical bootstrap resampling scheme. Both proposed procedures are implemented in an R function for public use. Extensive simulation results demonstrate the bias in the conventional variance estimator and show that both proposed variance estimators offer valid estimates for the true variance, and they are robust to complex confounding structures. The proposed methods are illustrated for a post-surgery pain study. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26999553
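The empirical bootstrap resampling scheme mentioned above can be sketched for a simple mean-difference effect. The PS-adjustment step itself is omitted, and the data and effect measure are invented:

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical post-adjustment outcomes for treated vs. control subjects.
treated = rng.normal(1.0, 1.0, 80)
control = rng.normal(0.2, 1.0, 120)

def effect(t, c):
    return t.mean() - c.mean()

# Nonparametric bootstrap: resample each arm with replacement and
# take the empirical variance of the re-estimated effect.
B = 2000
boot = np.empty(B)
for i in range(B):
    boot[i] = effect(rng.choice(treated, treated.size, replace=True),
                     rng.choice(control, control.size, replace=True))

boot_var = boot.var(ddof=1)
print(f"bootstrap variance of effect estimate: {boot_var:.4f}")
```

In the PS setting, the propensity model would be refit inside each bootstrap replicate so that its estimation uncertainty propagates into the variance, which is the bias the conventional estimator misses.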
Analytic variance estimates of Swank and Fano factors
Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank
2014-07-15
Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
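The Swank and Fano factors themselves are simple moment ratios; a sketch on simulated Poisson detector outputs (the paper's contribution, analytic variance estimates of these factors, is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated detector outputs (e.g., optical quanta per absorbed x-ray photon).
x = rng.poisson(lam=400, size=100_000).astype(float)

m1, m2 = x.mean(), (x ** 2).mean()
swank = m1 ** 2 / m2               # Swank factor: m1^2 / (m0 * m2), with m0 = 1
fano = x.var(ddof=1) / m1          # Fano factor: variance over mean

print(f"Swank factor: {swank:.4f}, Fano factor: {fano:.3f}")
```

For a Poisson output the Fano factor is near 1 and the Swank factor is just below 1; accumulating m1 and m2 as running sums during a Monte Carlo run is what makes on-the-fly variance estimates cheap.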
Variance estimation for systematic designs in spatial surveys.
Fewster, R M
2011-12-01
In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation. PMID:21534940
NASA Astrophysics Data System (ADS)
Fijani, E.; Chitsazan, N.; Nadiri, A.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
Artificial Neural Networks (ANNs) have been widely used to estimate concentrations of chemicals in groundwater systems. However, estimation uncertainty is rarely discussed in the literature. Uncertainty in ANN output stems from three sources: ANN inputs, ANN parameters (weights and biases), and ANN structures. Uncertainty in ANN inputs may come from input data selection and/or input data error. ANN parameters are naturally uncertain because they are maximum-likelihood estimated. ANN structure is also uncertain because there is no unique ANN model for a given case. Multiple plausible ANN models therefore generally result for a study. One might ask why good models have to be ignored in favor of the best model in traditional estimation. What is the ANN estimation variance? How do the variances from different ANN models accumulate into the total estimation variance? To answer these questions we propose a Hierarchical Bayesian Model Averaging (HBMA) framework. Instead of choosing one ANN model (the best ANN model) for estimation, HBMA averages the outputs of all plausible ANN models, with model weights based on the evidence of the data. HBMA thereby avoids overconfidence in the single best ANN model. In addition, HBMA is able to analyze uncertainty propagation through the aggregation of ANN models in a hierarchical framework. The method is applied to the estimation of fluoride concentration in the Poldasht plain and the Bazargan plain in Iran. Unusually high fluoride concentrations in the Poldasht and Bazargan plains have had negative effects on public health. Management of this anomaly requires estimation of the fluoride concentration distribution in the area. The results show that HBMA provides a knowledge-decision-based framework that facilitates analyzing and quantifying ANN estimation uncertainties from different sources. In addition, HBMA allows comparative evaluation of the realizations for each source of uncertainty by segregating the uncertainty sources in
NASA Technical Reports Server (NTRS)
Kotob, S.; Kaufman, H.
1976-01-01
An on-line minimum variance (MV) parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter, which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification, is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
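A scalar recursive-least-squares update gives the flavor of such an on-line identifier. This sketch assumes purely additive noise, unlike the identifier above, which also handles multiplicative noise; the model and data are illustrative.

```python
def rls_identify(xs, ys, lam=1.0):
    """Recursive least squares for a scalar parameter theta in
    y_k = theta * x_k + noise. lam is a forgetting factor (1.0 = none).
    """
    theta, p = 0.0, 1e6   # initial estimate and its (co)variance
    for x, y in zip(xs, ys):
        k = p * x / (lam + x * p * x)   # gain
        theta += k * (y - theta * x)    # innovation update
        p = (p - k * x * p) / lam       # covariance update
    return theta

# Noise-free toy data generated by theta = 2: the estimate converges.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x for x in xs]
print(rls_identify(xs, ys))   # very close to 2.0
```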
Inter-vial Variance of the Sublimation Rate in Shelf Freeze-dryer
NASA Astrophysics Data System (ADS)
Kobayashi, Masakazu; Harashima, Konomi; Ariyama, Hiroichi; Yao, Ai-Ru
Significant inter-vial variance in the sublimation rate has been pointed out by several authors in relation to the placement of the vials on a well-controlled shelf. All the previous reports have described the phenomena observed in experiments or production processes and have made some suggestive remarks, but have not clearly proposed a solution to the problem. In the shelf freeze-drying of pharmaceuticals, one of the major problems is how to achieve inter-vial uniformity and batch-to-batch uniformity or consistency. In this study, we have developed a new model of laboratory-scale freeze-dryer which has temperature-controllable chamber walls, and using this new model we have analyzed the causes of inter-vial variance in the sublimation rate. The higher sublimation rate for the vials placed at the shelf edge is due to additional heat input from the wall and also to further additional heat from the shelf surface on which no adjacent vial is placed. It is possible to cancel out the additional heat input from the shelf by maintaining an optimum wall temperature, which must be lower than the material temperature. This paper discusses a method for eliminating the inter-vial variance in drying conditions and shortening the drying time by means of chamber wall temperature control.
Ontogenetic changes in genetic variances of age-dependent plasticity along a latitudinal gradient.
Nilsson-Örtman, V; Rogell, B; Stoks, R; Johansson, F
2015-10-01
The expression of phenotypic plasticity may differ among life stages of the same organism. Age-dependent plasticity can be important for adaptation to heterogeneous environments, but this has only recently been recognized. Whether age-dependent plasticity is a common outcome of local adaptation and whether populations harbor genetic variation in this respect remains largely unknown. To answer these questions, we estimated levels of additive genetic variation in age-dependent plasticity in six species of damselflies sampled from 18 populations along a latitudinal gradient spanning 3600 km. We reared full sib larvae at three temperatures and estimated genetic variances in the height and slope of thermal reaction norms of body size at three points in time during ontogeny using random regression. Our data show that most populations harbor genetic variation in growth rate (reaction norm height) in all ontogenetic stages, but only some populations and ontogenetic stages were found to harbor genetic variation in thermal plasticity (reaction norm slope). Genetic variances in reaction norm height differed among species, while genetic variances in reaction norm slope differed among populations. The slope of the ontogenetic trend in genetic variances of both reaction norm height and slope increased with latitude. We propose that differences in genetic variances reflect temporal and spatial variation in the strength and direction of natural selection on growth trajectories and age-dependent plasticity. Selection on age-dependent plasticity may depend on the interaction between temperature seasonality and time constraints associated with variation in life history traits such as generation length. PMID:25649500
Estimation of Model Error Variances During Data Assimilation
NASA Technical Reports Server (NTRS)
Dee, Dick
2003-01-01
Data assimilation is all about understanding the error characteristics of the data and models that are used in the assimilation process. Reliable error estimates are needed to implement observational quality control, bias correction of observations and model fields, and intelligent data selection. Meaningful covariance specifications are obviously required for the analysis as well, since the impact of any single observation strongly depends on the assumed structure of the background errors. Operational atmospheric data assimilation systems still rely primarily on climatological background error covariances. To obtain error estimates that reflect both the character of the flow and the current state of the observing system, it is necessary to solve three problems: (1) how to account for the short-term evolution of errors in the initial conditions; (2) how to estimate the additional component of error caused by model defects; and (3) how to compute the error reduction in the analysis due to observational information. Various approaches are now available that provide approximate solutions to the first and third of these problems. However, the useful accuracy of these solutions very much depends on the size and character of the model errors and the ability to account for them. Model errors represent the real-world forcing of the error evolution in a data assimilation system. Clearly, meaningful model error estimates and/or statistics must be based on information external to the model itself. The most obvious information source is observational, and since the volume of available geophysical data is growing rapidly, there is some hope that a purely statistical approach to model error estimation can be viable. This requires that the observation errors themselves are well understood and quantifiable. We will discuss some of these challenges and present a new sequential scheme for estimating model error variances from observations in the context of an atmospheric data
Practice reduces task relevant variance modulation and forms nominal trajectory
NASA Astrophysics Data System (ADS)
Osu, Rieko; Morishige, Ken-Ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo
2015-12-01
Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task relevant variance modulation as an indication of online feedback control strategies to cope with motor variability. Meanwhile, it has been discussed that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both spatial and temporal domains to elucidate the relative contribution of these control schemas. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories with the reduction of task relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise both on the nominal trajectory and motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of computation seems to be taken over by the feedforward controller around the nominal trajectory with feedback added only when it becomes necessary.
Detecting Pulsars with Interstellar Scintillation in Variance Images
NASA Astrophysics Data System (ADS)
Dai, S.; Johnston, S.; Bell, M. E.; Coles, W. A.; Hobbs, G.; Ekers, R. D.; Lenc, E.
2016-08-01
Pulsars are the only cosmic radio sources known to be sufficiently compact to show diffractive interstellar scintillations. Images of the variance of radio signals in both time and frequency can be used to detect pulsars in large-scale continuum surveys using the next generation of synthesis radio telescopes. This technique allows a search over the full field of view while avoiding the need for expensive pixel-by-pixel high time resolution searches. We investigate the sensitivity of detecting pulsars in variance images. We show that variance images are most sensitive to pulsars whose scintillation time-scales and bandwidths are close to the subintegration time and channel bandwidth. Therefore, in order to maximise the detection of pulsars for a given radio continuum survey, it is essential to retain a high time and frequency resolution, allowing us to make variance images sensitive to pulsars with different scintillation properties. We demonstrate the technique with Murchison Widefield Array data and show that variance images can indeed lead to the detection of pulsars by distinguishing them from other radio sources.
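The core of the technique, forming an image of the signal variance over subintegrations and frequency channels, reduces per pixel to a variance over the dynamic-spectrum samples. A minimal sketch, assuming a hypothetical flat per-pixel data layout (not the Murchison Widefield Array pipeline):

```python
import statistics

def variance_image(dynamic_spectra):
    # dynamic_spectra: mapping pixel -> flat list of amplitudes over all
    # subintegrations and frequency channels. A scintillating pulsar
    # shows large variance; a steady continuum source shows little.
    return {pix: statistics.pvariance(samples)
            for pix, samples in dynamic_spectra.items()}

img = variance_image({
    "steady": [1.0] * 8,               # continuum source
    "scintillating": [0.0, 2.0] * 4,   # strongly modulated pulsar
})
print(img["steady"], img["scintillating"])   # 0.0 1.0
```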
Application of variance components estimation to calibrate geoid error models.
Guo, Dong-Mei; Xu, Hou-Ze
2015-01-01
The method of using Global Positioning System-leveling data to obtain orthometric heights has been well studied. A simple formulation for the weighted least-squares problem was presented in an earlier work. This formulation allows one to employ errors-in-variables models that completely describe the covariance matrices of the observables. However, the question of what accuracy level can actually be achieved has not yet been satisfactorily answered by this traditional formulation. One of the main reasons is incorrect stochastic modeling in the adjustment, which in turn leaves room for improving the stochastic models of the measurement noise. The determination of stochastic models for the observables in a combined adjustment of heterogeneous height types is therefore the main focus of this paper. Firstly, the well-known method of variance component estimation is employed to calibrate the errors of heterogeneous height data in a combined least-squares adjustment of ellipsoidal, orthometric, and gravimetric geoid heights. Specifically, iterative algorithms of minimum norm quadratic unbiased estimation are used to estimate the variance components for each type of heterogeneous observation. Secondly, two different statistical models are presented to illustrate the theory: the first directly uses the errors-in-variables as a priori covariance matrices, while the second analyzes the biases of the variance components and then proposes bias-corrected variance component estimators. Several numerical test results show the capability and effectiveness of the variance component estimation procedure in the combined adjustment for calibrating the geoid error model. PMID:26306296
Increased spatial variance accompanies reorganization of two continental shelf ecosystems.
Litzow, Michael A; Urban, J Daniel; Laurel, Benjamin J
2008-09-01
Phase transitions between alternate stable states in marine ecosystems lead to disruptive changes in ecosystem services, especially fisheries productivity. We used trawl survey data spanning phase transitions in the North Pacific (Gulf of Alaska) and the North Atlantic (Scotian Shelf) to test for increases in ecosystem variability that might provide early warning of such transitions. In both time series, elevated spatial variability in a measure of community composition (ratio of cod [Gadus sp.] abundance to prey abundance) accompanied transitions between ecosystem states, and variability was negatively correlated with distance from the ecosystem transition point. In the Gulf of Alaska, where the phase transition was apparently the result of a sudden perturbation (climate regime shift), variance increased one year before the transition in mean state occurred. On the Scotian Shelf, where ecosystem reorganization was the result of persistent overfishing, a significant increase in variance occurred three years before the transition in mean state was detected. However, we could not reject the alternate explanation that increased variance may also have simply been inherent to the final stable state in that ecosystem. Increased variance has been previously observed around transition points in models, but rarely in real ecosystems, and our results demonstrate the possible management value in tracking the variance of key parameters in exploited ecosystems. PMID:18767612
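The management idea in the abstract, tracking spatial variance of a community metric for early warning, can be sketched as a crude alarm rule. The threshold rule and the numbers below are illustrative assumptions, not the authors' analysis:

```python
import statistics

def variance_alarm(survey, factor=2.0):
    # survey: mapping year -> list of station-level cod-to-prey ratios.
    # Flags years whose spatial variance exceeds `factor` times the
    # variance in the first (baseline) year.
    years = sorted(survey)
    baseline = statistics.pvariance(survey[years[0]])
    return [y for y in years[1:]
            if statistics.pvariance(survey[y]) > factor * baseline]

alarms = variance_alarm({
    1997: [0.9, 1.0, 1.1],    # stable state, low spatial variance
    1998: [0.95, 1.0, 1.05],
    1999: [0.2, 1.0, 1.8],    # variance jumps ahead of the transition
})
print(alarms)   # [1999]
```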
Analysis of Variance Components for Genetic Markers with Unphased Genotypes
Wang, Tao
2016-01-01
An ANOVA-type general multi-allele (GMA) model was proposed in Wang (2014) for the analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In the one-locus and two-locus cases, we first derive the least-squares estimates (LSE) of the model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition of the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions at least for independent alleles. As a result, the GMA model could be more beneficial than GLM for detecting allelic interactions. PMID:27468297
Technology Transfer Automated Retrieval System (TEKTRAN)
Breeders select superior genotypes despite the environment affecting phenotypic variance. Minimal variance of genotype means facilitates the statistical identification of superior genotypes. The variance components calculated from three datasets describing tuber composition and fried chip color were...
Age-specific patterns of genetic variance in Drosophila melanogaster. I. Mortality
Promislow, D.E.L.; Tatar, M.; Curtsinger, J.W.
1996-06-01
Peter Medawar proposed that senescence arises from an age-related decline in the force of selection, which allows late-acting deleterious mutations to accumulate. Subsequent workers have suggested that mutation accumulation could produce an age-related increase in additive genetic variance (V_A) for fitness traits, as recently found in Drosophila melanogaster. Here we report results from a genetic analysis of mortality in 65,134 D. melanogaster. Additive genetic variance for female mortality rates increases from 0.007 in the first week of life to 0.325 by the third week, and then declines to 0.002 by the seventh week. Males show a similar pattern, though total variance is lower than in females. In contrast to a predicted divergence in mortality curves, mortality curves of different genotypes are roughly parallel. Using a three-parameter model, we find significant V_A for the slope and constant term of the curve describing age-specific mortality rates, and also for the rate at which mortality decelerates late in life. These results fail to support a prediction derived from Medawar's "mutation accumulation" theory for the evolution of senescence. However, our results could be consistent with alternative interpretations of evolutionary models of aging. 65 refs., 2 figs., 2 tabs.
The probabilities of unique events.
Khemlani, Sangeet S; Lotstein, Max; Johnson-Laird, Phil
2012-01-01
Many theorists argue that the probabilities of unique events, even real possibilities such as President Obama's re-election, are meaningless. As a consequence, psychologists have seldom investigated them. We propose a new theory (implemented in a computer program) in which such estimates depend on an intuitive non-numerical system capable only of simple procedures, and a deliberative system that maps intuitions into numbers. The theory predicts that estimates of the probabilities of conjunctions should often tend to split the difference between the probabilities of the two conjuncts. We report two experiments showing that individuals commit such violations of the probability calculus, and corroborating other predictions of the theory, e.g., individuals err in the same way even when they make non-numerical verbal estimates, such as that an event is highly improbable. PMID:23056224
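The "split the difference" prediction can be checked mechanically: the probability calculus requires P(A and B) <= min(P(A), P(B)), so an intuitive estimate near the average of the conjuncts violates the calculus whenever the conjuncts differ. A small sketch of that check (the numbers are illustrative, not the experimental data):

```python
def conjunction_violates_calculus(p_a, p_b, p_conj):
    # The calculus requires P(A & B) <= min(P(A), P(B)); an estimate
    # above that bound is a conjunction violation.
    return p_conj > min(p_a, p_b)

split = (0.8 + 0.2) / 2   # predicted intuitive estimate: 0.5
print(conjunction_violates_calculus(0.8, 0.2, split))   # True: 0.5 > 0.2
print(conjunction_violates_calculus(0.8, 0.2, 0.15))    # False: coherent
```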
Saturation of number variance in embedded random-matrix ensembles
NASA Astrophysics Data System (ADS)
Prakash, Ravi; Pandey, Akhilesh
2016-05-01
We study fluctuation properties of embedded random matrix ensembles of noninteracting particles. For an ensemble of systems of two noninteracting particles, we find that, unlike the spectra of classical random matrices, the correlation functions are nonstationary. In the locally stationary region of the spectra, we study the number variance and the spacing distributions. The spacing distributions follow Poisson statistics, which is a key behavior of uncorrelated spectra. The number variance varies linearly, as in the Poisson case, for short correlation lengths, but a kind of regularization occurs for large correlation lengths, and the number variance approaches saturation values. These results are known in the study of integrable systems but are demonstrated here for the first time in random matrix theory. We conjecture that the interacting particle cases, which exhibit the characteristics of classical random matrices for short correlation lengths, will also show saturation effects for large correlation lengths.
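The number variance Σ²(L) in the linear (Poisson) regime can be estimated by counting levels in randomly placed windows on a unit-density spectrum; for an uncorrelated spectrum Σ²(L) ≈ L. A Monte Carlo stdlib sketch with a synthetic Poisson spectrum (illustrative, not the embedded-ensemble computation):

```python
import random
import statistics

def number_variance(levels, L, n_windows=4000, seed=0):
    # Sigma^2(L): variance of the number of levels in a randomly placed
    # window of length L on the unfolded (unit-density) spectrum.
    rng = random.Random(seed)
    lo, hi = min(levels), max(levels) - L
    counts = [sum(1 for x in levels if a <= x < a + L)
              for a in (rng.uniform(lo, hi) for _ in range(n_windows))]
    return statistics.pvariance(counts)

# 1000 uncorrelated levels at unit mean density on [0, 1000].
rng = random.Random(42)
poisson_spectrum = sorted(rng.uniform(0.0, 1000.0) for _ in range(1000))
print(number_variance(poisson_spectrum, 2.0))   # close to L = 2
```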
Monte Carlo variance reduction approaches for non-Boltzmann tallies
Booth, T.E.
1992-12-01
Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed.
Impact of Damping Uncertainty on SEA Model Response Variance
NASA Technical Reports Server (NTRS)
Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand
2010-01-01
Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However, these techniques do not account for uncertainties in the system properties. In the present paper, uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.
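Propagating damping uncertainty can be sketched with the simplest one-subsystem SEA power balance, E = P_in / (ω η): sample the loss factor η between measured bounds and summarize the resulting response spread. The uniform distribution and the numbers below are assumptions of this sketch, not of the paper:

```python
import random
import statistics

def sea_response_uncertainty(p_in, omega, eta_lo, eta_hi, n=5000, seed=0):
    # Monte Carlo propagation of damping-loss-factor uncertainty through
    # the one-subsystem power balance E = P_in / (omega * eta).
    rng = random.Random(seed)
    energies = [p_in / (omega * rng.uniform(eta_lo, eta_hi))
                for _ in range(n)]
    return statistics.fmean(energies), statistics.variance(energies)

mean_e, var_e = sea_response_uncertainty(1.0, 100.0, 0.01, 0.03)
print(mean_e, var_e)   # mean lies between the two deterministic extremes
```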
Minimum variance lower bound estimation and realization for desired structures.
Alipouri, Yousef; Poshtan, Javad
2014-05-01
The Minimum Variance Lower Bound (MVLB) represents the best achievable controller capability in a variance sense. Estimation and realization of MVLB for nonlinear systems confront some difficulties. Hence, almost all methods introduced so far estimate MVLB for a certain structure (e.g., NARMAX) or controller (e.g. PID). In this paper, MVLB for desired structures (not restricted to a certain type) is studied. The situation when the model is not in hand, is not accurate, or is not invertible has been considered. Moreover, in order to realize minimum variance controllers for nonlinear structures, a recursive model-free MVC design is utilized. Finally, a simulation study has been used to clarify the effectiveness of the proposed control scheme. PMID:24642244
The mean and variance of phylogenetic diversity under rarefaction.
Nipperess, David A; Matsen, Frederick A
2013-06-01
Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing exact solution mean and variance to that calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth is required. PMID:23833701
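The species-richness analogue of the exact rarefaction mean mentioned above is Hurlbert's classic formula, E[S_n] = Σ_i [1 − C(N − N_i, n) / C(N, n)]; the paper derives the corresponding exact mean and variance for PD. A stdlib sketch of the richness version only:

```python
from math import comb

def expected_richness(counts, n):
    # Exact expected species richness in a random subsample of size n
    # drawn without replacement from the abundance vector `counts`.
    # math.comb(a, b) returns 0 when b > a, handling rare species.
    N = sum(counts)
    return sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

print(expected_richness([5, 5], 1))    # 1.0: one draw finds one species
print(expected_richness([5, 5], 10))   # 2.0: the full sample has both
```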
Enhancing area of review capabilities: Implementing a variance program
De Leon, F.
1995-12-01
The Railroad Commission of Texas (RRC) has regulated oil-field injection well operations since issuing its first injection permit in 1938. The Environmental Protection Agency (EPA) granted the RRC primary enforcement responsibility for the Class II Underground Injection Control (UIC) Program in April 1982. At that time, the added level of groundwater protection afforded by an Area of Review (AOR) on previously permitted Class II wells was not deemed necessary or cost effective. A proposed EPA rule change will require AORs to be performed on all pre-primacy Class II wells unless a variance can be justified. A variance methodology has been developed by researchers at the University of Missouri-Rolla in conjunction with the American Petroleum Institute (API). This paper will outline the RRC approach to implementing the AOR variance methodology. The RRC's UIC program tracks 49,256 pre-primacy wells. Approximately 25,598 of these wells have active permits and will be subject to the proposed AOR requirements. The potential workload of performing AORs or granting variances for this many wells makes the development of a Geographic Information System (GIS) imperative. The RRC has recently completed a digitized map of the entire state and has spotted 890,000 of an estimated 1.2 million wells. Integrating this digital state map into a GIS will allow the RRC to tie its many data systems together. Once in place, this integrated data system will be used to evaluate AOR variances for pre-primacy wells on a field-wide basis. It will also reduce the regulatory cost of permitting by allowing the RRC staff to perform AORs or grant variances for the approximately 3,000 new and amended permit applications requiring AORs each year.
Quantitative Genetic Analysis of Temperature Regulation in MUS MUSCULUS. I. Partitioning of Variance
Lacy, Robert C.; Lynch, Carol Becker
1979-01-01
Heritabilities (from parent-offspring regression) and intraclass correlations of full sibs for a variety of traits were estimated from 225 litters of a heterogeneous stock (HS/Ibg) of laboratory mice. Initial variance partitioning suggested different adaptive functions for physiological, morphological, and behavioral adjustments with respect to their thermoregulatory significance. Metabolic heat-production mechanisms appear to have reached their genetic limits, with little additive genetic variance remaining. This study provided no genetic evidence that body size has a close directional association with fitness in cold environments, since heritability estimates for weight gain and adult weight were similar and high, whether or not the animals were exposed to cold. Behavioral heat conservation mechanisms also displayed considerable amounts of genetic variability. However, given the strong evidence from numerous other studies that behavior serves an important adaptive role in temperature regulation in small mammals, we suggest that fluctuating selection pressures may have acted to maintain heritable variation in these traits. PMID:17248909
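As a sketch of the estimator named in the first sentence (not the paper's data), narrow-sense heritability from parent-offspring regression is the slope of offspring trait values on midparent values; with single-parent values, the slope is doubled. All numbers below are invented for illustration.

```python
def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# Hypothetical midparent and mean-offspring trait values (e.g., weight in g).
midparent = [20.0, 22.0, 24.0, 26.0, 28.0]
offspring = [21.0, 22.5, 23.5, 25.5, 27.0]

h2 = slope(midparent, offspring)  # midparent regression: slope estimates h^2
print(round(h2, 3))
```

With single-parent rather than midparent values, the estimate would be `2 * slope(...)` instead, because offspring share only half their alleles with one parent.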
ERIC Educational Resources Information Center
Starns, Jeffrey J.; Rotello, Caren M.; Hautus, Michael J.
2014-01-01
We tested the dual process and unequal variance signal detection models by jointly modeling recognition and source confidence ratings. The 2 approaches make unique predictions for the slope of the recognition memory zROC function for items with correct versus incorrect source decisions. The standard bivariate Gaussian version of the unequal…
dos Reis, Matheus Costa; Pádua, José Maria Villela; Abreu, Guilherme Barbosa; Guedes, Fernando Lisboa; Balbi, Rodrigo Vieira; de Souza, João Cândido
2014-01-01
This study was carried out to obtain estimates of intra- and interpopulation genetic variance and covariance components in the original populations (C0) and in the third cycle (C3) of reciprocal recurrent selection (RRS), which allows breeders to define the best breeding strategy. For that purpose, half-sib progenies of intrapopulation (P11 and P22) and interpopulation (P12 and P21) crosses from populations 1 and 2, derived from single-cross hybrids in cycles 0 and 3 of the reciprocal recurrent selection program, were used. The intra- and interpopulation progenies were evaluated in a 10 × 10 triple lattice design in two separate locations. Data for unhusked ear weight (ear weight without husk) and plant height were collected. All genetic variance and covariance components were estimated from the expected mean squares. The breakdown of additive variance into intrapopulation and interpopulation additive deviations (σ²τ), and the covariance between these and their intrapopulation additive effects (CovAτ), revealed a predominance of the dominance effect for unhusked ear weight. For plant height, these components show that the intrapopulation additive effect explains most of the variation. Estimates of intrapopulation and interpopulation additive genetic variances confirm that populations derived from single-cross hybrids have potential for recurrent selection programs. PMID:25009831
The dynamic Allan Variance IV: characterization of atomic clock anomalies.
Galleani, Lorenzo; Tavella, Patrizia
2015-05-01
The number of applications where precise clocks play a key role is steadily increasing, satellite navigation being the main example. Precise clock anomalies are hence critical events, and their characterization is a fundamental problem. When an anomaly occurs, the clock stability changes with time, and this variation can be characterized with the dynamic Allan variance (DAVAR). We obtain the DAVAR for a series of common clock anomalies, namely, a sinusoidal term, a phase jump, a frequency jump, and a sudden change in the clock noise variance. These anomalies are particularly common in space clocks. Our analytic results clarify how the clock stability changes during these anomalies. PMID:25965674
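As a rough sketch of the machinery involved (not the authors' implementation): the Allan variance at averaging factor m is half the mean squared successive difference of block-averaged fractional-frequency data, and the dynamic Allan variance evaluates it on a sliding window so that anomalies such as jumps appear as changes over time. Function names and the windowing scheme here are illustrative.

```python
def allan_variance(y, m=1):
    """Non-overlapped Allan variance of fractional-frequency samples y
    at averaging factor m: average in blocks of m, then take half the
    mean squared successive difference of the block means."""
    blocks = [sum(y[i:i + m]) / m for i in range(0, len(y) - len(y) % m, m)]
    d = [blocks[k + 1] - blocks[k] for k in range(len(blocks) - 1)]
    return sum(v * v for v in d) / (2 * len(d))

def dynamic_allan_variance(y, window, m=1, step=1):
    """Allan variance on sliding windows: list of (start_index, avar).
    A clock anomaly shows up as a change in avar across windows."""
    return [(t, allan_variance(y[t:t + window], m))
            for t in range(0, len(y) - window + 1, step)]
```

For stationary noise the windowed values stay roughly constant; a frequency jump or a change in noise variance inside the record shifts the values for windows that straddle the event, which is the qualitative behavior the DAVAR characterizes analytically.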
Variance in trace constituents following the final stratospheric warming
NASA Technical Reports Server (NTRS)
Hess, Peter
1990-01-01
Concentration variations with time in trace stratospheric constituents N2O, CF2Cl2, CFCl3, and CH4 were investigated using samples collected aboard balloons flown over southern France during the summer months of 1977-1979. Data are analyzed using a tracer transport model, and the mechanisms behind the modeled tracer variance are examined. An analysis of the N2O profiles for the month of June showed that a large fraction of the variance reported by Ehhalt et al. (1983) is on an interannual time scale.
A multi-variance analysis in the time domain
NASA Technical Reports Server (NTRS)
Walter, Todd
1993-01-01
Recently a new technique for characterizing the noise processes affecting oscillators was introduced. This technique minimizes the difference between the estimates of several different variances and their values as predicted by the standard power law model of noise. The method outlined makes two significant advancements: it uses exclusively time domain variances so that deterministic parameters such as linear frequency drift may be estimated, and it correctly fits the estimates using the chi-square distribution. These changes permit a more accurate fitting at long time intervals where there is the least information. This technique was applied to both simulated and real data with excellent results.
Signal Variance in Gamma Ray Detectors - A Review
Devanathan, Ram; Corrales, Louis R.; Gao, Fei; Weber, William J.
2006-09-06
Signal variance in gamma ray detector materials is reviewed with an emphasis on intrinsic variance. Phenomenological models of electron cascades are examined and the Fano factor (F) is discussed in detail. In semiconductors F is much smaller than unity and charge carrier production is nearly proportional to energy. Based on a fit to a number of semiconductors and insulators, a new relationship between the average energy for electron-hole pair production and band-gap energy is proposed. In scintillators, the resolution is governed mainly by photoelectron statistics and proportionality of light yield with respect to energy.
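A quick numerical illustration of the Fano factor discussed above, F = Var(N)/E[N] for the number of charge carriers N: F = 1 for Poisson statistics, while F much less than 1 (as in semiconductors) means carrier production is nearly deterministic for a given deposited energy. The counts below are invented.

```python
def fano_factor(counts):
    """F = Var(N) / E[N] for a list of carrier counts N (population variance)."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return var / mean

# Strongly sub-Poisson toy counts: the spread is far smaller than the mean,
# so F comes out well below 1.
print(fano_factor([98, 102, 100, 97, 103]))
```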
Individualized additional instruction for calculus
NASA Astrophysics Data System (ADS)
Takata, Ken
2010-10-01
College students enrolling in the calculus sequence have a wide variance in their preparation and abilities, yet they are usually taught from the same lecture. We describe another pedagogical model of Individualized Additional Instruction (IAI) that assesses each student frequently and prescribes further instruction and homework based on the student's performance. Our study compares two calculus classes, one taught with mandatory remedial IAI and the other without. The class with mandatory remedial IAI did significantly better on comprehensive multiple-choice exams, participated more frequently in classroom discussion and showed greater interest in theorem-proving and other advanced topics.
Estimation of variance components including competitive effects of Large White growing gilts.
Arango, J; Misztal, I; Tsuruta, S; Culbertson, M; Herring, W
2005-06-01
Records of on-test ADG of Large White gilts were analyzed to estimate variance components of direct and associative genetic effects. Models included the effects of contemporary group (farm-barn-batch), birth litter, pen group, and direct and associative additive genetic effects. The area of each pen was 14 m². The additive genetic variance was a function of the number of competitors in a group, the additive relationships between the animal performing the record and its pen mates, and the additive relationships between pen mates. To partially account for differences in the number of pen mates, a covariable (qi = 1, 1/n, or 1/√n) was added to the associative genetic effect. There were 4,946 records from 2,409 litters and 362 pen groups. Pen group size ranged from 12 to 16 gilts. Analyses by REML converged very slowly. A grid search showed that the likelihood function was almost flat when the additive genetic associative effect was fitted. Estimates of direct and associative heritability were 0.15 and 0.03, respectively. Within the BLUPF90 family of programs, the mixed-model equations can be set up directly. For variance component estimation, simple programs (REMLF90 and GIBBSF90) worked without modifications, but more optimized programs did not. Estimates obtained using the three values of qi were similar. With the data structure available for this study and under an environment with relatively low competition among animals, accurate estimation of associative genetic effects was not possible. Estimation of competitive effects with large pen size is difficult. The magnitude of competition effects may be larger in commercial populations, where housing is denser and food is limited. PMID:15890801
NASA Technical Reports Server (NTRS)
Harder, R. L.
1974-01-01
The NASTRAN Thermal Analyzer has been extended to perform variance analysis and to plot the thermal boundary elements. The objective of the variance analysis addition is to assess the sensitivity of temperature variances resulting from uncertainties inherent in input parameters for heat conduction analysis. The plotting capability provides the ability to check the geometry (location, size, and orientation) of the boundary elements of a model in relation to the conduction elements. Variance analysis is the study of uncertainties in the computed results as a function of uncertainties in the input data. To study this problem using NASTRAN, a solution is made for the expected values of all inputs, plus another solution for each uncertain variable. A variance analysis module subtracts the results to form derivatives, and then can determine the expected deviations of output quantities.
Genetic Variance in the SES-IQ Correlation.
ERIC Educational Resources Information Center
Eckland, Bruce K.
1979-01-01
Discusses questions dealing with genetic aspects of the correlation between IQ and socioeconomic status (SES). Questions include: How does assortative mating affect the genetic variance of IQ? Is the relationship between an individual's IQ and adult SES a causal one? And how can IQ research improve schools and schooling? (Author/DB)
Comparison of Turbulent Thermal Diffusivity and Scalar Variance Models
NASA Technical Reports Server (NTRS)
Yoder, Dennis A.
2016-01-01
In this study, several variable turbulent Prandtl number formulations are examined for boundary layers, pipe flow, and axisymmetric jets. The model formulations include simple algebraic relations between the thermal diffusivity and turbulent viscosity as well as more complex models that solve transport equations for the thermal variance and its dissipation rate. Results are compared with available data for wall heat transfer and profile measurements of mean temperature, the root-mean-square (RMS) fluctuating temperature, turbulent heat flux and turbulent Prandtl number. For wall-bounded problems, the algebraic models are found to best predict the rise in turbulent Prandtl number near the wall as well as the log-layer temperature profile, while the thermal variance models provide a good representation of the RMS temperature fluctuations. In jet flows, the algebraic models provide no benefit over a constant turbulent Prandtl number approach. Application of the thermal variance models finds that some significantly overpredict the temperature variance in the plume and most underpredict the thermal growth rate of the jet. The models yield very similar fluctuating temperature intensities in jets from straight pipes and smooth contraction nozzles, in contrast to data that indicate the latter should have noticeably higher values. For the particular low subsonic heated jet cases examined, changes in the turbulent Prandtl number had no effect on the centerline velocity decay.
Module organization and variance in protein-protein interaction networks
Lin, Chun-Yu; Lee, Tsai-Ling; Chiu, Yi-Yuan; Lin, Yi-Wei; Lo, Yu-Shu; Lin, Chih-Ta; Yang, Jinn-Moon
2015-01-01
A module is a group of closely related proteins that act in concert to perform specific biological functions through protein–protein interactions (PPIs) that occur in time and space. However, the underlying module organization and variance remain unclear. In this study, we collected module templates to infer respective module families, including 58,041 homologous modules in 1,678 species, and PPI families using searches of complete genomic databases. We then derived PPI evolution scores and interface evolution scores to describe the module elements, including core and ring components. Functions of core components were highly correlated with those of essential genes. In comparison with ring components, core proteins/PPIs were conserved across multiple species. Subsequently, protein/module variance of PPI networks confirmed that core components form dynamic network hubs and play key roles in various biological functions. Based on the analyses of gene essentiality, module variance, and gene co-expression, we summarize the observations of module organization and variance as follows: 1) a module consists of core and ring components; 2) core components perform major biological functions and collaborate with ring components to execute certain functions in some cases; 3) core components are more conserved and essential during organizational changes in different biological states or conditions. PMID:25797237
Explaining Common Variance Shared by Early Numeracy and Literacy
ERIC Educational Resources Information Center
Davidse, N. J.; De Jong, M. T.; Bus, A. G.
2014-01-01
How can it be explained that early literacy and numeracy share variance? We specifically tested whether the correlation between four early literacy skills (rhyming, letter knowledge, emergent writing, and orthographic knowledge) and simple sums (non-symbolic and story condition) reduced after taking into account preschool attention control,…
Intuitive Analysis of Variance-- A Formative Assessment Approach
ERIC Educational Resources Information Center
Trumpower, David
2013-01-01
This article describes an assessment activity that can show students how much they intuitively understand about statistics, but also alert them to common misunderstandings. How the activity can be used formatively to help improve students' conceptual understanding of analysis of variance is discussed. (Contains 1 figure and 1 table.)
Infinite variance in fermion quantum Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Shi, Hao; Zhang, Shiwei
2016-03-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
Unbiased Estimates of Variance Components with Bootstrap Procedures
ERIC Educational Resources Information Center
Brennan, Robert L.
2007-01-01
This article provides general procedures for obtaining unbiased estimates of variance components for any random-model balanced design under any bootstrap sampling plan, with the focus on designs of the type typically used in generalizability theory. The results reported here are particularly helpful when the bootstrap is used to estimate standard…
Caution on the Use of Variance Ratios: A Comment.
ERIC Educational Resources Information Center
Shaffer, Juliet Popper
1992-01-01
Several metanalytic studies of group variability use variance ratios as measures of effect size. Problems with this approach are discussed, including limitations of using means and medians of ratios. Mean logarithms and the geometric mean are not adversely affected by the arbitrary choice of numerator. (SLD)
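The numerator-dependence problem the comment raises can be seen in a two-line Python example (invented numbers, not from the article): the arithmetic mean of variance ratios changes with the arbitrary choice of numerator, while the mean of logarithms, equivalently the geometric mean, does not.

```python
import math

# The same two group comparisons, with the numerator and denominator
# swapped in the second one (a ratio of 4 becomes 1/4 = 0.25).
ratios = [4.0, 0.25]

arith = sum(ratios) / len(ratios)  # 2.125: suggests a net difference in variability
geo = math.exp(sum(math.log(r) for r in ratios) / len(ratios))  # ~1.0: no net difference

print(arith, geo)
```

Swapping numerators merely inverts the geometric mean (1.0 stays 1.0 here), whereas the arithmetic mean of ratios is pulled upward by whichever direction happens to be in the numerator.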
Variance-based uncertainty relations for incompatible observables
NASA Astrophysics Data System (ADS)
Chen, Bin; Cao, Ning-Ping; Fei, Shao-Ming; Long, Gui-Lu
2016-06-01
We formulate uncertainty relations for arbitrary finite number of incompatible observables. Based on the sum of variances of the observables, both Heisenberg-type and Schrödinger-type uncertainty relations are provided. These new lower bounds are stronger in most of the cases than the ones derived from some existing inequalities. Detailed examples are presented.
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Variances for unusual operations. 190.11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS...
Strength of Relationship in Multivariate Analysis of Variance.
ERIC Educational Resources Information Center
Smith, I. Leon
Methods for calculating the squared eta coefficient (correlation ratio) have recently been presented for examining the strength of relationship in univariate analysis of variance. This paper extends them to the multivariate case, in which the effects of independent variables may be examined in relation to two or more dependent variables, and…
29 CFR 1904.38 - Variances from the recordkeeping rule.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 5 2011-07-01 2011-07-01 false Variances from the recordkeeping rule. 1904.38 Section 1904.38 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and...
29 CFR 1904.38 - Variances from the recordkeeping rule.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 29 Labor 5 2013-07-01 2013-07-01 false Variances from the recordkeeping rule. 1904.38 Section 1904.38 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and...
29 CFR 1904.38 - Variances from the recordkeeping rule.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 29 Labor 5 2014-07-01 2014-07-01 false Variances from the recordkeeping rule. 1904.38 Section 1904.38 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and...
29 CFR 1904.38 - Variances from the recordkeeping rule.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 29 Labor 5 2012-07-01 2012-07-01 false Variances from the recordkeeping rule. 1904.38 Section 1904.38 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and...
The Variance of Intraclass Correlations in Three and Four Level
ERIC Educational Resources Information Center
Hedges, Larry V.; Hedberg, Eric C.; Kuyper, Arend M.
2012-01-01
Intraclass correlations are used to summarize the variance decomposition in populations with multilevel hierarchical structure. There has recently been considerable interest in estimating intraclass correlations from surveys or designed experiments to provide design parameters for planning future large-scale randomized experiments. The large…
Analysis of Variance: What Is Your Statistical Software Actually Doing?
ERIC Educational Resources Information Center
Li, Jian; Lomax, Richard G.
2011-01-01
Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…
44 CFR 60.6 - Variances and exceptions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... environmental document will be prepared, will be made in accordance with the procedures set out in 44 CFR part... HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program CRITERIA FOR LAND MANAGEMENT AND USE Requirements for Flood Plain Management Regulations § 60.6 Variances and exceptions....
76 FR 78698 - Proposed Revocation of Permanent Variances
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-19
... several conditions that served as an alternative means of compliance to the falling-object-protection and... specified by these variances. Therefore, OSHA believes the alternative means of compliance granted by the.... 651, 655) in 1971 (see 36 FR 7340). Paragraphs (a)(4) and (a)(5) of Sec. 1926.451 required...
Numbers Of Degrees Of Freedom Of Allan-Variance Estimators
NASA Technical Reports Server (NTRS)
Greenhall, Charles A.
1992-01-01
Report discusses formulas for estimation of Allan variances. Presents algorithms for closed-form approximations of numbers of degrees of freedom characterizing results obtained when various estimators applied to five power-law components of classical mathematical model of clock noise.
Partitioning the Variance in Scores on Classroom Environment Instruments
ERIC Educational Resources Information Center
Dorman, Jeffrey P.
2009-01-01
This paper reports the partitioning of variance in scale scores from the use of three classroom environment instruments. Data sets from the administration of the What Is Happening In this Class (WIHIC) to 4,146 students, the Questionnaire on Teacher Interaction (QTI) to 2,167 students and the Catholic School Classroom Environment Questionnaire…
21 CFR 821.2 - Exemptions and variances.
Code of Federal Regulations, 2014 CFR
2014-04-01
... and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE TRACKING REQUIREMENTS General Provisions § 821.2 Exemptions and variances. (a) A...(s) of the device; (2) The reasons that compliance with the tracking requirements of this part...
21 CFR 821.2 - Exemptions and variances.
Code of Federal Regulations, 2013 CFR
2013-04-01
... and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE TRACKING REQUIREMENTS General Provisions § 821.2 Exemptions and variances. (a) A...(s) of the device; (2) The reasons that compliance with the tracking requirements of this part...
21 CFR 821.2 - Exemptions and variances.
Code of Federal Regulations, 2010 CFR
2010-04-01
... and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE TRACKING REQUIREMENTS General Provisions § 821.2 Exemptions and variances. (a) A...(s) of the device; (2) The reasons that compliance with the tracking requirements of this part...
21 CFR 821.2 - Exemptions and variances.
Code of Federal Regulations, 2012 CFR
2012-04-01
... and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE TRACKING REQUIREMENTS General Provisions § 821.2 Exemptions and variances. (a) A...(s) of the device; (2) The reasons that compliance with the tracking requirements of this part...
40 CFR 142.42 - Consideration of a variance request.
Code of Federal Regulations, 2011 CFR
2011-07-01
... water source, the Administrator shall consider such factors as the following: (1) The availability and... economic considerations such as implementing treatment, improving the quality of the source water or using an alternate source. (c) A variance may be issued to a public water system on the condition that...
How does variance in fertility change over the demographic transition?
Hruschka, Daniel J; Burger, Oskar
2016-04-19
Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45-49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. PMID:27022082
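The Poisson benchmark the authors use can be sketched as follows (invented counts, not their survey data): under a pure Poisson fertility process the variance equals the mean, so an index of dispersion near 1 leaves little observed variance attributable to stable individual differences.

```python
def dispersion(children_per_woman):
    """Mean, population variance, and index of dispersion (var/mean)
    for completed-fertility counts. Index ~1 is consistent with Poisson."""
    n = len(children_per_woman)
    mean = sum(children_per_woman) / n
    var = sum((x - mean) ** 2 for x in children_per_woman) / n
    return mean, var, var / mean

# Toy completed-fertility counts for eight women.
mean, var, idx = dispersion([0, 1, 2, 2, 3, 3, 4, 5])
print(mean, var, idx)
```

An index well above 1 (overdispersion) would instead signal variance beyond the Poisson benchmark, the kind of excess that individual differences or reproductive skew could explain.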
Dominance, Information, and Hierarchical Scaling of Variance Space.
ERIC Educational Resources Information Center
Ceurvorst, Robert W.; Krus, David J.
1979-01-01
A method for computation of dominance relations and for construction of their corresponding hierarchical structures is presented. The link between dominance and variance allows integration of the mathematical theory of information with least squares statistical procedures without recourse to logarithmic transformations of the data. (Author/CTM)
Temporal Relation Extraction in Outcome Variances of Clinical Pathways.
Yamashita, Takanori; Wakata, Yoshifumi; Hamai, Satoshi; Nakashima, Yasuharu; Iwamoto, Yukihide; Franagan, Brendan; Nakashima, Naoki; Hirokawa, Sachio
2015-01-01
Recently, the clinical pathway has progressed with digitization and the analysis of activity. There are many previous studies of the clinical pathway, but few feed directly into medical practice. We constructed a mind map system that applies a spanning tree. This system can visualize temporal relations in outcome variances and indicate outcomes that affect long-term hospitalization. PMID:26262376
[ECoG classification based on wavelet variance].
Yan, Shiyu; Liu, Chong; Wang, Hong; Zhao, Haibin
2013-06-01
For a typical electrocorticogram (ECoG)-based brain-computer interface (BCI) system in which the subject's task is to imagine movements of either the left small finger or the tongue, we proposed a feature extraction algorithm using wavelet variance. First, the definition and significance of wavelet variance were presented, building on a discussion of the wavelet transform, and wavelet variance was adopted as the feature. Six channels with the most distinctive features were selected from 64 channels for analysis. The ECoG data were then decomposed using the db4 wavelet. The variances of the wavelet coefficients containing the Mu rhythm and Beta rhythm were extracted as features based on the ERD/ERS phenomenon. The features were classified linearly with a cross-validation algorithm. The results of off-line analysis showed that high classification accuracies of 90.24% and 93.77% were achieved for the training and test data sets. Wavelet variance is simple and effective and is suitable for feature extraction in BCI research. PMID:23865300
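A dependency-free sketch of the feature described above: the variance of the detail coefficients at each level of a discrete wavelet decomposition. A Haar filter stands in for the paper's db4 wavelet to keep the example self-contained; channel selection, rhythm-band targeting, and the linear classifier are omitted.

```python
def haar_wavelet_variances(signal, levels):
    """One variance feature per decomposition level, computed from the
    Haar detail coefficients (pairwise differences scaled by 1/sqrt(2))."""
    feats, approx = [], list(signal)
    for _ in range(levels):
        detail = [(approx[2 * i] - approx[2 * i + 1]) / 2 ** 0.5
                  for i in range(len(approx) // 2)]
        approx = [(approx[2 * i] + approx[2 * i + 1]) / 2 ** 0.5
                  for i in range(len(approx) // 2)]
        m = sum(detail) / len(detail)
        feats.append(sum((d - m) ** 2 for d in detail) / len(detail))
    return feats
```

In practice one would use a wavelet library for the db4 decomposition and feed the per-level variances of the selected channels into a linear classifier, as the abstract describes.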
Challenges and opportunities in variance component estimation for animal breeding
Technology Transfer Automated Retrieval System (TEKTRAN)
There have been many advances in variance component estimation (VCE), both in theory and in software, since Dr. Henderson introduced Henderson’s Methods 1, 2, and 3 in 1953. However, many challenges in modern animal breeding are not addressed adequately by current algorithms and software. Examples i...
Variance in Math Achievement Attributable to Visual Cognitive Constructs
ERIC Educational Resources Information Center
Oehlert, Jeremy J.
2012-01-01
Previous research has reported positive correlations between math achievement and the cognitive constructs of spatial visualization, working memory, and general intelligence; however, no single study has assessed variance in math achievement attributable to all three constructs, examined in combination. The current study fills this gap in the…
Astronomy Outreach for Large and Unique Audiences
NASA Astrophysics Data System (ADS)
Lubowich, D.; Sparks, R. T.; Pompea, S. M.; Kendall, J. S.; Dugan, C.
2013-04-01
In this session, we discuss different approaches to reaching large audiences. In addition to star parties and astronomy events, the audiences for some of the events include music concerts or festivals, sick children and their families, minority communities, American Indian reservations, and tourist sites such as the National Mall. The goal is to bring science directly to the public—to people who attend astronomy events and to people who do not come to star parties, science museums, or science festivals. These programs allow the entire community to participate in astronomy activities to enhance the public appreciation of science. These programs attract large enthusiastic crowds often with young children participating in these family learning experiences. The public will become more informed, educated, and inspired about astronomy and will also be provided with information that will allow them to continue to learn after this outreach activity. Large and unique audiences often have common problems, and their solutions and the lessons learned will be presented. Interaction with the participants in this session will provide important community feedback used to improve astronomy outreach for large and unique audiences. New ways to expand astronomy outreach to new large audiences will be discussed.
Shenk, T.M.; White, Gary C.; Burnham, K.P.
1998-01-01
Monte Carlo simulations were conducted to evaluate the robustness of four tests to detect density dependence, from series of population abundances, to the addition of sampling variance. Population abundances were generated from random walk, stochastic exponential growth, and density-dependent population models. Population abundance estimates were generated with sampling variances distributed as lognormal and constant coefficients of variation (cv) from 0.00 to 1.00. In general, when data were generated under a random walk, Type I error rates increased rapidly for Bulmer's R, Pollard et al.'s, and Dennis and Taper's tests with increasing magnitude of sampling variance for n > 5 yr and all values of process variation. Bulmer's R* test maintained a constant 5% Type I error rate for n > 5 yr and all magnitudes of sampling variance in the population abundance estimates. When abundances were generated from two stochastic exponential growth models (R = 0.05 and R = 0.10), Type I errors again increased with increasing sampling variance; the magnitude of Type I error rates was higher for the slower growing population. Therefore, sampling error inflated Type I error rates, invalidating all of the tests except Bulmer's R* test. Comparable simulations for abundance estimates generated from a density-dependent growth rate model were conducted to estimate power of the tests. Type II error rates were influenced by the relationship of initial population size to carrying capacity (K), length of time series, as well as sampling error. Given the inflated Type I error rates for all but Bulmer's R*, power was overestimated for the remaining tests, resulting in density dependence being detected more often than it existed. Population abundances of natural populations are almost exclusively estimated rather than censused, assuring sampling error. Therefore, because these tests have been shown to be either invalid when only sampling variance occurs in the population abundances (Bulmer's R
Variance in the reproductive success of dominant male mountain gorillas.
Robbins, Andrew M; Gray, Maryke; Uwingeli, Prosper; Mburanumwe, Innocent; Kagoda, Edwin; Robbins, Martha M
2014-10-01
Using 30 years of demographic data from 15 groups, this study estimates how harem size, female fertility, and offspring survival may contribute to variance in the siring rates of dominant male mountain gorillas throughout the Virunga Volcano Region. As predicted for polygynous species, differences in harem size were the greatest source of variance in the siring rate, whereas differences in female fertility and offspring survival were relatively minor. Harem size was positively correlated with offspring survival, even after removing all known and suspected cases of infanticide, so the correlation does not seem to reflect differences in the ability of males to protect their offspring. Harem size was not significantly correlated with female fertility, which is consistent with the hypothesis that mountain gorillas have minimal feeding competition. Harem size, offspring survival, and siring rates were not significantly correlated with the proportion of dominant tenures that occurred in multimale groups versus one-male groups, even though infanticide is less likely when those tenures end in multimale groups than in one-male groups. In contrast with the relatively small contribution of offspring survival to variance in the siring rates of this study, offspring survival is a major source of variance in the male reproductive success of western gorillas, which have greater predation risks and significantly higher rates of infanticide. If differences in offspring protection are less important among male mountain gorillas than western gorillas, then the relative importance of other factors may be greater for mountain gorillas. Thus, our study illustrates how variance in male reproductive success and its components can differ between closely related species. PMID:24818867
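The variance partitioning described above can be illustrated with a small, entirely hypothetical simulation: for a multiplicative siring rate (harem size × female fertility × offspring survival), the variance on the log scale decomposes exactly into component variances plus covariance terms. The parameter values below are invented for illustration and do not come from the gorilla data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical per-male data; harem size is given the largest spread.
harem = rng.lognormal(1.0, 0.6, 500)
fertility = rng.lognormal(-1.0, 0.15, 500)
survival = rng.lognormal(-0.2, 0.1, 500)
rate = harem * fertility * survival

# Var(log rate) = sum over the covariance matrix of the log components.
comps = np.log(np.column_stack([harem, fertility, survival]))
cov = np.cov(comps, rowvar=False)
total = cov.sum()
shares = cov.sum(axis=0) / total  # each component's share of the variance
```

Because the components are simulated independently here, the covariance terms are near zero and the harem-size column dominates the decomposition, echoing the abstract's finding.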
Gravity Wave Variances and Propagation Derived from AIRS Radiances
NASA Technical Reports Server (NTRS)
Gong, Jie; Wu, Dong L.; Eckermann, S. D.
2012-01-01
As the first gravity wave (GW) climatology study using nadir-viewing infrared sounders, 50 Atmospheric Infrared Sounder (AIRS) radiance channels are selected to estimate GW variances at pressure levels between 2 and 100 hPa. The GW variance for each scan in the cross-track direction is derived from radiance perturbations in the scan, independently of adjacent scans along the orbit. Since the scanning swaths are perpendicular to the satellite orbits, which are inclined meridionally at most latitudes, the zonal component of GW propagation can be inferred by differencing the variances derived at the westernmost and easternmost viewing angles. Consistent with previous GW studies using various satellite instruments, monthly mean AIRS variance shows large enhancements over meridionally oriented mountain ranges, as well as over some islands, at winter-hemisphere high latitudes. Enhanced wave activity is also found above tropical deep convective regions. GWs preferentially propagate westward above mountain ranges and eastward above deep convection. The 90 AIRS fields of view (FOVs), ranging from +48 deg. to -48 deg. off nadir, can detect large-amplitude GWs whose phase velocities propagate preferentially at steep angles (e.g., those from orographic and convective sources). The annual cycle dominates the GW variances and the preferred propagation directions at all latitudes. An indication of a weak two-year variation in the tropics is found, presumably related to the quasi-biennial oscillation (QBO). AIRS geometry makes its out-tracks capable of detecting GWs with vertical wavelengths substantially shorter than the thickness of the instrument weighting functions. This newly demonstrated capability of AIRS to observe shallow inertia GWs will expand the potential of satellite GW remote sensing and provide further constraints on GW drag parameterization schemes in general circulation models (GCMs).
White matter morphometric changes uniquely predict children's reading acquisition.
Myers, Chelsea A; Vandermosten, Maaike; Farris, Emily A; Hancock, Roeland; Gimenez, Paul; Black, Jessica M; Casto, Brandi; Drahos, Miroslav; Tumber, Mandeep; Hendren, Robert L; Hulme, Charles; Hoeft, Fumiko
2014-10-01
This study examined whether variations in brain development between kindergarten and Grade 3 predicted individual differences in reading ability at Grade 3. Structural MRI measurements indicated that increases in the volume of two left temporo-parietal white matter clusters are unique predictors of reading outcomes above and beyond family history, socioeconomic status, and cognitive and preliteracy measures at baseline. Using diffusion MRI, we identified the left arcuate fasciculus and superior corona radiata as key fibers within the two clusters. Bias-free regression analyses using regions of interest from prior literature revealed that volume changes in temporo-parietal white matter, together with preliteracy measures, predicted 56% of the variance in reading outcomes. Our findings demonstrate the important contribution of developmental differences in areas of left dorsal white matter, often implicated in phonological processing, as a sensitive early biomarker for later reading abilities, and by extension, reading difficulties. PMID:25212581
A unique element resembling a processed pseudogene.
Robins, A J; Wang, S W; Smith, T F; Wells, J R
1986-01-01
We describe a unique DNA element with structural features of a processed pseudogene but with important differences. It is located within an 8.4-kilobase pair region of chicken DNA containing five histone genes, but it is not related to these genes. The presence of terminal repeats, an open reading frame (and stop codon), polyadenylation/processing signal, and a poly(A) rich region about 20 bases 3' to this, together with a lack of 5' promoter motifs all suggest a processed pseudogene. However, no parent gene can be detected in the genome by Southern blotting experiments and, in addition, codon boundary values and mid-base correlations are not consistent with a protein coding region of a eukaryotic gene. The element was detected in DNA from different chickens and in peafowl, but not in quail, pheasant, or turkey. PMID:3941070
NASA Astrophysics Data System (ADS)
Bielewicz, P.; Wandelt, B. D.; Banday, A. J.
2013-02-01
We present a method for the computation of the variance of cosmic microwave background (CMB) temperature maps on azimuthally symmetric patches using a fast convolution approach. As an example of the application of the method, we show results for the search for concentric rings with unusual variance in the 7-year Wilkinson Microwave Anisotropy Probe (WMAP) data. We re-analyse claims concerning the unusual variance profile of rings centred at two locations on the sky that have recently drawn special attention in the context of the conformal cyclic cosmology scenario proposed by Penrose. We extend this analysis to rings with larger radii and centred on other points of the sky. Using the fast convolution technique enables us to perform this search with higher resolution and a wider range of radii than in previous studies. We show that for one of the two special points rings with radii larger than 10° have systematically lower variance in comparison to the concordance Λ cold dark matter model predictions. However, we show that this deviation is caused by the multipoles up to order ℓ = 7. Therefore, the deficit of power for concentric rings with larger radii is yet another manifestation of the well-known anomalous CMB distribution on large angular scales. Furthermore, low-variance rings can be easily found centred on other points in the sky. In addition, we show also the results of a search for extremely high-variance rings. As for the low-variance rings, some anomalies seem to be related to the anomalous distribution of the low-order multipoles of the WMAP CMB maps. As such our results are not consistent with the conformal cyclic cosmology scenario.
On the measurement of frequency and of its sample variance with high-resolution counters
Rubiola, Enrico
2005-05-15
A frequency counter measures the input frequency ν averaged over a suitable time τ, versus the reference clock. High resolution is achieved by interpolating the clock signal. Further increased resolution is obtained by averaging multiple, highly overlapped frequency measurements. In the presence of additive white noise or white phase noise, the squared uncertainty improves from σ_ν² ∝ 1/τ² to σ_ν² ∝ 1/τ³. Surprisingly, when a file of contiguous data is fed into the formula of the two-sample (Allan) variance σ_y²(τ) = E{(1/2)(y_{k+1} − y_k)²} of the fractional frequency fluctuation y, the result is the modified Allan variance mod σ_y²(τ). But if a sufficient number of contiguous measures are averaged in order to obtain a longer τ and the data are fed into the same formula, the result is the (nonmodified) Allan variance. Of course, interpretation mistakes are around the corner if the counter's internal process is not well understood. The typical domain of interest is the short-term stability measurement of oscillators.
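The distinction the abstract draws between pre-averaged and overlapped data fed into the two-sample formula can be explored with a minimal sketch (plain estimator only; the counter-interpolation details are beyond this snippet):

```python
import numpy as np

def allan_variance(y):
    """Two-sample (Allan) variance of fractional frequency data y:
    sigma_y^2(tau) = (1/2) E[(y_{k+1} - y_k)^2]."""
    d = np.diff(np.asarray(y, dtype=float))
    return 0.5 * np.mean(d ** 2)

def average_contiguous(y, m):
    """Average m contiguous frequency measures, emulating the longer
    gate time tau' = m * tau described in the abstract."""
    y = np.asarray(y, dtype=float)
    n = (len(y) // m) * m
    return y[:n].reshape(-1, m).mean(axis=1)
```

For white frequency noise, averaging m contiguous measures scales the Allan variance by 1/m; feeding truly overlapped counter readings into the same formula is what yields the modified Allan variance instead of the plain one.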
The Efficiency of Split Panel Designs in an Analysis of Variance Model
Liu, Xin; Wang, Wei-Guo; Liu, Hai-Jun
2016-01-01
We consider split panel design efficiency in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in all samples, to minimize the variances of the best linear unbiased estimators of linear parameter combinations. An orthogonal matrix is constructed to obtain a manageable expression for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of the interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from the split panel can be quite substantial. We further consider the efficiency of the split panel design given a budget, and transform the problem into a constrained nonlinear integer program. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer program. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985 in the Netherlands, and efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447
Houle, D.; Hughes, K. A.; Hoffmaster, D. K.; Ihara, J.; Assimacopoulos, S.; Canada, D.; Charlesworth, B.
1994-01-01
We have accumulated spontaneous mutations in the absence of natural selection in Drosophila melanogaster by backcrossing 200 heterozygous replicates of a single high fitness second chromosome to a balancer stock for 44 generations. At generations 33 and 44 of accumulation, we extracted samples of chromosomes and assayed their homozygous performance for female fecundity early and late in adult life, male and female longevity, male mating ability early and late in adult life, productivity (a measure of fecundity times viability) and body weight. The variance among lines increased significantly for all traits except male mating ability and weight. The rate of increase in variance was similar to that found in previous studies of egg-to-adult viability, when calculated relative to trait means. The mutational correlations among traits were all strongly positive. Many correlations were significantly different from 0, while none was significantly different from 1. These data suggest that the mutation-accumulation hypothesis is not a sufficient explanation for the evolution of senescence in D. melanogaster. Mutation-selection balance does seem adequate to explain a substantial proportion of the additive genetic variance for fecundity and longevity. PMID:7851773
Modeling Scalar variance from Direct Numerical Simulations of a turbulent mixing layer
NASA Astrophysics Data System (ADS)
Ravinel, Baptiste; Blanquart, Guillaume
2010-11-01
Many studies have focused on analyzing and predicting the mixing of a scalar such as fuel concentration in turbulent flows. However, the subfilter scalar variance in Large Eddy Simulations (LES) still requires additional considerations. The present work aims at obtaining results for the turbulent mixture of a scalar in configurations relevant to reactive flows, i.e. in the presence of mean velocity/scalar gradients. A Direct Numerical Simulation (DNS) of a turbulent mixing layer has been performed by initially combining two boundary layers. The high order conservative finite difference low Mach number NGA code was used together with the BQuick scheme for the transport of mixture fraction. The self-similar nature of the flow and energy spectra have been considered to analyze the turbulent flow field. High order velocity schemes (4th order) were found to play an important role in capturing accurately the mixing of fuel and air. The scalar variance has been calculated by filtering the solution and has been compared to various models usually used in LES. Following an earlier study by Balarac et al. [Phys. Fluids 20 (2008)], the concept of optimal estimators has been considered to identify the set of parameters most suitable to express the subfilter variance. Finally, the quality of the standard dynamic approach has been assessed.
A Simple, General Result for the Variance of Substitution Number in Molecular Evolution
Houchmandzadeh, Bahram; Vallade, Marcel
2016-01-01
The number of substitutions (of nucleotides, amino acids, etc.) that take place during the evolution of a sequence is a stochastic variable of fundamental importance in the field of molecular evolution. Although the mean number of substitutions during molecular evolution of a sequence can be estimated for a given substitution model, no simple solution exists for the variance of this random variable. We show in this article that the computation of the variance is as simple as that of the mean number of substitutions for both short and long times. Apart from its fundamental importance, this result can be used to investigate the dispersion index R, that is, the ratio of the variance to the mean substitution number, which is of prime importance in the neutral theory of molecular evolution. By investigating large classes of substitution models, we demonstrate that although R≥1, to obtain R significantly larger than unity necessitates in general additional hypotheses on the structure of the substitution model. PMID:27189545
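The dispersion index R can be probed with a toy simulation. The gamma mixing below is one illustrative structural hypothesis (lineage-to-lineage rate heterogeneity), not one of the substitution models analyzed in the article:

```python
import numpy as np

rng = np.random.default_rng(1)

def dispersion_index(counts):
    """Dispersion index R = variance / mean of substitution counts."""
    counts = np.asarray(counts)
    return counts.var(ddof=1) / counts.mean()

# A homogeneous Poisson substitution process attains the bound R = 1.
poisson_subs = rng.poisson(lam=8.0, size=200_000)

# Rate heterogeneity across lineages (a gamma-mixed Poisson) is one
# extra hypothesis that pushes R well above 1: here mean rate 8 with
# variance 16, so R should be about (8 + 16) / 8 = 3.
rates = rng.gamma(shape=4.0, scale=2.0, size=200_000)
mixed_subs = rng.poisson(rates)
```

This matches the abstract's point: R ≥ 1 comes cheaply, but R substantially larger than unity generally requires additional structure in the model.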
Non-negative least-squares variance component estimation with application to GPS time series
NASA Astrophysics Data System (ADS)
Amiri-Simkooei, A. R.
2016-05-01
Negative variance component estimates can occur in many geodetic applications. This problem can be avoided if non-negativity constraints on the variance components (VCs) are introduced into the stochastic model. Based on the standard non-negative least-squares (NNLS) theory, this contribution presents the method of non-negative least-squares variance component estimation (NNLS-VCE). The method is easy to understand, simple to implement, and efficient in practice. The NNLS-VCE is then applied to the coordinate time series of permanent GPS stations to simultaneously estimate the amplitudes of different noise components such as white noise, flicker noise, and random walk noise. If a noise model is unlikely to be present, its amplitude is automatically estimated to be zero. The results obtained from 350 GPS permanent stations indicate that the noise characteristics of the GPS time series are well described by a combination of white noise and flicker noise; all time series contain positive noise amplitudes for white and flicker noise. In addition, around two-thirds of the series contain random walk noise, with (small) average amplitudes of 0.16, 0.13, and 0.45 mm/yr^{1/2} for the north, east, and up components, respectively. Also, about half of the positive estimated amplitudes of random walk noise are statistically significant, indicating that one-third of the total time series have significant random walk noise.
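A minimal sketch of the NNLS-VCE idea, assuming SciPy's `nnls` is available: vectorize the model E[rrᵀ] = σ²_w·Q_white + σ²_rw·Q_rw and solve for the components under non-negativity. Flicker noise is omitted and the cofactor matrices are simplified; this does not reproduce the paper's actual estimator:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)

n, reps = 100, 500
Q_white = np.eye(n)                              # white-noise cofactor matrix
i = np.arange(1, n + 1)
Q_rw = np.minimum.outer(i, i).astype(float)      # random-walk cofactor matrix

true = np.array([2.0, 0.05])                     # variances: white, random walk
L = np.linalg.cholesky(true[0] * Q_white + true[1] * Q_rw)

# Empirical covariance averaged over simulated residual series.
C_emp = np.zeros((n, n))
for _ in range(reps):
    r = L @ rng.normal(size=n)
    C_emp += np.outer(r, r) / reps

# Vectorize and solve the non-negative least-squares problem.
A = np.column_stack([Q_white.ravel(), Q_rw.ravel()])
sigma2, _ = nnls(A, C_emp.ravel())
```

Because of the non-negativity constraint, an absent noise component comes back with amplitude exactly zero rather than a negative estimate.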
Courtiol, Alexandre; Rickard, Ian J.; Lummaa, Virpi; Prentice, Andrew M.; Fulford, Anthony J.C.; Stearns, Stephen C.
2013-01-01
Recent human history is marked by demographic transitions characterized by declines in mortality and fertility [1]. By influencing the variance in those fitness components, demographic transitions can affect selection on other traits [2]. Parallel to changes in selection triggered by demography per se, relationships between fitness and anthropometric traits are also expected to change due to modification of the environment. Here we explore for the first time these two main evolutionary consequences of demographic transitions using a unique data set containing survival, fertility, and anthropometric data for thousands of women in rural Gambia from 1956–2010 [3]. We show how the demographic transition influenced directional selection on height and body mass index (BMI). We observed a change in selection for both traits mediated by variation in fertility: selection initially favored short females with high BMI values but shifted across the demographic transition to favor tall females with low BMI values. We demonstrate that these differences resulted both from changes in fitness variance that shape the strength of selection and from shifts in selective pressures triggered by environmental changes. These results suggest that demographic and environmental trends encountered by current human populations worldwide are likely to modify, but not stop, natural selection in humans. PMID:23623548
Shared genetic variance between obesity and white matter integrity in Mexican Americans
Spieker, Elena A.; Kochunov, Peter; Rowland, Laura M.; Sprooten, Emma; Winkler, Anderson M.; Olvera, Rene L.; Almasy, Laura; Duggirala, Ravi; Fox, Peter T.; Blangero, John; Glahn, David C.; Curran, Joanne E.
2015-01-01
Obesity is a chronic metabolic disorder that may also lead to reduced white matter integrity, potentially due to shared genetic risk factors. Genetic correlation analyses were conducted in a large cohort of Mexican American families in San Antonio (N = 761, 58% females, ages 18–81 years; 41.3 ± 14.5) from the Genetics of Brain Structure and Function Study. Shared genetic variance was calculated between measures of adiposity [body mass index (BMI; kg/m2) and waist circumference (WC; in)] and whole-brain and regional measurements of cerebral white matter integrity (fractional anisotropy). Whole-brain average and regional fractional anisotropy values for 10 major white matter tracts were calculated from high angular resolution diffusion tensor imaging data (DTI; 1.7 × 1.7 × 3 mm; 55 directions). Additive genetic factors explained intersubject variance in BMI (heritability, h2 = 0.58), WC (h2 = 0.57), and FA (h2 = 0.49). FA shared significant portions of genetic variance with BMI in the genu (ρG = −0.25), body (ρG = −0.30), and splenium (ρG = −0.26) of the corpus callosum, internal capsule (ρG = −0.29), and thalamic radiation (ρG = −0.31) (all p's = 0.043). The strongest evidence of shared variance was between BMI/WC and FA in the superior fronto-occipital fasciculus (ρG = −0.39, p = 0.020; ρG = −0.39, p = 0.030), which highlights region-specific variation in neural correlates of obesity. This may suggest that increases in obesity and reduced white matter integrity share common genetic risk factors. PMID:25763009
The Milieu Intérieur study - an integrative approach for study of human immunological variance.
Thomas, Stéphanie; Rouilly, Vincent; Patin, Etienne; Alanio, Cécile; Dubois, Annick; Delval, Cécile; Marquier, Louis-Guillaume; Fauchoux, Nicolas; Sayegrih, Seloua; Vray, Muriel; Duffy, Darragh; Quintana-Murci, Lluis; Albert, Matthew L
2015-04-01
The Milieu Intérieur Consortium has established a 1000-person healthy population-based study (stratified according to sex and age), creating an unparalleled opportunity for assessing the determinants of human immunologic variance. Herein, we define the criteria utilized for participant enrollment, and highlight the key data that were collected for correlative studies. In this report, we analyzed biological correlates of sex, age, smoking-habits, metabolic score and CMV infection. We characterized and identified unique risk factors among healthy donors, as compared to studies that have focused on the general population or disease cohorts. Finally, we highlight sex-bias in the thresholds used for metabolic score determination and recommend a deeper examination of current guidelines. In sum, our clinical design, standardized sample collection strategies, and epidemiological data analyses have established the foundation for defining variability within human immune responses. PMID:25562703
Unique side effects of interferon.
Aslam, Hina; Qadeer, Rashid; Kashif, Syed Mohammad; Rehan, Muhammad; Afsar, Salahuddin
2015-08-01
Interferon-alpha, a potent mediator of host immune response, has immunomodulatory properties in addition to its antiviral effects. A wide spectrum of autoimmune diseases can occur in patients treated with interferon-alpha for chronic hepatitis B and D, of which clinical systemic lupus erythematosus (SLE) accounts for less than 1% and hypothyroidism for 2-4 %. We report herein a case of a 16-year-old male who developed antinuclear antibody (ANA)-negative SLE and hypothyroidism after treatment with interferon-alpha for chronic hepatitis. High index of suspicion is therefore necessary in all patients treated with interferon for early diagnosis and treatment. PMID:26228341
Manufacturing unique glasses in space
NASA Technical Reports Server (NTRS)
Happe, R. P.
1976-01-01
An air suspension melting technique is described for making glasses from substances which to date have been observed only in the crystalline state. A laminar-flow vertical wind tunnel was constructed for suspending oxide specimens, which were melted using the energy from a carbon dioxide laser beam. By this method it is possible to melt many high-melting-point materials without interaction between the melt and a crucible. In addition, space melting permits cooling regimes that suppress crystal growth. If sufficient undercooling is accompanied by a sufficient increase in viscosity, crystallization will be avoided entirely and a glass will result.
40 CFR 142.22 - Review of State variances, exemptions and schedules.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Review of State variances, exemptions... State-Issued Variances and Exemptions § 142.22 Review of State variances, exemptions and schedules. (a... regulations the Administrator shall complete a comprehensive review of the variances and exemptions...
29 CFR 4204.21 - Requests to PBGC for variances and exemptions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 9 2010-07-01 2010-07-01 false Requests to PBGC for variances and exemptions. 4204.21... WITHDRAWAL LIABILITY FOR MULTIEMPLOYER PLANS VARIANCES FOR SALE OF ASSETS Procedures for Individual and Class Variances or Exemptions § 4204.21 Requests to PBGC for variances and exemptions. (a) Filing of...
40 CFR 142.21 - State consideration of a variance or exemption request.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false State consideration of a variance or... State-Issued Variances and Exemptions § 142.21 State consideration of a variance or exemption request. A State with primary enforcement responsibility shall act on any variance or exemption request...
29 CFR 4204.11 - Variance of the bond/escrow and sale-contract requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 9 2010-07-01 2010-07-01 false Variance of the bond/escrow and sale-contract requirements... CORPORATION WITHDRAWAL LIABILITY FOR MULTIEMPLOYER PLANS VARIANCES FOR SALE OF ASSETS Variance of the Statutory Requirements § 4204.11 Variance of the bond/escrow and sale-contract requirements. (a)...
Vrshek-Schallhorn, Suzanne; Stroud, Catherine B; Mineka, Susan; Hammen, Constance; Zinbarg, Richard E; Wolitzky-Taylor, Kate; Craske, Michelle G
2015-11-01
Few studies comprehensively evaluate which types of life stress are most strongly associated with depressive episode onsets, over and above other forms of stress, and comparisons between acute and chronic stress are particularly lacking. Past research implicates major (moderate to severe) stressful life events (SLEs), and to a lesser extent, interpersonal forms of stress; research conflicts on whether dependent or independent SLEs are more potent, but theory favors dependent SLEs. The present study used 5 years of annual diagnostic and life stress interviews of chronic stress and SLEs from 2 separate samples (Sample 1 N = 432; Sample 2 N = 146) transitioning into emerging adulthood; 1 sample also collected early adversity interviews. Multivariate analyses simultaneously examined multiple forms of life stress to test hypotheses that all major SLEs, then particularly interpersonal forms of stress, and then dependent SLEs would contribute unique variance to major depressive episode (MDE) onsets. Person-month survival analysis consistently implicated chronic interpersonal stress and major interpersonal SLEs as statistically unique predictors of risk for MDE onset. In addition, follow-up analyses demonstrated temporal precedence for chronic stress; tested differences by gender; showed that recent chronic stress mediates the relationship between adolescent adversity and later MDE onsets; and revealed interactions of several forms of stress with socioeconomic status (SES). Specifically, as SES declined, there was an increasing role for noninterpersonal chronic stress and noninterpersonal major SLEs, coupled with a decreasing role for interpersonal chronic stress. Implications for future etiological research were discussed. PMID:26301973
Putka, Dan J; Hoffman, Brian J
2013-01-01
Though considerable research has evaluated the functioning of assessment center (AC) ratings, surprisingly little research has articulated and uniquely estimated the components of reliable and unreliable variance that underlie such ratings. The current study highlights limitations of existing research for estimating components of reliable and unreliable variance in AC ratings. It provides a comprehensive empirical decomposition of variance in AC ratings that: (a) explicitly accounts for assessee-, dimension-, exercise-, and assessor-related effects, (b) does so with 3 large sets of operational data from a multiyear AC program, and (c) avoids many analytic limitations and confounds that have plagued the AC literature to date. In doing so, results show that (a) the extant AC literature has masked the contribution of sizable, substantively meaningful sources of variance in AC ratings, (b) various forms of assessor bias largely appear trivial, and (c) there is far more systematic, nuanced variance present in AC ratings than previous research indicates. Furthermore, this study also illustrates how the composition of reliable and unreliable variance heavily depends on the level to which assessor ratings are aggregated (e.g., overall AC-level, dimension-level, exercise-level) and the generalizations one desires to make based on those ratings. The implications of this study for future AC research and practice are discussed. PMID:23244226
Unique interactive projection display screen
Veligdan, J.T.
1997-11-01
Projection systems continue to be the best method to produce large (1 meter and larger) displays. However, in order to produce a large display, considerable volume is typically required. The Polyplanar Optic Display (POD) is a novel type of projection display screen, which for the first time, makes it possible to produce a large projection system that is self-contained and only inches thick. In addition, this display screen is matte black in appearance allowing it to be used in high ambient light conditions. This screen is also interactive and can be remotely controlled via an infrared optical pointer resulting in mouse-like control of the display. Furthermore, this display need not be flat since it can be made curved to wrap around a viewer as well as being flexible.
Variance reduction methods applied to deep-penetration problems
Cramer, S.N.
1984-01-01
All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course.
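The flavor of these methods can be conveyed with a minimal Python sketch (not taken from the course itself): implicit capture, a standard variance reduction technique, replaces analog absorption with a statistical weight so that more histories contribute to a deep-penetration tally. The slab thickness, absorption probability, and roulette threshold below are all illustrative values.

```python
import random
import statistics

L = 5.0      # slab thickness in mean free paths (illustrative)
P_ABS = 0.3  # absorption probability per collision (illustrative)

def transmit(rng, implicit):
    """One photon history through a 1-D scattering slab; returns the
    weight scored at the far face (0.0 if the history is lost)."""
    x, mu, w = 0.0, 1.0, 1.0            # position, direction cosine, weight
    while True:
        x += mu * rng.expovariate(1.0)  # sample a free flight
        if x >= L:
            return w                    # transmitted through the far face
        if x < 0.0:
            return 0.0                  # escaped backwards
        if implicit:
            w *= 1.0 - P_ABS            # carry survival probability as weight
            if w < 1e-3:                # Russian roulette bounds the history
                if rng.random() < 0.5:
                    return 0.0
                w *= 2.0
        elif rng.random() < P_ABS:
            return 0.0                  # analog absorption kills the history
        mu = rng.uniform(-1.0, 1.0)     # isotropic re-scatter

rng = random.Random(7)
for flag, name in [(False, "analog"), (True, "implicit capture")]:
    scores = [transmit(rng, flag) for _ in range(20000)]
    print(f"{name:16s} mean={statistics.fmean(scores):.4f} "
          f"per-history variance={statistics.variance(scores):.5f}")
```

Both estimators agree on the mean transmission; the weighted version reaches it with a lower per-history variance, which is the entire point of the technique.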
Identifiability, stratification and minimum variance estimation of causal effects.
Tong, Xingwei; Zheng, Zhongguo; Geng, Zhi
2005-10-15
The weakest sufficient condition for the identifiability of causal effects is weakly ignorable treatment assignment, which implies that potential responses are independent of treatment assignment in each fine subpopulation stratified by a covariate. In this paper, we expand the independence that holds in fine subpopulations to the case where the independence may also hold in several coarse subpopulations, each of which consists of several fine subpopulations and may overlap with other coarse subpopulations. We first show that the causal effects are identifiable if and only if the coarse subpopulations partition the whole population. We then propose a principle, called the minimum variance principle, which says that the estimator possessing the minimum variance is preferred, for dealing with the stratification and the estimation of the causal effects. Simulation results, with detailed programming, and a practical example demonstrate that this is a feasible and reasonable way to achieve our goals. PMID:16149123
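The minimum variance idea can be illustrated with a short sketch (an illustration of the general principle, not the paper's estimator): among unbiased convex combinations of independent stratum estimates, weights proportional to the inverse variances minimize the variance of the pooled estimate.

```python
def pool(estimates, variances):
    """Inverse-variance weighted combination of independent, unbiased
    stratum estimates; returns the pooled estimate and its variance.
    The pooled variance 1/sum(1/v_i) is never larger than any single v_i."""
    inv = [1.0 / v for v in variances]
    s = sum(inv)
    est = sum(e * w for e, w in zip(estimates, inv)) / s
    return est, 1.0 / s

# hypothetical stratum-level effect estimates and their variances
est, var = pool([1.2, 0.9, 1.05], [0.04, 0.09, 0.01])
print(f"pooled estimate {est:.3f}, variance {var:.5f}")
```

The most precise stratum dominates the weighting, yet every stratum still tightens the pooled variance below the best single one.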
Compounding approach for univariate time series with nonstationary variances.
Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich
2015-12-01
A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, average over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances. PMID:26764768
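The windowing step described above can be sketched in a few lines of Python. This toy example (the window length and the set of local standard deviations are illustrative choices, not from the paper) builds a series that is Gaussian within each window, extracts the local variances, and shows that the long-horizon statistics become heavy-tailed, which is exactly what the compounding ansatz models.

```python
import random
import statistics

rng = random.Random(0)

# Toy nonstationary series: Gaussian within each window, but the standard
# deviation is drawn anew per window (the sigma values are illustrative).
series, window = [], 200
for _ in range(100):
    sigma = rng.choice([0.5, 1.0, 2.0])
    series += [rng.gauss(0.0, sigma) for _ in range(window)]

# One local variance per window: the empirical parameter distribution
# that the compounding approach would be fitted to.
local_vars = [statistics.pvariance(series[i:i + window])
              for i in range(0, len(series), window)]

# Compounding locally Gaussian windows with varying variance produces
# heavy tails in the long-horizon statistics (kurtosis above 3).
m = statistics.fmean(series)
m2 = statistics.fmean((v - m) ** 2 for v in series)
m4 = statistics.fmean((v - m) ** 4 for v in series)
print(f"kurtosis of the full series: {m4 / m2 ** 2:.2f}")
print(f"local variances span {min(local_vars):.2f} .. {max(local_vars):.2f}")
```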
Female copying increases the variance in male mating success.
Wade, M J; Pruett-Jones, S G
1990-08-01
Theoretical models of sexual selection assume that females choose males independently of the actions and choice of other individual females. Variance in male mating success in promiscuous species is thus interpreted as a result of phenotypic differences among males which females perceive and to which they respond. Here we show that, if some females copy the behavior of other females in choosing mates, the variance in male mating success and therefore the opportunity for sexual selection is greatly increased. Copying behavior is most likely in non-resource-based harem and lek mating systems but may occur in polygynous, territorial systems as well. It can be shown that copying behavior by females is an adaptive alternative to random choice whenever there is a cost to mate choice. We develop a statistical means of estimating the degree of female copying in natural populations where it occurs. PMID:2377613
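The core claim, that copying inflates the variance in male mating success, can be reproduced with a toy simulation (this is an illustration, not the authors' statistical estimator): each female either chooses a male at random or, with some probability, copies the choice of a previously mated female.

```python
import random
import statistics

def mating_variance(n_males, n_females, copy_prob, rng):
    """Variance in male mating success when each female either copies a
    previously made choice or chooses at random (toy model)."""
    counts = [0] * n_males
    chosen = []                        # choices made so far
    for _ in range(n_females):
        if chosen and rng.random() < copy_prob:
            mate = rng.choice(chosen)  # copy another female's choice
        else:
            mate = rng.randrange(n_males)
        chosen.append(mate)
        counts[mate] += 1
    return statistics.pvariance(counts)

rng = random.Random(42)
trials = 500
indep = statistics.fmean(mating_variance(20, 100, 0.0, rng) for _ in range(trials))
copy_ = statistics.fmean(mating_variance(20, 100, 0.7, rng) for _ in range(trials))
print(f"variance, independent choice: {indep:.2f}")
print(f"variance, 70% copying:        {copy_:.2f}")
```

With independent choice the counts are multinomial; copying creates a rich-get-richer dynamic, so the variance in male success (and hence the opportunity for sexual selection) rises sharply even though males are phenotypically identical.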
Response variance in functional maps: neural darwinism revisited.
Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei
2013-01-01
The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population. PMID:23874733
Compounding approach for univariate time series with nonstationary variances
NASA Astrophysics Data System (ADS)
Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich
2015-12-01
A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, average over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
Fidelity between Gaussian mixed states with quantum state quadrature variances
NASA Astrophysics Data System (ADS)
Hai-Long, Zhang; Chun, Zhou; Jian-Hong, Shi; Wan-Su, Bao
2016-04-01
In this paper, starting from the original definition of fidelity for pure states, we first give a well-defined expansion of fidelity to two Gaussian mixed states. It is related to the variances of the output and input states in quantum information processing, and it is convenient for quantifying quantum teleportation (quantum cloning) experiments, since the variances of the input (output) states are measurable. Furthermore, we also conclude that the fidelity of a pure input state is smaller than the fidelity of a mixed input state in the same quantum information processing. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002) and the Foundation of Science and Technology on Information Assurance Laboratory (Grant No. KJ-14-001).
A surface layer variance heat budget for ENSO
NASA Astrophysics Data System (ADS)
Boucharel, Julien; Timmermann, Axel; Santoso, Agus; England, Matthew H.; Jin, Fei-Fei; Balmaseda, Magdalena A.
2015-05-01
Characteristics of the El Niño-Southern Oscillation (ENSO), such as frequency, propagation, spatial extent, and amplitude, strongly depend on the climatological background state of the tropical Pacific. Multidecadal changes in the ocean mean state are hence likely to modulate ENSO properties. To better link background state variations with low-frequency amplitude changes of ENSO, we develop a diagnostic framework that determines locally the contributions of different physical feedback terms on the ocean surface temperature variance. Our analysis shows that multidecadal changes of ENSO variance originate from the delicate balance between the background-state-dependent positive thermocline feedback and the atmospheric damping of sea surface temperatures anomalies. The role of higher-order processes and atmospheric and oceanic nonlinearities is also discussed. The diagnostic tool developed here can be easily applied to other tropical ocean areas and climate phenomena.
A Variance Based Active Learning Approach for Named Entity Recognition
NASA Astrophysics Data System (ADS)
Hassanzadeh, Hamed; Keyvanpour, Mohammadreza
The cost of manually annotating corpora is one of the significant issues in many text-based tasks such as text mining, semantic annotation and, more generally, information extraction. Active learning is an approach that deals with the reduction of labeling costs. In this paper we propose an effective active learning approach, based on minimal variance, that reduces manual annotation cost by using a small number of manually labeled examples. In our approach we use a confidence measure based on the model's variance that achieves considerable accuracy in annotating entities. A conditional random field (CRF) is chosen as the underlying learning model due to its promising performance in many sequence labeling tasks. The experiments show that the proposed method needs considerably fewer manually labeled samples to produce a desirable result.
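The selection step of such an approach can be sketched generically (the paper uses CRF-based variance over label sequences; here a simple Bernoulli variance p(1 - p) stands in as the uncertainty score, and the example identifiers are hypothetical): the items whose predictions have the highest variance are the ones sent for manual annotation.

```python
def select_for_annotation(pool_probs, budget):
    """Pick the `budget` most uncertain pool items for manual labeling.

    pool_probs maps example id -> model probability of the positive class;
    the Bernoulli variance p * (1 - p) peaks at p = 0.5, so near-ambiguous
    predictions are selected first (a stand-in for CRF model variance).
    """
    by_variance = sorted(pool_probs,
                         key=lambda ex: pool_probs[ex] * (1 - pool_probs[ex]),
                         reverse=True)
    return by_variance[:budget]

# hypothetical unlabeled sentences with model confidence scores
probs = {"sent_1": 0.98, "sent_2": 0.51, "sent_3": 0.10, "sent_4": 0.65}
print(select_for_annotation(probs, 2))  # the two most ambiguous sentences
```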
No evidence for anomalously low variance circles on the sky
Moss, Adam; Scott, Douglas; Zibin, James P. E-mail: dscott@phas.ubc.ca
2011-04-01
In a recent paper, Gurzadyan and Penrose claim to have found directions on the sky centred on which are circles of anomalously low variance in the cosmic microwave background (CMB). These features are presented as evidence for a particular picture of the very early Universe. We attempted to repeat the analysis of these authors, and we can indeed confirm that such variations do exist in the temperature variance for annuli around points in the data. However, we find that this variation is entirely expected in a sky which contains the usual CMB anisotropies. In other words, properly simulated Gaussian CMB data contain just the sorts of variations claimed. Gurzadyan and Penrose have not found evidence for pre-Big Bang phenomena, but have simply re-discovered that the CMB contains structure.
Variance estimation for radiation analysis and multi-sensor fusion.
Mitchell, Dean James
2010-09-01
Variance estimates that are used in the analysis of radiation measurements must represent all of the measurement and computational uncertainties in order to obtain accurate parameter and uncertainty estimates. This report describes an approach for estimating components of the variance associated with both statistical and computational uncertainties. A multi-sensor fusion method is presented that renders parameter estimates for one-dimensional source models based on input from different types of sensors. Data obtained with multiple types of sensors improve the accuracy of the parameter estimates, and inconsistencies in measurements are also reflected in the uncertainties for the estimated parameter. Specific analysis examples are presented that incorporate a single gross neutron measurement with gamma-ray spectra that contain thousands of channels. The parameter estimation approach is tolerant of computational errors associated with detector response functions and source model approximations.
Methods for variance reduction in Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Bixler, Joel N.; Hokr, Brett H.; Winblad, Aidan; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, due to the probabilistic nature of these simulations, large numbers of photons are often required in order to generate relevant results. Here, we present methods for reduction in the variance of dose distribution in a computational volume. Dose distribution is computed via tracing of a large number of rays, and tracking the absorption and scattering of the rays within discrete voxels that comprise the volume. Variance reduction is shown here using quasi-random sampling, interaction forcing for weakly scattering media, and dose smoothing via bilateral filtering. These methods, along with the corresponding performance enhancements, are detailed here.
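Quasi-random sampling, the first technique listed, can be demonstrated in one dimension (a generic sketch, not the authors' dose-distribution code): a van der Corput low-discrepancy sequence fills the unit interval far more evenly than pseudo-random draws, so a sample-mean estimate of an integral converges with much smaller error.

```python
import math
import random

def van_der_corput(n, base=2):
    """Radical-inverse (quasi-random) sequence value in [0, 1)."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

# Estimate E[f(U)] for f(u) = exp(u) on [0, 1]; the exact value is e - 1.
f = math.exp
N = 4096
exact = math.e - 1.0

rng = random.Random(3)
mc = sum(f(rng.random()) for _ in range(N)) / N
qmc = sum(f(van_der_corput(i)) for i in range(1, N + 1)) / N

print(f"pseudo-random error: {abs(mc - exact):.2e}")
print(f"quasi-random error:  {abs(qmc - exact):.2e}")
```

The quasi-random estimate's error shrinks roughly like 1/N instead of the 1/sqrt(N) of plain Monte Carlo, which is why low-discrepancy point sets are attractive for photon launch positions and directions.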
ERIC Educational Resources Information Center
Liu, Duo; Chen, Xi; Chung, Kevin K. H.
2015-01-01
This study examined the relation between the performance in a visual search task and reading ability in 92 third-grade Hong Kong Chinese children. The visual search task, which is considered a measure of visual-spatial attention, accounted for unique variance in Chinese character reading after controlling for age, nonverbal intelligence,…
Constraining the local variance of H0 from directional analyses
NASA Astrophysics Data System (ADS)
Bengaly, C. A. P., Jr.
2016-04-01
We evaluate the local variance of the Hubble constant H0 with low-z Type Ia supernovae (SNe). Our analyses are performed using a hemispherical comparison method in order to test whether taking the bulk flow motion into account can reconcile the measurement of the Hubble constant H0 from standard candles (H0 = 73.8 ± 2.4 km s-1 Mpc-1) with that of Planck's cosmic microwave background data (H0 = 67.8 ± 0.9 km s-1 Mpc-1). We obtain that H0 ranges from 68.9 ± 0.5 km s-1 Mpc-1 to 71.2 ± 0.7 km s-1 Mpc-1 across the celestial sphere (1σ uncertainty), implying a maximal Hubble constant variance of δH0 = (2.30 ± 0.86) km s-1 Mpc-1 towards the (l,b) = (315°,27°) direction. Interestingly, this result agrees with the bulk flow direction estimates found in the literature, as well as with previous evaluations of the H0 variance due to the presence of nearby inhomogeneities. We assess the statistical significance of this result with different prescriptions of Monte Carlo simulations, obtaining moderate statistical significance, i.e., 68.7% confidence level (CL) for such variance. Furthermore, we test the hypothesis of a higher H0 value in the presence of a bulk flow velocity dipole, finding some evidence for this result which, however, cannot be claimed to be significant due to the current large uncertainty in the SNe distance modulus. We conclude that the tension between different H0 determinations can plausibly be caused by the bulk flow motion of the local Universe, even though the current incompleteness of the SNe data set, both in terms of celestial coverage and distance uncertainties, does not allow a high statistical significance for these results or a definitive conclusion about this issue.
End-state comfort and joint configuration variance during reaching
Solnik, Stanislaw; Pazin, Nemanja; Coelho, Chase J.; Rosenbaum, David A.; Scholz, John P.; Zatsiorsky, Vladimir M.; Latash, Mark L.
2013-01-01
This study joined two approaches to motor control. The first approach comes from cognitive psychology and is based on the idea that goal postures and movements are chosen to satisfy task-specific constraints. The second approach comes from the principle of motor abundance and is based on the idea that control of apparently redundant systems is associated with the creation of multi-element synergies stabilizing important performance variables. The first approach has been tested by relying on psychophysical ratings of comfort. The second approach has been tested by estimating variance along different directions in the space of elemental variables such as joint postures. The two approaches were joined here. Standing subjects performed series of movements in which they brought a hand-held pointer to each of four targets oriented within a frontal plane, close to or far from the body. The subjects were asked to rate the comfort of the final postures, and the variance of their joint configurations during the steady state following pointing was quantified with respect to pointer endpoint position and pointer orientation. The subjects showed consistent patterns of comfort ratings among the targets, and all movements were characterized by multi-joint synergies stabilizing both pointer endpoint position and orientation. Contrary to what was expected, less comfortable postures had higher joint configuration variance than did more comfortable postures without major changes in the synergy indices. Multi-joint synergies stabilized the pointer position and orientation similarly across a range of comfortable/uncomfortable postures. The results are interpreted in terms conducive to the two theoretical frameworks underlying this work, one focusing on comfort ratings reflecting mean postures adopted for different targets and the other focusing on indices of joint configuration variance. PMID:23288326
The Third-Difference Approach to Modified Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1995-01-01
This study gives strategies for estimating the modified Allan variance (mvar) and formulas for computing the equivalent degrees of freedom (edf) of the estimators. A third-difference formulation of mvar leads to a tractable formula for edf in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. First-degree rational-function approximations for edf are derived.
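For reference, the quantity being estimated can be computed directly from its standard definition (the paper's contribution is the equivalent third-difference reformulation used to derive degrees of freedom; the sketch below is the plain definition, with illustrative drift and sampling values).

```python
def mod_avar(x, m, tau0):
    """Modified Allan variance at tau = m*tau0, computed directly from
    its standard definition on phase samples x."""
    n = len(x)
    if n < 3 * m + 1:
        raise ValueError("need at least 3*m + 1 phase samples")
    tau = m * tau0
    total = 0.0
    for j in range(n - 3 * m + 1):
        # phase-averaged second difference, the quantity mvar squares
        inner = sum(x[i + 2 * m] - 2 * x[i + m] + x[i]
                    for i in range(j, j + m))
        total += (inner / m) ** 2
    return total / (2.0 * tau * tau * (n - 3 * m + 1))

# Sanity check: phase from a pure linear frequency drift D satisfies
# Mod sigma_y^2(tau) = D^2 * tau^2 / 2 under this definition.
D, tau0 = 1e-10, 1.0
x = [0.5 * D * (i * tau0) ** 2 for i in range(64)]
print(mod_avar(x, 4, tau0))
```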
Analysis of Variance in the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
Deloach, Richard
2010-01-01
This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.
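The one-way fixed-effects case reduces to a short computation: partition the total variability into between-group and within-group sums of squares and form their mean-square ratio. The sketch below uses hypothetical wind-tunnel measurements at three configurations (the data are invented for illustration).

```python
import statistics

def one_way_anova(groups):
    """One-way fixed-effects ANOVA: returns the F statistic and the
    between- and within-group degrees of freedom."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = statistics.fmean(x for g in groups for x in g)
    # between-group sum of squares: group means vs the grand mean
    ss_between = sum(len(g) * (statistics.fmean(g) - grand) ** 2
                     for g in groups)
    # within-group sum of squares: observations vs their group mean
    ss_within = sum(sum((x - statistics.fmean(g)) ** 2 for x in g)
                    for g in groups)
    df_b, df_w = k - 1, n - k
    F = (ss_between / df_b) / (ss_within / df_w)
    return F, df_b, df_w

# hypothetical drag measurements at three flap settings
groups = [[10.1, 10.4, 9.9, 10.2],
          [11.0, 11.3, 10.8, 11.1],
          [9.5, 9.7, 9.4, 9.6]]
F, df_b, df_w = one_way_anova(groups)
print(f"F({df_b}, {df_w}) = {F:.2f}")
```

A large F relative to the F(df_b, df_w) reference distribution indicates that the factor explains more variance than chance replication error would.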
NASA Technical Reports Server (NTRS)
1999-01-01
Mainstream Engineering Corporation was awarded Phase I and Phase II contracts from Goddard Space Flight Center's Small Business Innovation Research (SBIR) program in early 1990. With support from the SBIR program, Mainstream Engineering Corporation has developed a unique low cost additive, QwikBoost (TM), that increases the performance of air conditioners, heat pumps, refrigerators, and freezers. Because of the energy and environmental benefits of QwikBoost, Mainstream received the Tibbetts Award at a White House Ceremony on October 16, 1997. QwikBoost was introduced at the 1998 International Air Conditioning, Heating, and Refrigeration Exposition. QwikBoost is packaged in a handy 3-ounce can (pressurized with R-134a) and will be available for automotive air conditioning systems in summer 1998.
Using Localization Constraints for the Unique Reconstruction of Magnetizations
NASA Astrophysics Data System (ADS)
Gerhards, C.
2015-12-01
In general, the reconstruction of (vertically integrated) magnetizations on the Earth's surface from the knowledge of the corresponding magnetic field in the exterior of the Earth is highly non-unique. However, we show that if one assumes that the magnetization vanishes in a certain region (i.e., if it is locally supported), this can improve the non-uniqueness issues of reconstructing the magnetization. In particular, induced magnetizations for which the inducing field (i.e., the Earth's main magnetic field) is known can be recovered uniquely. In the case of general (vertically integrated) magnetizations, one does not get uniqueness but one can at least recover more contributions than without the additional assumption of local support. We illustrate the results by some examples.
Cosmic variance of the galaxy cluster weak lensing signal
NASA Astrophysics Data System (ADS)
Gruen, D.; Seitz, S.; Becker, M. R.; Friedrich, O.; Mana, A.
2015-06-01
Intrinsic variations of the projected density profiles of clusters of galaxies at fixed mass are a source of uncertainty for cluster weak lensing. We present a semi-analytical model to account for this effect, based on a combination of variations in halo concentration, ellipticity and orientation, and the presence of correlated haloes. We calibrate the parameters of our model at the 10 per cent level to match the empirical cosmic variance of cluster profiles at M_200m ≈ 10^14…10^15 h^-1 M_⊙, z = 0.25…0.5 in a cosmological simulation. We show that weak lensing measurements of clusters significantly underestimate mass uncertainties if intrinsic profile variations are ignored, and that our model can be used to provide correct mass likelihoods. Effects on the achievable accuracy of weak lensing cluster mass measurements are particularly strong for the most massive clusters and deep observations (with ≈20 per cent uncertainty from cosmic variance alone at M_200m ≈ 10^15 h^-1 M_⊙ and z = 0.25), but significant also under typical ground-based conditions. We show that neglecting intrinsic profile variations leads to biases in the mass-observable relation constrained with weak lensing, both for intrinsic scatter and overall scale (the latter at the 15 per cent level). These biases are in excess of the statistical errors of upcoming surveys and can be avoided if the cosmic variance of cluster profiles is accounted for.
Asymptotically robust variance estimation for person-time incidence rates.
Scosyrev, Emil
2016-05-01
Person-time incidence rates are frequently used in medical research. However, standard estimation theory for this measure of event occurrence is based on the assumption of independent and identically distributed (iid) exponential event times, which implies that the hazard function remains constant over time. Under this assumption and assuming independent censoring, the observed person-time incidence rate is the maximum-likelihood estimator of the constant hazard, and the asymptotic variance of the log rate can be estimated consistently by the inverse of the number of events. However, in many practical applications, the assumption of constant hazard is not very plausible. In the present paper, an average rate parameter is defined as the ratio of expected event count to the expected total time at risk. This rate parameter is equal to the hazard function under constant hazard. For inference about the average rate parameter, an asymptotically robust variance estimator of the log rate is proposed. Given some very general conditions, the robust variance estimator is consistent under arbitrary iid event times, and is also consistent or asymptotically conservative when event times are independent but nonidentically distributed. In contrast, the standard maximum-likelihood estimator may become anticonservative under nonconstant hazard, producing confidence intervals with less-than-nominal asymptotic coverage. These results are derived analytically and illustrated with simulations. The two estimators are also compared in five datasets from oncology studies. PMID:26439107
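The contrast between the two variance estimates can be sketched as follows (a sandwich-type illustration of the idea based on per-subject residuals; the exact estimator and its regularity conditions are in the paper, and the subject data below are hypothetical).

```python
import math

def rate_with_variances(events, times):
    """Person-time rate with two variances of the log rate: the standard
    Poisson/ML form 1/D (valid under constant hazard) and a robust
    sandwich-type form built from per-subject residuals (a sketch of the
    idea that remains sensible under nonconstant hazard)."""
    D, T = sum(events), sum(times)
    rate = D / T
    var_ml = 1.0 / D                      # assumes constant hazard
    resid2 = sum((d - rate * t) ** 2 for d, t in zip(events, times))
    var_robust = resid2 / D ** 2          # residual-based alternative
    return rate, var_ml, var_robust

# hypothetical per-subject event counts and person-years at risk
events = [0, 1, 0, 2, 1, 0, 0, 1]
times = [1.0, 0.5, 2.0, 0.7, 1.2, 0.9, 1.5, 0.3]
rate, v_ml, v_rob = rate_with_variances(events, times)
lo, hi = (math.exp(math.log(rate) + s * 1.96 * math.sqrt(v_rob))
          for s in (-1, 1))
print(f"rate {rate:.2f}/person-year, robust 95% CI ({lo:.2f}, {hi:.2f})")
```

When the residual-based variance exceeds 1/D, as it does here, ML-based intervals would be too narrow, which is exactly the anticonservative behavior the abstract warns about.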
Relationship between Allan variances and Kalman Filter parameters
NASA Technical Reports Server (NTRS)
Vandierendonck, A. J.; Mcgraw, J. B.; Brown, R. G.
1984-01-01
A relationship was constructed between the Allan variance parameters (h_2, h_1, h_0, h_{-1}, and h_{-2}) and a Kalman filter model that would be used to estimate and predict clock phase, frequency, and frequency drift. To start, the meaning of these Allan variance parameters, and how they are arrived at for a given frequency source, is reviewed. Although a subset of these parameters is arrived at by measuring phase as a function of time rather than as a spectral density, they all represent phase noise spectral density coefficients, though not necessarily those of a rational spectral density. The phase noise spectral density is then transformed into a time domain covariance model which can be used to derive the Kalman filter model parameters. Simulation results of that covariance model are presented and compared to clock uncertainties predicted by Allan variance parameters. A two-state Kalman filter model is then derived and the significance of each state is explained.
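As a minimal illustration of the measurement side of this relationship (the Kalman derivation itself is not reproduced), the sketch below simulates a clock dominated by white frequency noise, accumulates it into phase, and applies the standard overlapping Allan variance estimator; the noise level and sampling interval are illustrative. For white FM noise the Allan variance falls off as 1/tau, which is how the power-law coefficients are read off in practice.

```python
import random

def oavar(x, m, tau0):
    """Overlapping Allan variance at tau = m*tau0 from phase samples x."""
    tau = m * tau0
    diffs = [x[i + 2 * m] - 2 * x[i + m] + x[i]
             for i in range(len(x) - 2 * m)]
    return sum(d * d for d in diffs) / (2.0 * tau * tau * len(diffs))

# simulate white frequency noise: fractional-frequency samples
# y_i ~ N(0, sigma), accumulated into phase x
rng = random.Random(5)
sigma, tau0 = 1e-11, 1.0
x, phase = [0.0], 0.0
for _ in range(20000):
    phase += rng.gauss(0.0, sigma) * tau0
    x.append(phase)

for m in (1, 4, 16):
    print(f"tau = {m * tau0:5.1f} s  "
          f"sigma_y(tau) = {oavar(x, m, tau0) ** 0.5:.3e}")
```

The printed deviations halve each time tau quadruples, the sigma_y(tau) ∝ tau^(-1/2) signature associated with the h_0 (white FM) coefficient.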
Estimating Predictive Variance for Statistical Gas Distribution Modelling
Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo
2009-05-23
Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but rather a significant step to advance the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows one to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta-parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
Quantifying variances in comparative RNA secondary structure prediction
2013-01-01
Background With the advancement of next-generation sequencing and transcriptomics technologies, regulatory effects involving RNA, in particular RNA structural changes, are being detected. These results often rely on RNA secondary structure predictions. However, current approaches to RNA secondary structure modelling produce predictions with a high variance in predictive accuracy, and we have little quantifiable knowledge about the reasons for these variances. Results In this paper we explore a number of factors which can contribute to poor RNA secondary structure prediction quality. We establish a quantified relationship between alignment quality and loss of accuracy. Furthermore, we define two new measures to quantify uncertainty in alignment-based structure predictions. One of the measures improves on the "reliability score" reported by PPfold, and considers alignment uncertainty as well as base-pair probabilities. The other measure considers the information entropy for SCFGs over a space of input alignments. Conclusions Our measures improve on the PPfold reliability score. We can successfully characterize many of the underlying reasons for, and variances in, poor prediction. However, there is still variability unaccounted for, which we therefore suggest comes from the RNA secondary structure predictive model itself. PMID:23634662
Reduced Variance for Material Sources in Implicit Monte Carlo
Urbatsch, Todd J.
2012-06-25
Implicit Monte Carlo (IMC), a time-implicit method due to Fleck and Cummings, is used for simulating supernovae and inertial confinement fusion (ICF) systems where x-rays tightly and nonlinearly interact with hot material. The IMC algorithm represents absorption and emission within a timestep as an effective scatter. Similarly, the IMC time-implicitness splits off a portion of a material source directly into the radiation field. We have found that some of our variance reduction and particle management schemes will allow large variances in the presence of small, but important, material sources, as in the case of ICF hot electron preheat sources. We propose a modification of our implementation of the IMC method in the Jayenne IMC Project. Instead of battling the sampling issues associated with a small source, we bypass the IMC implicitness altogether and simply deterministically update the material state with the material source if the temperature of the spatial cell is below a user-specified cutoff. We describe the modified method and present results on a test problem that show the elimination of variance for small sources.
VAPOR: variance-aware per-pixel optimal resource allocation.
Eisenberg, Yiftach; Zhai, Fan; Pappas, Thrasyvoulos N; Berry, Randall; Katsaggelos, Aggelos K
2006-02-01
Characterizing the video quality seen by an end-user is a critical component of any video transmission system. In packet-based communication systems, such as wireless channels or the Internet, packet delivery is not guaranteed. Therefore, from the point-of-view of the transmitter, the distortion at the receiver is a random variable. Traditional approaches have primarily focused on minimizing the expected value of the end-to-end distortion. This paper explores the benefits of accounting for not only the mean, but also the variance of the end-to-end distortion when allocating limited source and channel resources. By accounting for the variance of the distortion, the proposed approach increases the reliability of the system by making it more likely that what the end-user sees, closely resembles the mean end-to-end distortion calculated at the transmitter. Experimental results demonstrate that variance-aware resource allocation can help limit error propagation and is more robust to channel-mismatch than approaches whose goal is to strictly minimize the expected distortion. PMID:16479799
Cavalié, Olivier; Vernotte, François
2016-04-01
The Allan variance was introduced 50 years ago for analyzing the stability of frequency standards. In addition to its metrological interest, it may be also considered as an estimator of the large trends of the power spectral density (PSD) of frequency deviation. For instance, the Allan variance is able to discriminate different types of noise characterized by different power laws in the PSD. The Allan variance was also used in other fields than time and frequency metrology: for more than 20 years, it has been used in accelerometry, geophysics, geodesy, astrophysics, and even finances. However, it seems that up to now, it has been exclusively applied for time series analysis. We propose here to use the Allan variance on spatial data. Interferometric synthetic aperture radar (InSAR) is used in geophysics to image ground displacements in space [over the synthetic aperture radar (SAR) image spatial coverage] and in time thanks to the regular SAR image acquisitions by dedicated satellites. The main limitation of the technique is the atmospheric disturbances that affect the radar signal while traveling from the sensor to the ground and back. In this paper, we propose to use the Allan variance for analyzing spatial data from InSAR measurements. The Allan variance was computed in XY mode as well as in radial mode for detecting different types of behavior for different space-scales, in the same way as the different types of noise versus the integration time in the classical time and frequency application. We found that radial Allan variance is the more appropriate way to have an estimator insensitive to the spatial axis and we applied it on SAR data acquired over eastern Turkey for the period 2003-2011. Spatial Allan variance allowed us to well characterize noise features, classically found in InSAR such as phase decorrelation producing white noise or atmospheric delays, behaving like a random walk signal. We finally applied the spatial Allan variance to an InSAR time
The bacterial magnetosome: a unique prokaryotic organelle.
Lower, Brian H; Bazylinski, Dennis A
2013-01-01
The bacterial magnetosome is a unique prokaryotic organelle comprising magnetic mineral crystals surrounded by a phospholipid bilayer. These inclusions are biomineralized by the magnetotactic bacteria, which are ubiquitous, aquatic, motile microorganisms. Magnetosomes cause cells of magnetotactic bacteria to passively align and swim along the Earth's magnetic field lines, as miniature motile compass needles. These specialized compartments consist of a phospholipid bilayer membrane surrounding magnetic crystals of magnetite (Fe3O4) or greigite (Fe3S4). The morphology of these membrane-bound crystals varies by species, with a nominal magnetic domain size between 35 and 120 nm. Almost all magnetotactic bacteria arrange their magnetosomes in a chain within the cell, thereby maximizing the magnetic dipole moment of the cell. It is presumed that magnetotactic bacteria use magnetotaxis in conjunction with chemotaxis to locate and maintain an optimum position for growth and survival based on chemistry, redox and physiology in aquatic habitats with vertical chemical concentration and redox gradients. The biosynthesis of magnetosomes is a complex process that involves several distinct steps including cytoplasmic membrane modifications, iron uptake and transport, initiation of crystallization, crystal maturation and magnetosome chain formation. While many mechanistic details remain unresolved, magnetotactic bacteria appear to contain the genetic determinants for magnetosome biomineralization within their genomes in clusters of genes that make up what is referred to as the magnetosome gene island in some species. In addition, magnetosomes contain a unique set of proteins, not present in other cellular fractions, which control the biomineralization process. Through the development of genetic systems, proteomic and genomic work, and the use of molecular and biochemical tools, the functions of a number of magnetosome membrane proteins have been demonstrated and the molecular
Automated Variance Reduction Applied to Nuclear Well-Logging Problems
Wagner, John C; Peplow, Douglas E.; Evans, Thomas M
2009-01-01
The Monte Carlo method enables detailed, explicit geometric, energy and angular representations, and hence is considered to be the most accurate method available for solving complex radiation transport problems. Because of its associated accuracy, the Monte Carlo method is widely used in the petroleum exploration industry to design, benchmark, and simulate nuclear well-logging tools. Nuclear well-logging tools, which contain neutron and/or gamma sources and two or more detectors, are placed in boreholes that contain water (and possibly other fluids) and that are typically surrounded by a formation (e.g., limestone, sandstone, calcites, or a combination). The response of the detectors to radiation returning from the surrounding formation is used to infer information about the material porosity, density, composition, and associated characteristics. Accurate computer simulation is a key aspect of this exploratory technique. However, because this technique involves calculating highly precise responses (at two or more detectors) based on radiation that has interacted with the surrounding formation, the transport simulations are computationally intensive, requiring significant use of variance reduction techniques, parallel computing, or both. Because of the challenging nature of these problems, nuclear well-logging problems have frequently been used to evaluate the effectiveness of variance reduction techniques (e.g., Refs. 1-4). The primary focus of these works has been on improving the computational efficiency associated with calculating the response at the most challenging detector location, which is typically the detector furthest from the source. Although the objective of nuclear well-logging simulations is to calculate the response at multiple detector locations, until recently none of the numerous variance reduction methods/techniques has been well-suited to simultaneous optimization of multiple detector (tally) regions. Therefore, a separate calculation is
NASA Astrophysics Data System (ADS)
Yang, Kai; Huang, Shih-Ying C.; Packard, Nathan J.; Boone, John M.
2009-02-01
Cone-beam systems designed for breast cancer detection bear a unique radiation dose limitation and are vulnerable to the additive noise from the detector. Additive noise is the signal fluctuation from detector elements and is independent of the incident exposure level. In this study, two different approaches (single-pixel based and region-of-interest based) to measure the additive noise were explored using continuously acquired air images at different exposure levels, with both raw images and flat-field corrected images. The influence from two major factors, inter-pixel variance and image lag, was studied. The pixel variance measured from dark images was used as the gold standard (for the entire detector 15.12 ± 1.3 ADU²) for comparison. Image noise propagation through reconstruction procedures was also investigated, and a mathematically derived quadratic relationship between the image noise and the inverse of the radiation dose was confirmed with experimental data. The additive noise level was shown to affect the CT image noise as the second-order coefficient and thus determines the lower limit of the scan radiation dose, above which the scanner operates in the quantum-limited region and uses x-ray photons most efficiently.
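The quadratic relationship between image noise and inverse dose described above can be sketched numerically as a fit of the variance model var = q/D + a/D² (quantum plus additive terms); coefficient names and values here are illustrative, not taken from the study:

```python
import numpy as np

# Hypothetical noise-variance model: var = q/D + a/D^2, where D is the
# relative dose, q scales the quantum term and a the additive term.
D = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
q_true, a_true = 1.0, 0.2
var = q_true / D + a_true / D ** 2

# Fitting a quadratic in u = 1/D recovers both coefficients; the
# second-order coefficient is the additive-noise contribution.
coeffs = np.polyfit(1.0 / D, var, 2)   # [a, q, constant]
```

Because the additive term enters at second order in 1/D, it dominates at low dose and sets the floor below which the scanner leaves the quantum-limited regime.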
FRESIP project observations of cataclysmic variables: A unique opportunity
NASA Technical Reports Server (NTRS)
Howell, Steve B.
1994-01-01
FRESIP Project observations of cataclysmic variables would provide unique data sets. In the study of known cataclysmic variables they would provide extended, well-sampled temporal photometric information; in addition, they would provide a large-area deep survey, obtaining a complete magnitude-limited sample of the galaxy in the volume cone defined by the FRESIP field of view.
Unraveling Additive from Nonadditive Effects Using Genomic Relationship Matrices
Muñoz, Patricio R.; Resende, Marcio F. R.; Gezan, Salvador A.; Resende, Marcos Deon Vilela; de los Campos, Gustavo; Kirst, Matias; Huber, Dudley; Peter, Gary F.
2014-01-01
The application of quantitative genetics in plant and animal breeding has largely focused on additive models, which may also capture dominance and epistatic effects. Partitioning genetic variance into its additive and nonadditive components using pedigree-based best linear unbiased prediction (P-BLUP) is difficult with most commonly available family structures. However, the availability of dense panels of molecular markers makes possible the use of additive- and dominance-realized genomic relationships for the estimation of variance components and the prediction of genetic values (G-BLUP). We evaluated height data from a multifamily population of the tree species Pinus taeda with a systematic series of models accounting for additive, dominance, and first-order epistatic interactions (additive by additive, dominance by dominance, and additive by dominance), using either pedigree- or marker-based information. We show that, compared with the pedigree, use of realized genomic relationships in marker-based models yields a substantially more precise separation of additive and nonadditive components of genetic variance. We conclude that the marker-based relationship matrices in a model including additive and nonadditive effects performed better, improving breeding value prediction. Moreover, our results suggest that, for tree height in this population, the additive and nonadditive components of genetic variance are similar in magnitude. This novel result improves our current understanding of the genetic control and architecture of a quantitative trait and should be considered when developing breeding strategies. PMID:25324160
Monte Carlo calculation of specific absorbed fractions: variance reduction techniques
NASA Astrophysics Data System (ADS)
Díaz-Londoño, G.; García-Pareja, S.; Salvat, F.; Lallena, A. M.
2015-04-01
The purpose of the present work is to calculate specific absorbed fractions using variance reduction techniques and assess the effectiveness of these techniques in improving the efficiency (i.e. reducing the statistical uncertainties) of simulation results in cases where the distance between the source and the target organs is large and/or the target organ is small. The variance reduction techniques of interaction forcing and an ant colony algorithm, which drives the application of splitting and Russian roulette, were applied in Monte Carlo calculations performed with the code penelope for photons with energies from 30 keV to 2 MeV. In the simulations we used a mathematical phantom derived from the well-known MIRD-type adult phantom. The thyroid gland was assumed to be the source organ and urinary bladder, testicles, uterus and ovaries were considered as target organs. Simulations were performed, for each target organ and for photons with different energies, using these variance reduction techniques, all run on the same processor and during a CPU time of 1.5 · 10⁵ s. For energies above 100 keV both interaction forcing and the ant colony method allowed reaching relative uncertainties of the average absorbed dose in the target organs below 4% in all studied cases. When these two techniques were used together, the uncertainty was further reduced, by a factor of 0.5 or less. For photons with energies below 100 keV, an adapted initialization of the ant colony algorithm was required. By using interaction forcing and the ant colony algorithm, realistic values of the specific absorbed fractions can be obtained with relative uncertainties small enough to permit discriminating among simulations performed with different Monte Carlo codes and phantoms. The methodology described in the present work can be employed to calculate specific absorbed fractions for arbitrary arrangements, i.e. energy spectrum of primary radiation, phantom model and source and target organs.
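The splitting and Russian roulette moves that the ant colony algorithm drives can be illustrated in miniature. This is a generic weight-based sketch of the two unbiased moves, with illustrative function and parameter names (not penelope's API):

```python
import random

def russian_roulette(weight, threshold=0.1, survival=0.5):
    """Probabilistically kill low-weight particles; survivors are
    boosted by 1/survival so the expected weight is unchanged."""
    if weight >= threshold:
        return weight              # important particle continues as-is
    if random.random() < survival:
        return weight / survival   # survivor carries the extra weight
    return 0.0                     # particle terminated

def split(weight, n=4):
    """Split an important particle into n copies of reduced weight;
    the total weight is exactly conserved."""
    return [weight / n] * n

random.seed(0)
# Play roulette on 10000 particles of weight 0.05; on average the total
# weight is conserved while half the histories are terminated early.
survivors = [w for w in (russian_roulette(0.05) for _ in range(10000)) if w > 0]
```

Both moves leave the estimator unbiased; the gain is that computing time is concentrated on histories likely to reach the distant target organ.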
Regression between earthquake magnitudes having errors with known variances
NASA Astrophysics Data System (ADS)
Pujol, Jose
2016-06-01
Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = a x + b. This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for the x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a discussed in the literature but not proved, or proved for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65 % of them. For the remaining 35 %, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
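For the homoscedastic case described above, the best-fit line has a closed form. A minimal sketch of Deming (orthogonal) regression on synthetic magnitudes, assuming the error-variance ratio delta is known (data and noise levels are illustrative):

```python
import numpy as np

def deming_fit(X, Y, delta=1.0):
    """Closed-form fit of y = a*x + b when both X and Y carry error;
    delta is the known ratio of the Y to X error variances
    (delta = 1 gives orthogonal regression)."""
    x0, y0 = X.mean(), Y.mean()
    sxx = np.mean((X - x0) ** 2)
    syy = np.mean((Y - y0) ** 2)
    sxy = np.mean((X - x0) * (Y - y0))
    a = (syy - delta * sxx
         + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    b = y0 - a * x0
    return a, b

rng = np.random.default_rng(1)
x_true = np.linspace(4.0, 7.0, 200)              # "true" magnitudes
X = x_true + rng.normal(0, 0.1, 200)             # both observed magnitudes
Y = 1.1 * x_true + 0.3 + rng.normal(0, 0.1, 200) # are error-affected
a, b = deming_fit(X, Y)
```

Ordinary least squares of Y on X would bias the slope toward zero here; accounting for the error in X removes that attenuation.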
MC Estimator Variance Reduction with Antithetic and Common Random Fields
NASA Astrophysics Data System (ADS)
Guthke, P.; Bardossy, A.
2011-12-01
Monte Carlo methods are widely used to estimate the outcome of complex physical models. For physical models with spatial parameter uncertainty, it is common to apply spatial random functions to the uncertain variables, which can then be used to interpolate between known values or to simulate a number of equally likely realizations. The price to be paid for such a stochastic approach is many simulations of the physical model instead of just one run with one 'best' input parameter set. The number of simulations is often limited by computational constraints, so a modeller must compromise between the benefit of increased accuracy of the results and the cost of greatly increased computational time. Our objective is to reduce the estimator variance of dependent variables in Monte Carlo frameworks. Therefore, we adapt two variance reduction techniques (antithetic variates and common random numbers) to a sequential random field simulation scheme that uses copulas as spatial dependence functions. The proposed methodology leads to pairs of spatial random fields with special structural properties that are advantageous in MC frameworks. Antithetic random fields (ARF) exhibit a reversed structure on the large scale, while the dependence on the local scale is preserved. Common random fields (CRF) show the same large-scale structures, but different spatial dependence on the local scale. The performance of the proposed methods is examined with two typical applications of stochastic hydrogeology. It is shown that ARF massively reduce the number of simulation runs required for convergence in Monte Carlo frameworks while keeping the same accuracy in terms of estimator variance. Furthermore, in multi-model frameworks like in sensitivity analysis of the spatial structure, where more than one spatial dependence model is used, the influence of different dependence structures becomes obvious
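The antithetic-variates idea adapted above can be seen in its simplest one-dimensional form. A sketch comparing a crude Monte Carlo estimate of the integral of e^u over [0,1] with an antithetic-pair estimate (the integrand is illustrative):

```python
import numpy as np

def mc_plain(f, n, rng):
    """Crude Monte Carlo: average f over n independent uniforms."""
    return f(rng.random(n)).mean()

def mc_antithetic(f, n, rng):
    """Pair each uniform u with its mirror 1-u; for monotone f the pair
    is negatively correlated, which shrinks the estimator variance."""
    u = rng.random(n // 2)
    return 0.5 * (f(u) + f(1.0 - u)).mean()

f = np.exp                      # true integral over [0,1] is e - 1
rng = np.random.default_rng(0)
plain = [mc_plain(f, 1000, rng) for _ in range(500)]
anti = [mc_antithetic(f, 1000, rng) for _ in range(500)]
```

Both estimators use the same budget of 1000 function evaluations per run; only the pairing differs, which is why the technique transfers naturally to paired (reversed-structure) random fields.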
Reducing sample variance: halo biasing, non-linearity and stochasticity
NASA Astrophysics Data System (ADS)
Gil-Marín, Héctor; Wagner, Christian; Verde, Licia; Jimenez, Raul; Heavens, Alan F.
2010-09-01
Comparing clustering of differently biased tracers of the dark matter distribution offers the opportunity to reduce the sample or cosmic variance error in the measurement of certain cosmological parameters. We develop a formalism that includes bias non-linearities and stochasticity. Our formalism is general enough that it can be used to optimize survey design and tracer selection and optimally split (or combine) tracers to minimize the error on the cosmologically interesting quantities. Our approach generalizes the one presented by McDonald & Seljak of circumventing sample variance in the measurement of f ≡ d lnD/d lna. We analyse how the bias, the noise, the non-linearity and stochasticity affect the measurements of Df and explore in which signal-to-noise regime it is significantly advantageous to split a galaxy sample in two differently biased tracers. We use N-body simulations to find realistic values for the parameters describing the bias properties of dark matter haloes of different masses and their number density. We find that, even if dark matter haloes could be used as tracers and selected in an idealized way, for realistic haloes, the sample variance limit can be reduced only by up to a factor σ_2tr/σ_1tr ≈ 0.6. This would still correspond to the gain from a three times larger survey volume if the two tracers were not to be split. Before any practical application one should bear in mind that these findings apply to dark matter haloes as tracers, while realistic surveys would select galaxies: the galaxy-host halo relation is likely to introduce extra stochasticity, which may reduce the gain further.
Fringe biasing: A variance reduction technique for optically thick meshes
Smedley-Stevenson, R. P.
2013-07-01
Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)
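Fringe biasing is a stratified sampling scheme, and the underlying variance-reduction mechanism can be sketched with a generic one-dimensional stratified estimator; the strata and integrand below are illustrative, not the radiation-transport setting:

```python
import numpy as np

def stratified_mean(f, edges, counts, rng):
    """Stratified estimate of the mean of f over [0,1]: each stratum
    [edges[i], edges[i+1]) receives counts[i] samples, and stratum means
    are combined with weights equal to the stratum widths."""
    total = 0.0
    for lo, hi, n in zip(edges[:-1], edges[1:], counts):
        u = rng.uniform(lo, hi, n)
        total += (hi - lo) * f(u).mean()
    return total

f = lambda x: x ** 2                    # exact mean over [0,1] is 1/3
rng = np.random.default_rng(0)
edges = np.linspace(0.0, 1.0, 5)        # four equal strata
est = [stratified_mean(f, edges, [250] * 4, rng) for _ in range(200)]
crude = [f(rng.random(1000)).mean() for _ in range(200)]
```

In the fringe-biasing analogue, the cell interior and the fringe are the strata, and the particle allocation per stratum (here uniform) is the tunable quantity the paper's analysis is meant to guide.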
Multi-observable Uncertainty Relations in Product Form of Variances
Qin, Hui-Hui; Fei, Shao-Ming; Li-Jost, Xianqing
2016-01-01
We investigate the product-form uncertainty relations of variances for n (n ≥ 3) quantum observables. In particular, tight uncertainty relations satisfied by three observables have been derived, which are shown to be better than the ones derived from the strengthened Heisenberg and the generalized Schrödinger uncertainty relations, and than some existing uncertainty relations for three spin-half operators. An uncertainty relation for an arbitrary number of observables is also derived. As an example, the uncertainty relation satisfied by the eight Gell-Mann matrices is presented. PMID:27498851
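The pairwise building block behind such product-form relations is the Robertson bound Var(A)·Var(B) ≥ |⟨[A,B]⟩|²/4, which can be checked numerically for two Pauli observables on a random qubit state. This is a sketch of the standard two-observable bound, not the tighter n-observable relations derived in the paper:

```python
import numpy as np

# Pauli matrices (spin-half observables)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def variance(op, psi):
    """Var(A) = <A^2> - <A>^2 for a normalized state psi."""
    mean = np.vdot(psi, op @ psi).real
    mean_sq = np.vdot(psi, op @ op @ psi).real
    return mean_sq - mean ** 2

rng = np.random.default_rng(0)
v = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = v / np.linalg.norm(v)             # random pure qubit state

lhs = variance(sx, psi) * variance(sy, psi)
comm = sx @ sy - sy @ sx                # [sx, sy] = 2i * sz
rhs = abs(np.vdot(psi, comm @ psi)) ** 2 / 4   # Robertson lower bound
```

The product-form relations for n ≥ 3 observables studied in the paper strengthen this two-observable bound, which can be loose (the right-hand side vanishes for eigenstates of sz even though both variances stay positive).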
Self-Tuning Continuous-Time Generalized Minimum Variance Control
NASA Astrophysics Data System (ADS)
Hoshino, Ryota; Mori, Yasuchika
The generalized minimum variance control (GMVC) is one of the design methods of self-tuning control (STC). In general, STC is applied as a discrete-time (DT) design technique. However, depending on the choice of the sampling period, the DT design can introduce unstable zeros and time delays, and can fail to capture the behavior of the controlled object clearly. For this reason, we propose a continuous-time (CT) design technique for GMVC, which we call CGMVC. In this paper, we confirm some advantages of CGMVC and provide a numerical example.
Simulation Study Using a New Type of Sample Variance
NASA Technical Reports Server (NTRS)
Howe, D. A.; Lainson, K. J.
1996-01-01
We evaluate with simulated data a new type of sample variance for the characterization of frequency stability. The new statistic (referred to as TOTALVAR and its square root TOTALDEV) is a better predictor of long-term frequency variations than the present sample Allan deviation. The statistical model uses the assumption that a time series of phase or frequency differences is wrapped (periodic) with overall frequency difference removed. We find that the variability at long averaging times is reduced considerably for the five models of power-law noise commonly encountered with frequency standards and oscillators.
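For reference, the present (non-overlapping) sample Allan variance that TOTALVAR is compared against can be computed in a few lines. A sketch on simulated white frequency noise, for which the Allan variance falls as 1/m with the averaging factor m:

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of fractional-frequency data y at
    averaging factor m: half the mean squared difference of adjacent
    m-sample averages."""
    n = len(y) // m
    avgs = y[: n * m].reshape(n, m).mean(axis=1)
    d = np.diff(avgs)
    return 0.5 * np.mean(d ** 2)

rng = np.random.default_rng(0)
white = rng.normal(0, 1.0, 100_000)     # white frequency-modulation noise
avars = [allan_variance(white, m) for m in (1, 10, 100)]
```

The long-averaging-time variability that TOTALVAR reduces shows up here in the shrinking number of averaged differences at large m (only ~1000 at m = 100), which is exactly where the wrapped-data statistic gains its advantage.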
Analysis of variance tables based on experimental structure.
Brien, C J
1983-03-01
A stepwise procedure for obtaining the experimental structure for a particular experiment is presented together with rules for deriving the analysis-of-variance table from that structure. The procedure involves the division of the factors into groups and is essentially a generalization of the method of Nelder (1965, Proceedings of the Royal Society, Series A 283, 147-162; 1965, Proceedings of the Royal Society, Series A 283, 163-178), to what are termed 'multi-tiered' experiments. The proposed method is illustrated for a wine-tasting experiment. PMID:6871362
Analysis of variance of thematic mapping experiment data.
Rosenfield, G.H.
1981-01-01
As an example of the methodology, data from an experiment using three scales of land-use and land-cover mapping have been analyzed. The binomial proportions of correct interpretations have been analyzed untransformed and transformed by both the arcsine and the logit transformations. A weighted analysis of variance adjustment has been used. There is evidence of a significant difference among the three scales of mapping (1:24 000, 1:100 000 and 1:250 000) using the transformed data. Multiple range tests showed that all three scales are different for the arcsine transformed data. - from Author
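The arcsine transformation used above is variance-stabilizing for binomial proportions: after t = arcsin(√p̂) the variance is approximately 1/(4n) regardless of p, which is what makes a common-variance analysis of the transformed interpretation proportions defensible. A quick numerical check (sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400                                  # trials per proportion
variances = []
for p in (0.1, 0.5, 0.9):
    phat = rng.binomial(n, p, 20_000) / n        # simulated proportions
    variances.append(np.arcsin(np.sqrt(phat)).var())
# untransformed variance p*(1-p)/n varies 2.8x across these p values;
# the transformed variance is close to 1/(4n) for all of them
```

The logit transform mentioned in the abstract serves a different purpose (linearizing odds); only the arcsine transform equalizes the variances in this way.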
AVATAR -- Automatic variance reduction in Monte Carlo calculations
Van Riper, K.A.; Urbatsch, T.J.; Soran, P.D.
1997-05-01
AVATAR™ (Automatic Variance And Time of Analysis Reduction), accessed through the graphical user interface application, Justine™, is a superset of MCNP™ that automatically invokes THREEDANT™ for a three-dimensional deterministic adjoint calculation on a mesh independent of the Monte Carlo geometry, calculates weight windows, and runs MCNP. Computational efficiency increases by a factor of 2 to 5 for a three-detector oil well logging tool model. Human efficiency increases dramatically, since AVATAR eliminates the need for deep intuition and hours of tedious handwork.
Variance and bias computation for enhanced system identification
NASA Technical Reports Server (NTRS)
Bergmann, Martin; Longman, Richard W.; Juang, Jer-Nan
1989-01-01
A study is made of the use of a series of variance and bias confidence criteria recently developed for the eigensystem realization algorithm (ERA) identification technique. The criteria are shown to be very effective, not only for indicating the accuracy of the identification results (especially in terms of confidence intervals), but also for helping the ERA user to obtain better results. They help determine the best sample interval, the true system order, how much data to use and whether to introduce gaps in the data used, what dimension Hankel matrix to use, and how to limit the bias or correct for bias in the estimates.
Variance reduction in Monte Carlo analysis of rarefied gas diffusion
NASA Technical Reports Server (NTRS)
Perlmutter, M.
1972-01-01
The present analysis uses the Monte Carlo method to solve the problem of rarefied diffusion between parallel walls. The diffusing molecules are evaporated or emitted from one of two parallel walls and diffused through another molecular species. The analysis treats the diffusing molecule as undergoing a Markov random walk and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs the expected Markov walk payoff is retained but its variance is reduced so that the Monte Carlo result has a much smaller error.
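Biasing the transition probabilities while reweighting the payoff is importance sampling. The variance-reduction effect can be sketched for a simple rare-event estimate (a shifted-normal example, not the rarefied-gas walk itself):

```python
import numpy as np

def plain_estimate(n, rng):
    """Crude estimate of the tail probability P(X > 3), X ~ N(0,1)."""
    return np.mean(rng.normal(0, 1, n) > 3.0)

def shifted_estimate(n, rng, mu=3.0):
    """Sample from the biased density N(mu,1) and reweight each payoff
    by the likelihood ratio phi(x)/phi(x-mu) = exp(-mu*x + mu^2/2); the
    expectation is unchanged but the variance drops sharply."""
    x = rng.normal(mu, 1, n)
    w = np.exp(-mu * x + mu ** 2 / 2)
    return np.mean(w * (x > 3.0))

rng = np.random.default_rng(0)
plain = [plain_estimate(2000, rng) for _ in range(200)]
biased = [shifted_estimate(2000, rng) for _ in range(200)]
# true tail probability is about 1.35e-3
```

The biased walk visits the rare region often, and the payoff reweighting restores the original expectation, exactly as in the biased Markov-walk payoff above.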
Two-dimensional finite-element temperature variance analysis
NASA Technical Reports Server (NTRS)
Heuser, J. S.
1972-01-01
The finite element method is extended to thermal analysis by forming a variance analysis of temperature results so that the sensitivity of predicted temperatures to uncertainties in input variables is determined. The temperature fields within a finite number of elements are described in terms of the temperatures of vertices and the variational principle is used to minimize the integral equation describing thermal potential energy. A computer calculation yields the desired solution matrix of predicted temperatures and provides information about initial thermal parameters and their associated errors. Sample calculations show that all predicted temperatures are most affected by temperature values along fixed boundaries; more accurate specification of these temperatures reduces errors in thermal calculations.
Variances of Cylinder Parameters Fitted to Range Data
Franaszek, Marek
2012-01-01
Industrial pipelines are frequently scanned with 3D imaging systems (e.g., LADAR) and cylinders are fitted to the collected data. Then, the fitted as-built model is compared with the as-designed model. Meaningful comparison between the two models requires estimates of uncertainties of fitted model parameters. In this paper, the formulas for variances of cylinder parameters fitted with Nonlinear Least Squares to a point cloud acquired from one scanning position are derived. Two different error functions used in minimization are discussed: the orthogonal and the directional function. Derived formulas explain how some uncertainty components are propagated from measured ranges to fitted cylinder parameters. PMID:26900527
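The propagation of range-measurement variance into fitted-parameter variances follows the standard least-squares formula cov(θ) ≈ σ²(JᵀJ)⁻¹ evaluated at the optimum, where J is the Jacobian of the model. A sketch verifying this against the empirical scatter of repeated fits; a linear model stands in for the cylinder geometry, so the numbers are illustrative:

```python
import numpy as np

def fit_line_with_cov(x, y, sigma):
    """Least-squares line fit y = a*x + b plus the parameter covariance
    sigma^2 * (J^T J)^-1; the same formula applies to linearized
    nonlinear fits (e.g., cylinders) at the optimum."""
    J = np.column_stack([x, np.ones_like(x)])   # Jacobian of the model
    theta, *_ = np.linalg.lstsq(J, y, rcond=None)
    cov = sigma ** 2 * np.linalg.inv(J.T @ J)
    return theta, cov

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
sigma = 0.5                                     # measurement noise level
slopes = []
for _ in range(2000):
    y = 2.0 * x + 1.0 + rng.normal(0, sigma, x.size)
    (a, b), cov = fit_line_with_cov(x, y, sigma)
    slopes.append(a)
# predicted slope variance cov[0, 0] should match the empirical scatter
```

For the directional versus orthogonal error functions discussed in the paper, only the Jacobian changes; the covariance formula itself is the same.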
Unique features of trabectedin mechanism of action.
Larsen, Annette K; Galmarini, Carlos M; D'Incalci, Maurizio
2016-04-01
Trabectedin (Yondelis®, ET-743) is a marine-derived natural product that was initially isolated from the marine ascidian Ecteinascidia turbinata and is currently prepared synthetically. Trabectedin is used as a single agent for the treatment of patients with soft tissue sarcoma after failure of doxorubicin or ifosfamide or who are unsuited to receive these agents, and in patients with relapsed, platinum-sensitive ovarian cancer in combination with pegylated liposomal doxorubicin. Trabectedin presents a complex mechanism of action affecting key cell biology processes in tumor cells as well as in the tumor microenvironment. The inhibition of trans-activated transcription and the interaction with DNA repair proteins appear as a hallmark of the antiproliferative activity of trabectedin. Inhibition of active transcription is achieved by an initial direct mechanism that involves interaction with RNA polymerase II, thereby inducing its ubiquitination and degradation by the proteasome. This subsequently modulates the production of cytokines and chemokines by tumor and tumor-associated macrophages. Another interesting effect on activated transcription is mediated by the displacement of oncogenic transcription factors from their target promoters, thereby affecting oncogenic signaling addiction. In addition, it is well established that DNA repair systems including transcription-coupled nucleotide excision repair and homologous recombination play a role in the antitumor activity of trabectedin. Ongoing studies are currently addressing how to exploit these unique mechanistic features of trabectedin to combine this agent either with immunological or microenvironmental modulators or with classical chemotherapeutic agents in a more rational manner. PMID:26666647
Unique Ganglioside Recognition Strategies for Clostridial Neurotoxins
Benson, Marc A.; Fu, Zhuji; Kim, Jung-Ja P.; Baldwin, Michael R.
2012-03-15
Botulinum neurotoxins (BoNTs) and tetanus neurotoxin are the causative agents of the paralytic diseases botulism and tetanus, respectively. The potency of the clostridial neurotoxins (CNTs) relies primarily on their highly specific binding to nerve terminals and cleavage of SNARE proteins. Although individual CNTs utilize distinct proteins for entry, they share common ganglioside co-receptors. Here, we report the crystal structure of the BoNT/F receptor-binding domain in complex with the sugar moiety of ganglioside GD1a. GD1a binds in a shallow groove formed by the conserved peptide motif E ... H ... SXWY ... G, with additional stabilizing interactions provided by two arginine residues. Comparative analysis of BoNT/F with other CNTs revealed several differences in the interactions of each toxin with ganglioside. Notably, exchange of BoNT/F His-1241 with the corresponding lysine residue of BoNT/E resulted in increased affinity for GD1a and conferred the ability to bind ganglioside GM1a. Conversely, BoNT/E was not able to bind GM1a, demonstrating a discrete mechanism of ganglioside recognition. These findings provide a structural basis for ganglioside binding among the CNTs and show that individual toxins utilize unique ganglioside recognition strategies.
Weverling, Gerrit-Jan; de Wolf, Frank; Anderson, Roy M.
2016-01-01
Background About 90% of drugs fail in clinical development. The question is whether trials fail because of insufficient efficacy of the new treatment, or rather because of poor trial design that is unable to detect the true efficacy. The variance of the measured endpoints is a major, largely underestimated source of uncertainty in clinical trial design, particularly in acute viral infections. We use a clinical trial simulator to demonstrate how a thorough consideration of the variability inherent in clinical trials of novel therapies for acute viral infections can improve trial design. Methods and Findings We developed a clinical trial simulator to analyse the impact of three different types of variation on the outcome of a challenge study of influenza treatments for infected patients: individual patient variability in the response to the drug, the variance of the measurement procedure, and the variance of the lower limit of quantification of endpoint measurements. In addition, we investigated the impact of protocol variation on clinical trial outcome. We found that the greatest source of variance was inter-individual variability in the natural course of infection. Running a larger phase II study can save up to $38 million if a phase III trial that is unlikely to succeed is avoided. In addition, low-sensitivity viral load assays can lead to falsely negative trial outcomes. Conclusions Due to high inter-individual variability in natural infection, the most important variable in clinical trial design for challenge studies of potential novel influenza treatments is the number of participants: 100 participants are preferable to 50. Using more sensitive viral load assays increases the probability of a positive trial outcome, but may in some circumstances lead to false positive outcomes. Clinical trial simulations are powerful tools to identify the most important sources of variance in clinical trials and thereby help improve trial design. PMID:27332704
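The sample-size conclusion can be illustrated with a minimal trial simulator. All effect sizes and variance components below are illustrative assumptions, not parameters from the study; the point is only that power rises with arm size when between-subject variability dominates:

```python
# Toy two-arm challenge-trial simulation: inter-individual variability plus
# assay noise determine the power to detect a treatment effect.
# Effect sizes and variances here are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(1)

def trial_power(n_per_arm, drug_effect=0.5, between_subject_sd=1.0,
                assay_sd=0.3, n_trials=2000):
    """Fraction of simulated trials in which a z-test detects the effect."""
    hits = 0
    for _ in range(n_trials):
        # endpoint: e.g. log10 viral-load AUC; variability = biology + assay
        placebo = (rng.normal(0.0, between_subject_sd, n_per_arm)
                   + rng.normal(0.0, assay_sd, n_per_arm))
        treated = (rng.normal(-drug_effect, between_subject_sd, n_per_arm)
                   + rng.normal(0.0, assay_sd, n_per_arm))
        diff = placebo.mean() - treated.mean()
        se = np.sqrt(placebo.var(ddof=1) / n_per_arm
                     + treated.var(ddof=1) / n_per_arm)
        if diff / se > 1.96:          # one-sided z-test
            hits += 1
    return hits / n_trials

power_50 = trial_power(50)
power_100 = trial_power(100)
print(power_50, power_100)    # power improves with the larger arm size
```

Under these assumed numbers the larger trial gains substantially in power, which is the mechanism behind the abstract's claim that an adequately sized phase II study can avert a doomed phase III trial.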
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.
2004-01-01
The accuracy of traditional multispectral maximum-likelihood image classification is limited by the skewed statistical distributions of reflectances from the complex heterogeneous mixture of land cover types in urban areas. This work examines the utility of local variance, fractal dimension and Moran's I index of spatial autocorrelation in segmenting multispectral satellite imagery. Tools available in the Image Characterization and Modeling System (ICAMS) were used to analyze Landsat 7 imagery of Atlanta, Georgia. Although segmentation of panchromatic images is possible using indicators of spatial complexity, different land covers often yield similar values of these indices. Better results are obtained when a surface of local fractal dimension or spatial autocorrelation is combined as an additional layer in a supervised maximum-likelihood multispectral classification. The addition of fractal dimension measures is particularly effective at resolving land cover classes within urbanized areas, as compared to per-pixel spectral classification techniques.
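A local-variance texture layer of the kind stacked onto the spectral bands can be sketched directly. The window size and the synthetic two-texture "image" below are illustrative; ICAMS itself is not reproduced here:

```python
# Sketch of a local-variance texture surface: per-pixel variance in a
# moving window, usable as an extra layer in a supervised classification.
import numpy as np

def local_variance(band, w=3):
    """Per-pixel variance in a w x w moving window (edges cropped)."""
    h, ht = band.shape[0] - w + 1, band.shape[1] - w + 1
    out = np.empty((h, ht))
    for i in range(h):
        for j in range(ht):
            out[i, j] = band[i:i + w, j:j + w].var()
    return out

# Synthetic scene: smooth surface on the left, noisy texture on the right.
rng = np.random.default_rng(2)
img = np.zeros((20, 40))
img[:, 20:] = rng.normal(0, 1, (20, 20))

tex = local_variance(img)
# Texture separates the two regions even when mean reflectance is similar.
print(tex[:, :10].mean() < tex[:, -10:].mean())  # True
```

This is the simplest of the three spatial indices the paper compares; fractal dimension and Moran's I play the same role of summarizing spatial complexity per window.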
[The correlations between psychological indices and cardiac variance].
Nikolova, R; Danev, S; Amudzhev, P; Datsov, E
1995-01-01
Correlative links between physiologic and psychologic indicators were studied in subjects occupied either in airline transportation or in the chemical industry. Investigations covered three groups of persons: managers of airline traffic (57 subjects); workers at the "Vratsa" Chemical plant (14 subjects); and operators at the "Vratsa" Chemical plant (14 subjects). The physiologic parameters measured included indicators of cardiac variance: mean--mean value of successive cardiac intervals; SD--standard deviation of the mean value of cardiac intervals (R-R); AMo--amplitude of the mode; HI--homeostatic index; Pt--spectral power of R-R related to thermoregulation; Pp--spectral power of R-R related to respiration; IBO--index of centralization. The psychologic parameters included: extroversion, introversion, neuroticism, psychoticism, interpersonal conflicts, self-control, social support, self-confidence, work satisfaction, and psychosomatic complaints. There was evidence of significant and highly significant correlative links between indicators of cardiac variance and psychologic indicators. There thus appeared to exist certain relationships between the physiologic and psychologic levels during lengthy stressful occupational exposure. PMID:8524754
Cosmic variance of the spectral index from mode coupling
Bramante, Joseph; Kumar, Jason; Nelson, Elliot; Shandera, Sarah
2013-11-01
We demonstrate that local, scale-dependent non-Gaussianity can generate cosmic variance uncertainty in the observed spectral index of primordial curvature perturbations. In a universe much larger than our current Hubble volume, locally unobservable long wavelength modes can induce a scale-dependence in the power spectrum of typical subvolumes, so that the observed spectral index varies at a cosmologically significant level (|Δn_s| ∼ O(0.04)). Similarly, we show that the observed bispectrum can have an induced scale dependence that varies about the global shape. If tensor modes are coupled to long wavelength modes of a second field, the locally observed tensor power and spectral index can also vary. All of these effects, which can be introduced in models where the observed non-Gaussianity is consistent with bounds from the Planck satellite, loosen the constraints that observations place on the parameters of theories of inflation with mode coupling. We suggest observational constraints that future measurements could aim for to close this window of cosmic variance uncertainty.
Concentration variance decay during magma mixing: a volcanic chronometer
NASA Astrophysics Data System (ADS)
Perugini, Diego; de Campos, Cristina P.; Petrelli, Maurizio; Dingwell, Donald B.
2015-09-01
The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process, and its decay (CVD) with time is an inevitable consequence of the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature magma mixing experiments. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing - a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in the future in order to constrain typical "mixing to eruption" time lapses, such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest.
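The clock itself is a one-line inversion of an exponential decay. A minimal sketch, assuming σ²(t) = σ²(0)·exp(−R·t); the decay rate R below is a hypothetical placeholder, not a calibrated value from the experiments:

```python
# Concentration-variance-decay (CVD) chronometer sketch: invert an assumed
# exponential variance decay to get a mixing-to-eruption time.
import numpy as np

def mixing_time(var_ratio, decay_rate):
    """Elapsed time given sigma2(t)/sigma2(0) and the calibrated CVD rate R."""
    return -np.log(var_ratio) / decay_rate

R = 0.002        # hypothetical CVD rate in 1/s (placeholder, not calibrated)
ratio = 0.05     # measured variance is 5% of the initial variance
t = mixing_time(ratio, R)
print(t / 60)    # mixing-to-eruption time in minutes (~25 here)
```

The appeal of the method noted in the abstract is visible here: only the variance ratio and the experimentally calibrated rate enter, with no dependence on the advective mixing history.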
PET image reconstruction: mean, variance, and optimal minimax criterion
NASA Astrophysics Data System (ADS)
Liu, Huafeng; Gao, Fei; Guo, Min; Xue, Liying; Nie, Jing; Shi, Pengcheng
2015-04-01
Given the noisy nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal minimax criterion. The proposed framework formulates the PET image reconstruction problem as a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors under maximal system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties in a complete way. The framework is then optimized by ∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms, which rely on statistical modeling of the measurement data or noise, the proposed joint estimation starts from the point of view of signal energies and can handle anything from imperfect statistical assumptions to the complete absence of a priori statistical assumptions. The performance and accuracy of the reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small animal PET scanner and real patient scans are also conducted to assess clinical potential.
Wavelet-Variance-Based Estimation for Composite Stochastic Processes.
Guerrier, Stéphane; Skaloud, Jan; Stebler, Yannick; Victoria-Feser, Maria-Pia
2013-09-01
This article presents a new estimation method for the parameters of a time series model. We consider here composite Gaussian processes that are the sum of independent Gaussian processes which, in turn, explain an important aspect of the time series, as is the case in engineering and natural sciences. The proposed estimation method offers an alternative to classical likelihood-based estimation; it is straightforward to implement and often the only feasible estimation method for complex models. The estimator is obtained by optimizing a criterion based on a standardized distance between the sample wavelet variance (WV) estimates and the model-based WV. Indeed, the WV provides a decomposition of the process variance across different scales, so that the estimates carry information about different features of the stochastic model. We derive the asymptotic properties of the proposed estimator for inference and perform a simulation study comparing our estimator to the MLE and the LSE under different models. We also set sufficient conditions on composite models for our estimator to be consistent, conditions that are easy to verify. We use the new estimator to estimate the parameters of the stochastic error of the sum of three first-order Gauss-Markov processes by means of a sample of over 800,000 measurements issued from gyroscopes that compose inertial navigation systems. Supplementary materials for this article are available online. PMID:24174689
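The scale-by-scale decomposition at the heart of the method can be sketched with a simplified Haar-type wavelet variance (block means at dyadic scales, half the variance of their first differences). This is an illustrative estimator, not the paper's; for white noise it decays as 1/τ, which the check below exploits:

```python
# Simplified Haar-type wavelet-variance curve across dyadic scales.
import numpy as np

def wavelet_variance(x, scales):
    """Haar-style WV at each scale tau (non-overlapping block means)."""
    wv = []
    for tau in scales:
        n = len(x) // tau
        means = x[:n * tau].reshape(n, tau).mean(axis=1)
        wv.append(0.5 * np.diff(means).var())
    return np.array(wv)

rng = np.random.default_rng(3)
x = rng.normal(0, 1, 2**16)               # white-noise "gyroscope" record
scales = np.array([1, 2, 4, 8, 16, 32])
wv = wavelet_variance(x, scales)

# Slope of log2 WV vs log2 tau: close to -1 for white noise.
slope = np.polyfit(np.log2(scales), np.log2(wv), 1)[0]
print(round(slope, 1))
```

Different noise processes (random walk, Gauss-Markov) produce different slopes per scale, which is the information a WV-matching criterion uses to identify the parameters of a composite model.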
Minimum variance brain source localization for short data sequences.
Ravan, Maryam; Reilly, James P; Hasey, Gary
2014-02-01
In the electroencephalogram (EEG) or magnetoencephalogram (MEG) context, brain source localization methods that rely on estimating second-order statistics often fail when the number of samples of the recorded data sequences is small in comparison to the number of electrodes. This condition is particularly relevant when measuring evoked potentials. Due to the correlated background EEG/MEG signal, an adaptive approach to localization is desirable. Previous work has addressed these issues by reducing the adaptive degrees of freedom (DoFs). This reduction results in decreased resolution and accuracy of the estimated source configuration. This paper develops and tests a new multistage adaptive processing technique based on the minimum variance beamformer for brain source localization that has been previously used in the radar statistical signal processing context. This processing, referred to as the fast fully adaptive (FFA) approach, can significantly reduce the required sample support, while still preserving all available DoFs. To demonstrate the performance of the FFA approach in the limited data scenario, simulation and experimental results are compared with two previous beamforming approaches; i.e., the fully adaptive minimum variance beamforming method and the beamspace beamforming method. Both simulation and experimental results demonstrate that the FFA method can localize all types of brain activity more accurately than the other approaches with limited data. PMID:24108457
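The minimum-variance (Capon/MVDR) weights underlying all three beamformers compared here take one closed form: w = R⁻¹a / (aᴴR⁻¹a), minimizing output power subject to unit gain on the source lead field a. A sketch with an illustrative array, source, and noise model (the FFA staging itself is not reproduced):

```python
# Minimum-variance (MVDR) beamformer sketch for a toy sensor array.
import numpy as np

rng = np.random.default_rng(4)
m = 8                               # sensors (electrodes)
a = rng.normal(size=m)              # lead-field / steering vector (assumed)
a /= np.linalg.norm(a)

# Simulated recordings: source activity through a, plus white sensor noise.
n_samples = 5000
s = rng.normal(0, 2.0, n_samples)
noise = rng.normal(0, 1.0, (m, n_samples))
X = np.outer(a, s) + noise

R = X @ X.T / n_samples             # sample covariance (needs enough samples)
Ri_a = np.linalg.solve(R, a)
w = Ri_a / (a @ Ri_a)               # MVDR weights

print(abs(w @ a - 1.0) < 1e-9)      # distortionless (unit-gain) constraint: True
```

The abstract's point lives in the line computing R: when n_samples is small relative to m, the sample covariance is poorly conditioned, and reduced-DoF or multistage schemes such as the FFA approach are needed.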
Hydraulic geometry of river cross sections; theory of minimum variance
Williams, Garnett P.
1978-01-01
This study deals with the rates at which mean velocity, mean depth, and water-surface width increase with water discharge at a cross section on an alluvial stream. Such relations often follow power laws, the exponents in which are called hydraulic exponents. The Langbein (1964) minimum-variance theory is examined in regard to its validity and its ability to predict observed hydraulic exponents. The variables used with the theory were velocity, depth, width, bed shear stress, friction factor, slope (energy gradient), and stream power. Slope is often constant, in which case only velocity, depth, width, shear and friction factor need be considered. The theory was tested against a wide range of field data from various geographic areas of the United States. The original theory was intended to produce only the average hydraulic exponents for a group of cross sections in a similar type of geologic or hydraulic environment. The theory does predict these average exponents with a reasonable degree of accuracy. An attempt to forecast the exponents at any selected cross section was moderately successful. Empirical equations are more accurate than the minimum variance, Gauckler-Manning, or Chezy methods. Predictions of the exponent of width are most reliable, the exponent of depth fair, and the exponent of mean velocity poor. (Woodard-USGS)
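The hydraulic exponents are slopes of log-log fits, tied together by continuity (Q = w·d·v forces b + f + m = 1 for w = aQᵇ, d = cQᶠ, v = kQᵐ). A sketch on synthetic data with assumed exponents, showing the fit recovering them:

```python
# At-a-station hydraulic geometry: fit hydraulic exponents by log-log
# regression and check the continuity constraint b + f + m = 1.
import numpy as np

rng = np.random.default_rng(5)
Q = np.logspace(0, 3, 40)                    # discharge, arbitrary units
b_true, f_true = 0.1, 0.4                    # assumed width/depth exponents
noise = lambda: rng.normal(0, 0.02, Q.size)  # scatter in log space

width = 10 * Q**b_true * np.exp(noise())
depth = 2 * Q**f_true * np.exp(noise())
velocity = Q / (width * depth)               # continuity defines v exactly

def exponent(y):
    """Hydraulic exponent = slope of log y against log Q."""
    return np.polyfit(np.log(Q), np.log(y), 1)[0]

b, f, m = exponent(width), exponent(depth), exponent(velocity)
print(round(b + f + m, 6))                   # continuity: sums to 1
```

Minimum-variance theory predicts which (b, f, m) triple a channel adopts among all triples satisfying this constraint; the fit above is only the measurement side of that comparison.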
Stochastic Mixing Model with Power Law Decay of Variance
NASA Technical Reports Server (NTRS)
Fedotov, S.; Ihme, M.; Pitsch, H.
2003-01-01
Here we present a simple stochastic mixing model based on the law of large numbers (LLN). The reason why the LLN is involved in our formulation of the mixing problem is that the random conserved scalar c = c(t,x(t)) appears to behave as a sample mean. It converges to the mean value μ, while the variance σ_c^2(t) decays approximately as t^-1. Since the variance of the scalar typically decays faster than that of a sample mean (the decay exponent is typically greater than unity), we introduce some nonlinear modifications into the corresponding pdf equation. The main idea is to develop a robust model that is independent of restrictive assumptions about the shape of the pdf. The remainder of this paper is organized as follows. In Section 2 we derive an integral equation from a stochastic difference equation describing the evolution of the pdf of a passive scalar in time. The stochastic difference equation introduces an exchange rate γ_n, which we model in a first step as a deterministic function. In a second step, we generalize γ_n as a stochastic variable taking fluctuations in the inhomogeneous environment into account. In Section 3 we solve the nonlinear integral equation numerically and analyze the influence of the different parameters on the decay rate. The paper finishes with a conclusion.
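The baseline fact the model builds on, that a sample mean of n i.i.d. draws has variance σ²/n and hence decays as t⁻¹ if one draw arrives per time step, is easy to confirm numerically (the mixing model itself is not implemented here):

```python
# Monte Carlo check of the 1/n variance decay of a sample mean,
# the LLN baseline against which the scalar's faster decay is compared.
import numpy as np

rng = np.random.default_rng(6)
sigma2 = 4.0
n_real = 20000                        # independent realizations

def var_of_mean(n):
    """Empirical variance, across realizations, of an n-sample mean."""
    draws = rng.normal(0, np.sqrt(sigma2), (n_real, n))
    return draws.mean(axis=1).var()

v10, v100 = var_of_mean(10), var_of_mean(100)
print(v10 / v100)    # close to 10: variance decays like 1/n
```

A passive scalar in a turbulent flow decays faster than this t⁻¹ baseline, which is exactly what motivates the nonlinear modification of the pdf equation described above.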
Implications and applications of the variance-based uncertainty equalities
NASA Astrophysics Data System (ADS)
Yao, Yao; Xiao, Xing; Wang, Xiaoguang; Sun, C. P.
2015-06-01
In quantum mechanics, the variance-based Heisenberg-type uncertainty relations are a series of mathematical inequalities that pose fundamental limits on the achievable accuracy of state preparations. In contrast, we construct and formulate two quantum uncertainty equalities, which hold for all pairs of incompatible observables and imply the new uncertainty relations recently introduced by L. Maccone and A. K. Pati [Phys. Rev. Lett. 113, 260401 (2014), 10.1103/PhysRevLett.113.260401]. In fact, we obtain a series of inequalities with hierarchical structure, including the Maccone-Pati inequalities as a special (weakest) case. Furthermore, we present an explicit interpretation underlying the derivations and relate these relations to the so-called intelligent states. As an illustration, we investigate the properties of these uncertainty inequalities in the qubit system, and a state-independent bound is obtained for the sum of variances. Finally, we apply these inequalities to the spin squeezing scenario, and their implication for interferometric sensitivity is also discussed.
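A state-independent sum-of-variances bound of the kind mentioned for the qubit system can be checked numerically. The sketch below verifies the Bloch-sphere identity Var(σx) + Var(σy) = 2 − rx² − ry² ≥ 1 for pure states; it illustrates the flavor of such bounds, not the paper's full hierarchy:

```python
# Numerical check of a state-independent lower bound on the sum of
# variances for a qubit: Var(sigma_x) + Var(sigma_y) >= 1 for pure states.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def var(op, psi):
    """Variance of a Hermitian operator in state psi."""
    mean = np.vdot(psi, op @ psi).real
    return np.vdot(psi, op @ op @ psi).real - mean**2

rng = np.random.default_rng(7)
min_sum = np.inf
for _ in range(5000):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    min_sum = min(min_sum, var(sx, psi) + var(sy, psi))

print(min_sum >= 1 - 1e-9)  # True: the bound is state independent
```

Because σx² = σy² = I, the sum of variances equals 2 minus the in-plane Bloch components squared, so no qubit state can beat the bound, unlike the trivial Robertson product bound, which vanishes for eigenstates.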
Variance of the Quantum Dwell Time for a Nonrelativistic Particle
NASA Technical Reports Server (NTRS)
Hahne, Gerhard
2012-01-01
Munoz, Seidel, and Muga [Phys. Rev. A 79, 012108 (2009)], following an earlier proposal by Pollak and Miller [Phys. Rev. Lett. 53, 115 (1984)] in the context of a theory of a collinear chemical reaction, showed that suitable moments of a two-flux correlation function could be manipulated to yield expressions for the mean quantum dwell time and mean square quantum dwell time for a structureless particle scattering from a time-independent potential energy field between two parallel lines in a two-dimensional spacetime. The present work proposes a generalization to a charged, nonrelativistic particle scattering from a transient, spatially confined electromagnetic vector potential in four-dimensional spacetime. The geometry of the spacetime domain is that of the slab between a pair of parallel planes, in particular those defined by constant values of the third (z) spatial coordinate. The mean Nth power, N = 1, 2, 3, . . ., of the quantum dwell time in the slab is given by an expression involving an N-flux-correlation function. All these means are shown to be nonnegative. The N = 1 formula reduces to an S-matrix result published previously [G. E. Hahne, J. Phys. A 36, 7149 (2003)]; an explicit formula for N = 2, and hence for the variance of the dwell time in terms of the S-matrix, is worked out. A formula representing an incommensurability principle between variances of the output-minus-input flux of a pair of dynamical variables (such as the particle's time flux and others) is derived.
Cosmic Variance in the Nanohertz Gravitational Wave Background
NASA Astrophysics Data System (ADS)
Roebber, Elinore; Holder, Gilbert; Holz, Daniel E.; Warren, Michael
2016-03-01
We use large N-body simulations and empirical scaling relations between dark matter halos, galaxies, and supermassive black holes (SMBHs) to estimate the formation rates of SMBH binaries (SMBBHs) and the resulting low-frequency stochastic gravitational wave background (GWB). We find this GWB to be relatively insensitive (≲ 10%) to cosmological parameters, with only slight variation between WMAP5 and Planck cosmologies. We find that uncertainty in the astrophysical scaling relations changes the amplitude of the GWB by a factor of ∼2. Current observational limits are already constraining this predicted range of models. We investigate the Poisson variance in the amplitude of the GWB for randomly generated populations of SMBBHs, finding a scatter of order unity per frequency bin below 10 nHz, and increasing to a factor of ∼10 near 100 nHz. This variance is a result of the rarity of the most massive binaries, which dominate the signal, and acts as a fundamental uncertainty on the amplitude of the underlying power law spectrum. This Poisson uncertainty dominates at ≳ 20 nHz, while at lower frequencies the dominant uncertainty is related to our poor understanding of the astrophysical scaling relations, although very low frequencies may be dominated by uncertainties related to the final parsec problem and the processes which drive binaries to the gravitational wave dominated regime. Cosmological effects are negligible at all frequencies.
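The Poisson-variance argument reduces to a toy model: the background power in a bin is a sum over a Poisson-distributed population, and rare, loud sources dominate the scatter. The mass function and strain scaling below are assumptions chosen only to make the heavy-tail effect visible, not values from the simulations:

```python
# Toy Poisson scatter in a stochastic-background amplitude: steep source
# mass function (massive = rare) plus steeply rising per-source power.
import numpy as np

rng = np.random.default_rng(8)
mass = np.logspace(0, 2, 30)           # binary "masses", arbitrary units
expected_n = 1e4 * mass**-3.0          # assumed steep mass function
h2 = mass**(10.0 / 3.0)                # per-source power, h ~ m^(5/3) squared

def realizations(n_draws=3000):
    """Total background power per random population realization."""
    counts = rng.poisson(expected_n, size=(n_draws, mass.size))
    return (counts * h2).sum(axis=1)

power = realizations()
scatter = power.std() / power.mean()
print(scatter)   # sizeable fractional scatter despite ~1e4 faint sources
```

In this toy setup the variance of the total is dominated by the heaviest bins, whose expected counts are far below one, mirroring the paper's conclusion that the rarest, most massive binaries set a floor on the amplitude uncertainty.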
VARIANCE ESTIMATION IN DOMAIN DECOMPOSED MONTE CARLO EIGENVALUE CALCULATIONS
Mervin, Brenden T; Maldonado, G. Ivan; Mosher, Scott W; Evans, Thomas M; Wagner, John C
2012-01-01
The number of tallies performed in a given Monte Carlo calculation is limited in most modern Monte Carlo codes by the amount of memory that can be allocated on a single processor. By using domain decomposition, the calculation is now limited by the total amount of memory available on all processors, allowing for significantly more tallies to be performed. However, decomposing the problem geometry introduces significant issues with the way tally statistics are conventionally calculated. In order to deal with the issue of calculating tally variances in domain decomposed environments for the Shift hybrid Monte Carlo code, this paper presents an alternative approach for reactor scenarios in which an assumption is made that once a particle leaves a domain, it does not reenter the domain. Particles that reenter the domain are instead treated as separate independent histories. This assumption introduces a bias that inevitably leads to under-prediction of the calculated variances for tallies within a few mean free paths of the domain boundaries. However, through the use of different decomposition strategies, primarily overlapping domains, the negative effects of such an assumption can be significantly reduced to within reasonable levels.
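The bias mechanism can be shown in miniature: if one history's boundary-crossing score x = a + b is tallied as two "independent" histories a and b, the covariance term 2·Cov(a, b) is lost from the variance estimate. The score model below is an illustrative stand-in for transport tallies:

```python
# Miniature of the split-history variance bias: treating a re-entrant
# particle as a fresh independent history drops the covariance between
# its pre- and post-boundary scores.
import numpy as np

rng = np.random.default_rng(9)
n = 100000
a = rng.exponential(1.0, n)               # score before leaving the domain
b = 0.8 * a + rng.exponential(0.2, n)     # re-entrant score, correlated with a

true_var = (a + b).var(ddof=1)                  # correct per-history variance
split_var = a.var(ddof=1) + b.var(ddof=1)       # what split tallies estimate

print(split_var < true_var)  # True here: positive covariance is dropped
```

With positively correlated scores, as is typical near a domain boundary, the split estimate under-predicts the variance, which is the behavior the overlapping-domain strategies are designed to mitigate.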
Variational Study of SU(3) Gauge Theory by Stationary Variance
NASA Astrophysics Data System (ADS)
Siringo, Fabio
2015-07-01
The principle of stationary variance is advocated as a viable variational approach to gauge theories. The method can be regarded as a second-order extension of the Gaussian Effective Potential (GEP) and seems to be suited for describing the strong-coupling limit of non-Abelian gauge theories. The single variational parameter of the GEP is replaced by trial unknown two-point functions, with infinite variational parameters to be optimized by the solution of a set of coupled integral equations. The stationary conditions can be easily derived by the self-energy, without having to write the effective potential, making use of a general relation between self-energy and functional derivatives that has been proven to any order. The low-energy limit of pure Yang-Mills SU(3) gauge theory has been studied in Feynman gauge, and the stationary equations are written as integral equations for the gluon and ghost propagators. A physically sensible solution is found for any strength of the coupling. The gluon propagator is finite in the infrared, with a dynamical mass that decreases as a power at high energies. At variance with some recent findings in Feynman gauge, the ghost dressing function does not vanish in the infrared limit and a decoupling scenario emerges as recently reported for the Landau gauge.
Constructing Dense Graphs with Unique Hamiltonian Cycles
ERIC Educational Resources Information Center
Lynch, Mark A. M.
2012-01-01
It is not difficult to construct dense graphs containing Hamiltonian cycles, but it is difficult to generate dense graphs that are guaranteed to contain a unique Hamiltonian cycle. This article presents an algorithm for generating arbitrarily large simple graphs containing "unique" Hamiltonian cycles. These graphs can be turned into dense graphs…
Teaching and Learning with Individually Unique Exercises
ERIC Educational Resources Information Center
Joerding, Wayne
2010-01-01
In this article, the author describes the pedagogical benefits of giving students individually unique homework exercises from an exercise template. Evidence from a test of this approach shows statistically significant improvements in subsequent exam performance by students receiving unique problems compared with students who received traditional…
40 CFR 142.301 - What is a small system variance?
Code of Federal Regulations, 2010 CFR
2010-07-01
....301 Section 142.301 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System... issuance of variances from the requirement to comply with a maximum contaminant level or...
40 CFR 142.301 - What is a small system variance?
Code of Federal Regulations, 2014 CFR
2014-07-01
....301 Section 142.301 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System... issuance of variances from the requirement to comply with a maximum contaminant level or...
40 CFR 142.301 - What is a small system variance?
Code of Federal Regulations, 2012 CFR
2012-07-01
....301 Section 142.301 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System... issuance of variances from the requirement to comply with a maximum contaminant level or...
40 CFR 142.301 - What is a small system variance?
Code of Federal Regulations, 2013 CFR
2013-07-01
....301 Section 142.301 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System... issuance of variances from the requirement to comply with a maximum contaminant level or...
40 CFR 142.301 - What is a small system variance?
Code of Federal Regulations, 2011 CFR
2011-07-01
....301 Section 142.301 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System... issuance of variances from the requirement to comply with a maximum contaminant level or...
NASA Astrophysics Data System (ADS)
Maboodi, Mohsen; Camacho, Eduardo F.; Khaki-Sedigh, Ali
2015-10-01
This paper presents a non-linear generalised minimum variance (NGMV) controller for a second-order Volterra series model with a general linear additive disturbance. The Volterra series model provides a natural extension of a linear convolution model, with the nonlinearity captured in an additive term. The design procedure is carried out entirely in the state-space framework, which facilitates the application of other analysis and design methods in that framework. First, the non-linear minimum variance (NMV) controller is introduced; then, by changing the cost function, the NGMV controller is defined as an extension of the linear case. The cost function is used in its simplest form and can easily be extended to the general case. Simulation results show the effectiveness of the proposed non-linear method.
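A discrete second-order Volterra series plant of the kind the controller is designed for can be simulated in a few lines. The kernels below are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

# Second-order Volterra model with additive disturbance:
#   y(k) = sum_i h1[i] u(k-i) + sum_{i,j} h2[i,j] u(k-i) u(k-j) + d(k)
M = 3                           # kernel memory length (assumed)
h1 = np.array([0.5, 0.3, 0.1])  # linear kernel (illustrative)
h2 = 0.05 * np.eye(M)           # quadratic kernel (illustrative, diagonal)

def volterra2(u, d):
    """Simulate the second-order Volterra model for input u and disturbance d."""
    y = np.zeros_like(u)
    for k in range(len(u)):
        # Past-input window u(k), u(k-1), ..., u(k-M+1); zero before start.
        w = np.array([u[k - i] if k - i >= 0 else 0.0 for i in range(M)])
        y[k] = h1 @ w + w @ h2 @ w + d[k]
    return y

u = np.ones(10)   # unit step input
d = np.zeros(10)  # disturbance-free run
y = volterra2(u, d)
# Steady state: sum(h1) + quadratic term = 0.9 + 0.15 = 1.05
```

For the step input the output settles at 1.05 once the memory window fills, which makes the additive quadratic contribution easy to see against the linear gain of 0.9.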
NASA Astrophysics Data System (ADS)
Cheng, Ren-Xiang; Chao, Tao; Xiao-Jun, Liu
2015-11-01
The speed-of-sound variance will decrease the imaging quality of photoacoustic tomography in acoustically inhomogeneous tissue. In this study, ultrasound computed tomography is combined with photoacoustic tomography to enhance photoacoustic tomography in this situation. The speed-of-sound information is recovered by ultrasound computed tomography. Then, an improved delay-and-sum method is used to reconstruct the image from the photoacoustic signals. The simulation results validate that the proposed method can obtain a better photoacoustic tomography image than the conventional method when the speed-of-sound variance is increased. In addition, the influences of the speed-of-sound variance and the fan angle on the image quality are quantitatively explored to optimize the imaging scheme. The proposed method performs well even when the speed-of-sound variance reaches 14.2%. Furthermore, an optimized fan angle is revealed, which maintains good image quality at a low hardware cost. This study has potential value in extending the biomedical application of photoacoustic tomography. Project supported by the National Basic Research Program of China (Grant No. 2012CB921504), the National Natural Science Foundation of China (Grant Nos. 11422439, 11274167, and 11274171), and the Specialized Research Fund for the Doctoral Program of Higher Education, China (Grant No. 20120091110001).
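The conventional delay-and-sum step, and its sensitivity to an incorrect speed of sound, can be illustrated with a toy simulation. The idealized ring geometry, sampling rate, and impulse-like signals below are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

# Point photoacoustic source at the origin, sensors on a ring, uniform
# speed of sound. With the correct speed, the backprojected delays line
# up and the source pixel sums coherently; a wrong speed defocuses it.
c = 1500.0                     # speed of sound (m/s), assumed uniform
fs = 20e6                      # sampling rate (Hz)
n_sensors, radius = 64, 0.02   # sensor ring, 2 cm radius
angles = 2 * np.pi * np.arange(n_sensors) / n_sensors
sensors = radius * np.column_stack([np.cos(angles), np.sin(angles)])

# Synthetic sensor data: a unit pulse arriving after the true delay.
n_samples = 1024
data = np.zeros((n_sensors, n_samples))
for s in range(n_sensors):
    delay = np.linalg.norm(sensors[s]) / c  # source at origin
    data[s, int(round(delay * fs))] = 1.0

def das(pixel, speed):
    """Delay-and-sum amplitude at one pixel for an assumed sound speed."""
    total = 0.0
    for s in range(n_sensors):
        t = np.linalg.norm(sensors[s] - pixel) / speed
        idx = int(round(t * fs))
        if 0 <= idx < n_samples:
            total += data[s, idx]
    return total

on_focus = das(np.array([0.0, 0.0]), c)         # correct speed: coherent sum
off_focus = das(np.array([0.0, 0.0]), 0.9 * c)  # 10% speed error: defocused
```

The paper's improvement amounts to replacing the single assumed `speed` with the spatially resolved speed-of-sound map recovered by ultrasound computed tomography when computing each delay.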
Unique Challenges Testing SDRs for Space
NASA Technical Reports Server (NTRS)
Johnson, Sandra; Chelmins, David; Downey, Joseph; Nappier, Jennifer
2013-01-01
This paper describes the approach used by the Space Communication and Navigation (SCaN) Testbed team to qualify three Software Defined Radios (SDR) for operation in space and the characterization of the platform to enable upgrades on-orbit. The three SDRs represent a significant portion of the new technologies being studied on board the SCaN Testbed, which is operating on an external truss on the International Space Station (ISS). The SCaN Testbed provides experimenters an opportunity to develop and demonstrate experimental waveforms and applications for communication, networking, and navigation concepts and advance the understanding of developing and operating SDRs in space. Qualifying a Software Defined Radio for the space environment requires additional consideration versus a hardware radio. Tests that incorporate characterization of the platform to provide information necessary for future waveforms, which might exercise extended capabilities of the hardware, are needed. The development life cycle for the radio follows the software development life cycle, where changes can be incorporated at various stages of development and test. It also enables flexibility to be added with minor additional effort. Although this provides tremendous advantages, managing the complexity inherent in a software implementation requires testing beyond the traditional hardware radio test plan. Due to schedule and resource limitations and parallel development activities, the subsystem testing of the SDRs at the vendor sites was primarily limited to typical fixed transceiver type of testing. NASA's Glenn Research Center (GRC) was responsible for the integration and testing of the SDRs into the SCaN Testbed system and conducting the investigation of the SDR to advance the technology to be accepted by missions. This paper will describe the unique tests that were conducted at both the subsystem and system level, including environmental testing, and present results. For example, test
Understanding the influence of watershed storage caused by human interferences on ET variance
NASA Astrophysics Data System (ADS)
Zeng, R.; Cai, X.
2014-12-01
Understanding the temporal variance of evapotranspiration (ET) at the watershed scale remains a challenging task, because it is affected by complex climate conditions, soil properties, vegetation, groundwater and human activities. In a changing environment with extensive and intensive human interference, understanding ET variance and its factors is important for sustainable water resources management. This study presents an analysis of the effect of storage change caused by human activities on ET variance. Irrigation usually filters ET variance through the use of surface water and groundwater; however, over-irrigation may deplete watershed storage, which changes the coincidence of water availability and energy supply for ET. This study develops a framework by incorporating the water balance and the Budyko hypothesis. It decomposes ET variance into the variances of precipitation, potential ET, and catchment storage change, and their covariances. The contributions to ET variance from the various components are scaled by weighting functions expressed in terms of long-term climate conditions and catchment properties. ET variance is assessed using records from 32 major river basins across the world. It is found that ET variance is dominated by precipitation variance under hot-dry conditions and by evaporative demand variance under cool-wet conditions, while the coincidence of water and energy supply controls ET variance under moderate climate conditions. Watershed storage change plays an increasingly important role in determining ET variance at relatively short time scales. By incorporating storage change caused by human interference, this framework corrects the over-estimation of ET variance in hot-dry climates and the under-estimation of ET variance in cool-wet climates. Furthermore, classification of the dominant factors of ET variance shows patterns similar to geographic zonation.
NASA Astrophysics Data System (ADS)
Zeng, Ruijie; Cai, Ximing
2015-05-01
Understanding the temporal variance of evapotranspiration (ET) at the catchment scale remains a challenging task, because ET variance results from the complex interactions among climate, soil, vegetation, groundwater and human activities. This study extends the framework for ET variance analysis of Koster and Suarez (1999) by incorporating the water balance and the Budyko hypothesis. ET variance is decomposed into the variance/covariance of precipitation, potential ET, and catchment storage change. The contributions to ET variance from those components are quantified by long-term climate conditions (i.e., precipitation and potential ET) and catchment properties through the Budyko equation. It is found that climate determines ET variance under cool-wet, hot-dry and hot-wet conditions, while catchment storage change and climate together control ET variance under cool-dry conditions. Thus the major factors of ET variance can be categorized based on the conditions of climate and catchment storage change. To demonstrate the analysis, both the inter-annual and intra-annual ET variances are assessed in the Murray-Darling Basin, and it is found that the framework corrects the over-estimation of ET variance in the arid basin. This study provides an extended theoretical framework to assess ET temporal variance under the impacts of both climate and storage change at the catchment scale.
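In its simplest water-balance form, the decomposition idea can be verified on synthetic annual series: with ET = P - Q - dS, the ET variance splits exactly into component variances plus covariance terms. This sketch omits the Budyko weighting and uses made-up numbers:

```python
import numpy as np

# Synthetic annual series (illustrative only).
rng = np.random.default_rng(0)
n = 50
P = rng.normal(800.0, 120.0, n)            # precipitation (mm/yr)
Q = 0.3 * P + rng.normal(0.0, 20.0, n)     # runoff, correlated with P
dS = rng.normal(0.0, 40.0, n)              # storage change (e.g., pumping)
ET = P - Q - dS                            # water balance

# Var(P - Q - dS) = Var(P) + Var(Q) + Var(dS)
#                   - 2 Cov(P,Q) - 2 Cov(P,dS) + 2 Cov(Q,dS)
cov = np.cov(np.vstack([P, Q, dS]))
var_et = (cov[0, 0] + cov[1, 1] + cov[2, 2]
          - 2 * cov[0, 1] - 2 * cov[0, 2] + 2 * cov[1, 2])
assert np.isclose(var_et, np.var(ET, ddof=1))
```

The papers' framework goes further by expressing the component weights through long-term climate and the Budyko equation, but the covariance bookkeeping above is the common core.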
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2011-01-01
Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied to heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
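Where a closed-form sample size formula does not apply, the power of a candidate sample size under heterogeneous variances can be estimated by simulation. The sketch below implements the standard Welch one-way ANOVA statistic and a simulation-based power estimate; the group means, standard deviations, and sizes are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy.stats import f as f_dist

def welch_anova_p(groups):
    """p-value of Welch's heteroscedastic one-way ANOVA."""
    k = len(groups)
    n = np.array([len(g) for g in groups], float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                                 # precision weights
    mw = np.sum(w * m) / np.sum(w)            # weighted grand mean
    a = np.sum(w * (m - mw) ** 2) / (k - 1)
    lam = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    F = a / (1 + 2 * (k - 2) * lam / (k ** 2 - 1))
    df2 = (k ** 2 - 1) / (3 * lam)            # Welch's approximate df
    return f_dist.sf(F, k - 1, df2)

def simulated_power(n_per_group, means, sds, alpha=0.05, reps=500, seed=1):
    """Estimate Welch-test power for given group sizes by Monte Carlo."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        groups = [rng.normal(mu, sd, n)
                  for mu, sd, n in zip(means, sds, n_per_group)]
        if welch_anova_p(groups) < alpha:
            hits += 1
    return hits / reps

# Increase n per group until the estimated power reaches the target (e.g., 0.80).
power = simulated_power([30, 30, 30], [0.0, 0.0, 0.8], [1.0, 2.0, 3.0])
```

A simple sample-size search wraps `simulated_power` in a loop over increasing group sizes; the article's contribution is a more direct requirement calculation for the Welch test.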
40 CFR 142.302 - Who can issue a small system variance?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Who can issue a small system variance? 142.302 Section 142.302 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... General Provisions § 142.302 Who can issue a small system variance? A small system variance under...
40 CFR 142.305 - When can a small system variance be granted by a State?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false When can a small system variance be... (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System General Provisions § 142.305 When can a small system variance be granted by a...
40 CFR 124.63 - Procedures for variances when EPA is the permitting authority.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 21 2010-07-01 2010-07-01 false Procedures for variances when EPA is... Permits § 124.63 Procedures for variances when EPA is the permitting authority. (a) In States where EPA is the permit issuing authority and a request for a variance is filed as required by § 122.21,...
31 CFR 15.737-16 - Proof; variance; amendment of pleadings.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Proof; variance; amendment of... POST EMPLOYMENT CONFLICT OF INTEREST Administrative Enforcement Proceedings § 15.737-16 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in a pleading and the...
40 CFR 142.44 - Public hearings on variances and schedules.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Public hearings on variances and... PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.44 Public hearings on variances and schedules. (a) Before...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 3 2010-07-01 2010-07-01 false Variances for delay in contemporaneous... REQUIREMENTS FOR PERMITS FOR SPECIAL CATEGORIES OF MINING § 785.18 Variances for delay in contemporaneous... mining activities where a variance is requested from the contemporaneous reclamation requirements...
42 CFR 456.524 - Notification of Administrator's action and duration of variance.
Code of Federal Regulations, 2010 CFR
2010-10-01
... of variance. 456.524 Section 456.524 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES... Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time Requirements § 456.524 Notification of Administrator's action and duration...