On the Relations among Regular, Equal Unique Variances, and Image Factor Analysis Models.
ERIC Educational Resources Information Center
Hayashi, Kentaro; Bentler, Peter M.
2000-01-01
Investigated the conditions under which the matrix of factor loadings from the factor analysis model with equal unique variances will give a good approximation to the matrix of factor loadings from the regular factor analysis model. Extends the results to the image factor analysis model. Discusses implications for practice. (SLD)
Development of Statistically Parallel Tests by Analysis of Unique Item Variance.
ERIC Educational Resources Information Center
Ree, Malcolm James
A method for developing statistically parallel tests based on the analysis of unique item variance was developed. A test population of 907 basic airmen trainees was required to estimate the angle at which an object in a photograph was viewed, selecting from eight possibilities. A FORTRAN program known as VARSEL was used to rank all the test items…
Culverhouse, Robert C.; Saccone, Nancy L.; Stitzel, Jerry A.; Wang, Jen C.; Steinbach, Joseph H.; Goate, Alison M.; Schwantes-An, Tae-Hwi; Grucza, Richard A.; Stevens, Victoria L.; Bierut, Laura J.
2010-01-01
Results from genome-wide association studies of complex traits account for only a modest proportion of the trait variance predicted to be due to genetics. We hypothesize that joint analysis of polymorphisms may account for more variance. We evaluated this hypothesis on a case–control smoking phenotype by examining pairs of nicotinic receptor single-nucleotide polymorphisms (SNPs) using the Restricted Partition Method (RPM) on data from the Collaborative Genetic Study of Nicotine Dependence (COGEND). We found evidence of joint effects that increase explained variance. Four signals identified in COGEND were testable in independent American Cancer Society (ACS) data, and three of the four signals replicated. Our results highlight two important lessons: joint effects that increase the explained variance are not limited to loci displaying substantial main effects, and joint effects need not display a significant interaction term in a logistic regression model. These results suggest that the joint analyses of variants may indeed account for part of the genetic variance left unexplained by single SNP analyses. Methodologies that limit analyses of joint effects to variants that demonstrate association in single SNP analyses, or require a significant interaction term, will likely miss important joint effects. PMID:21079997
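The core idea of joint analysis — that a pair of SNPs can explain variance their single-SNP marginals miss — can be illustrated with a toy calculation. This is a minimal sketch of the between-genotype-group variance comparison, not the Restricted Partition Method itself (RPM additionally merges genotype cells into partitions), and the risk model and numbers are invented for illustration:

```python
import random

random.seed(1)

# Toy data: two biallelic SNPs (0/1/2 copies, drawn uniformly rather than
# under HWE) and a binary phenotype whose risk depends mainly on the SNP
# *combination*, with only weak marginal effects at each SNP.
def risk(g1, g2):
    return 0.7 if (g1, g2) in {(0, 2), (2, 0)} else 0.3

genos = [(random.choice([0, 1, 2]), random.choice([0, 1, 2]))
         for _ in range(20000)]
pheno = [1 if random.random() < risk(g1, g2) else 0 for g1, g2 in genos]

def explained_variance(keys):
    """Between-group variance of the phenotype over the given grouping."""
    groups = {}
    for k, y in zip(keys, pheno):
        groups.setdefault(k, []).append(y)
    grand = sum(pheno) / len(pheno)
    return sum(len(ys) * (sum(ys) / len(ys) - grand) ** 2
               for ys in groups.values()) / len(pheno)

v1 = explained_variance([g1 for g1, _ in genos])   # SNP 1 alone
v2 = explained_variance([g2 for _, g2 in genos])   # SNP 2 alone
v_joint = explained_variance(genos)                # 9 joint genotype cells

print(v1, v2, v_joint)
```

With this construction the jointly explained variance exceeds the sum of the two single-SNP contributions, which is the pattern the abstract describes: a joint effect with no substantial main effects.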
Finger gnosis predicts a unique but small part of variance in initial arithmetic performance.
Wasner, Mirjam; Nuerk, Hans-Christoph; Martignon, Laura; Roesch, Stephanie; Moeller, Korbinian
2016-06-01
Recent studies indicated that finger gnosis (i.e., the ability to perceive and differentiate one's own fingers) is associated reliably with basic numerical competencies. In this study, we aimed at examining whether finger gnosis is also a unique predictor of initial arithmetic competencies at the beginning of first grade, and thus before formal math instruction starts. Therefore, we controlled for influences of domain-specific numerical precursor competencies, domain-general cognitive ability, and natural variables such as gender and age. Results from 321 German first-graders revealed that finger gnosis indeed predicted a unique and relevant, but nevertheless small, part of the variance in initial arithmetic performance (∼1%-2%) as compared with influences of general cognitive ability and numerical precursor competencies. Taken together, these results substantiated the notion of a unique association between finger gnosis and arithmetic and further corroborate the theoretical idea of finger-based representations contributing to numerical cognition. However, the small part of variance explained by finger gnosis seems to limit its relevance for diagnostic purposes. PMID:26895483
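The "unique part of variance" contributed by one predictor over a set of controls is the squared semipartial correlation. A minimal sketch for one focal predictor and one baseline composite; the correlation values below are hypothetical, not the study's estimates:

```python
def unique_variance(r_yf, r_yb, r_fb):
    """Squared semipartial correlation: the share of variance in outcome y
    uniquely explained by focal predictor f after removing its overlap
    with baseline predictor b."""
    return (r_yf - r_yb * r_fb) ** 2 / (1 - r_fb ** 2)

# Hypothetical correlations: finger gnosis r = .25 with arithmetic,
# baseline competencies r = .60, predictors correlate r = .30.
print(round(unique_variance(0.25, 0.60, 0.30), 4))
```

With these made-up inputs the unique share comes out well under 1%, the same order of magnitude as the ∼1%-2% reported in the abstract.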
Vitezica, Zulma G.; Varona, Luis; Legarra, Andres
2013-01-01
Genomic evaluation models can fit additive and dominant SNP effects. Under quantitative genetics theory, additive or “breeding” values of individuals are generated by substitution effects, which involve both “biological” additive and dominant effects of the markers. Dominance deviations include only a portion of the biological dominant effects of the markers. Additive variance includes variation due to the additive and dominant effects of the markers. We describe a matrix of dominant genomic relationships across individuals, D, which is similar to the G matrix used in genomic best linear unbiased prediction. This matrix can be used in a mixed-model context for genomic evaluations or to estimate dominant and additive variances in the population. From the “genotypic” value of individuals, an alternative parameterization defines additive and dominance as the parts attributable to the additive and dominant effect of the markers. This approach underestimates the additive genetic variance and overestimates the dominance variance. Transforming the variances from one model into the other is trivial if the distribution of allelic frequencies is known. We illustrate these results with mouse data (four traits, 1884 mice, and 10,946 markers) and simulated data (2100 individuals and 10,000 markers). Variance components were estimated correctly in the model, considering breeding values and dominance deviations. For the model considering genotypic values, the inclusion of dominant effects biased the estimate of additive variance. Genomic models were more accurate for the estimation of variance components than their pedigree-based counterparts. PMID:24121775
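The additive genomic relationship matrix G and the dominance matrix D described above can be sketched from a 0/1/2 genotype table. The additive centring is VanRaden-style; the dominance codes (-2p², 2pq, -2q² for 0, 1, 2 copies) follow the breeding-value parameterization commonly attributed to this approach — treat the exact coding as an assumption to be verified against the paper, and the genotype matrix is invented:

```python
# Genotypes coded as copies of the reference allele (0/1/2):
# rows = individuals, columns = markers (toy data).
X = [
    [0, 1, 2, 1],
    [1, 1, 0, 2],
    [2, 0, 1, 1],
    [1, 2, 1, 0],
]
n, m = len(X), len(X[0])
p = [sum(row[j] for row in X) / (2 * n) for j in range(m)]  # allele freqs

# Additive design: z = x - 2p (columns centred to zero).
Z = [[X[i][j] - 2 * p[j] for j in range(m)] for i in range(n)]

# Dominance design under the breeding-value parameterization:
# x = 0 -> -2p^2, x = 1 -> 2pq, x = 2 -> -2q^2.
def w(x, pj):
    qj = 1 - pj
    return {0: -2 * pj * pj, 1: 2 * pj * qj, 2: -2 * qj * qj}[x]

W = [[w(X[i][j], p[j]) for j in range(m)] for i in range(n)]

def cross(A, scale):
    """A @ A' divided by a scalar, for row-major lists of lists."""
    return [[sum(a * b for a, b in zip(ri, rj)) / scale for rj in A]
            for ri in A]

G = cross(Z, sum(2 * p[j] * (1 - p[j]) for j in range(m)))
D = cross(W, sum((2 * p[j] * (1 - p[j])) ** 2 for j in range(m)))
```

G and D can then enter a mixed model as the covariance structures of additive and dominance random effects, exactly as the abstract suggests.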
Estimation of Additive, Dominance, and Imprinting Genetic Variance Using Genomic Data
Lopes, Marcos S.; Bastiaansen, John W. M.; Janss, Luc; Knol, Egbert F.; Bovenhuis, Henk
2015-01-01
Traditionally, exploration of genetic variance in humans, plants, and livestock species has been limited mostly to the use of additive effects estimated using pedigree data. However, with the development of dense panels of single-nucleotide polymorphisms (SNPs), the exploration of genetic variation of complex traits is moving from quantifying the resemblance between family members to the dissection of genetic variation at individual loci. With SNPs, we were able to quantify the contribution of additive, dominance, and imprinting variance to the total genetic variance by using a SNP regression method. The method was validated in simulated data and applied to three traits (number of teats, backfat, and lifetime daily gain) in three purebred pig populations. In simulated data, the estimates of additive, dominance, and imprinting variance were very close to the simulated values. In real data, dominance effects account for a substantial proportion of the total genetic variance (up to 44%) for these traits in these populations. The contribution of imprinting to the total phenotypic variance of the evaluated traits was relatively small (1–3%). Our results indicate a strong relationship between additive variance explained per chromosome and chromosome length, which has been described previously for other traits in other species. We also show that a similar linear relationship exists for dominance and imprinting variance. These novel results improve our understanding of the genetic architecture of the evaluated traits and shows promise to apply the SNP regression method to other traits and species, including human diseases. PMID:26438289
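When the parental origin of alleles is known, additive, dominance, and imprinting effects can be separated by giving each locus three regression covariates. The coding below is an illustrative assumption (one common choice), not necessarily the exact design used in the paper's SNP regression method:

```python
def covariates(paternal, maternal):
    """Per-locus covariates for one individual, given the number of copies
    (0 or 1) of the reference allele inherited from each parent:
    - additive:   total allele count
    - dominance:  heterozygosity indicator
    - imprinting: paternal minus maternal copy, which separates the two
      heterozygote classes Aa and aA."""
    add = paternal + maternal
    dom = 1 if paternal != maternal else 0
    imp = paternal - maternal
    return add, dom, imp

# The two heterozygotes share additive and dominance codes but differ
# in the imprinting covariate:
print(covariates(1, 0))  # -> (1, 1, 1)
print(covariates(0, 1))  # -> (1, 1, -1)
```

Variance contributed by the third covariate is then attributable to imprinting, since additive and dominance terms cannot distinguish the two heterozygote classes.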
Fuerst, C; Sölkner, J
1994-04-01
Additive and nonadditive genetic variances were estimated for yield traits and fertility over three subsequent lactations and for lifetime performance traits of purebred and crossbred dairy cattle populations. Traits were milk yield, energy-corrected milk yield, fat percentage, protein percentage, calving interval, length of productive life, and lifetime FCM of purebred Simmental, Simmental including crossbreds, and Braunvieh crossed with Brown Swiss. Data files ranged from 66,740 to 375,093 records. An approach based on pedigree information for sire and maternal grandsire was used and included additive, dominance, and additive by additive genetic effects. Variances were estimated using the tildehat approximation to REML. Heritability estimated without nonadditive effects in the model was overestimated, particularly in the presence of additive by additive variance. Dominance variance was important for most traits; for the lifetime performance traits, dominance was clearly higher than additive variance. Additive by additive variance was very high for milk yield and energy-corrected milk yield, especially for data including crossbreds. The effect of inbreeding was low in most cases. Inclusion of nonadditive effects in genetic evaluation models might improve estimation of additive effects and may require consideration in dairy cattle breeding programs.
McGuigan, Katrina; Blows, Mark W
2010-07-01
Genetic covariation among multiple traits will bias the direction of evolution. Although a trait's phenotypic context is crucial for understanding evolutionary constraints, the evolutionary potential of one (focal) trait, rather than the whole phenotype, is often of interest. The extent to which a focal trait can evolve independently depends on how much of the genetic variance in that trait is unique. Here, we present a hypothesis-testing framework for estimating the genetic variance in a focal trait that is independent of variance in other traits. We illustrate our analytical approach using two Drosophila bunnanda trait sets: a contact pheromone system comprised of cuticular hydrocarbons (CHCs), and wing shape, characterized by relative warps of vein position coordinates. Only 9% of the additive genetic variation in CHCs was trait specific, suggesting individual traits are unlikely to evolve independently. In contrast, most (72%) of the additive genetic variance in wing shape was trait specific, suggesting relative warp representations of wing shape could evolve independently. The identification of genetic variance in focal traits that is independent of other traits provides a way of studying the evolvability of individual traits within the broader context of the multivariate phenotype.
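One standard way to quantify the genetic variance in a focal trait that is independent of other traits is the conditional variance from the additive genetic covariance matrix G: the Schur complement g_ff - g_fo G_oo⁻¹ g_of. The paper's hypothesis-testing framework is more elaborate, so this is only a sketch of the underlying quantity, with a hypothetical 3-trait G matrix:

```python
# Hypothetical 3x3 additive genetic covariance matrix; trait 0 is focal.
G = [
    [1.0, 0.6, 0.5],
    [0.6, 1.0, 0.4],
    [0.5, 0.4, 1.0],
]

g_ff = G[0][0]
g_fo = [G[0][1], G[0][2]]

# 2x2 block for the non-focal traits, inverted in closed form.
a, b, c, d = G[1][1], G[1][2], G[2][1], G[2][2]
det = a * d - b * c
inv = [[d / det, -b / det], [-c / det, a / det]]

# Conditional (trait-specific) variance: g_ff - g_fo * inv(G_oo) * g_of.
tmp = [inv[0][0] * g_fo[0] + inv[0][1] * g_fo[1],
       inv[1][0] * g_fo[0] + inv[1][1] * g_fo[1]]
v_unique = g_ff - (g_fo[0] * tmp[0] + g_fo[1] * tmp[1])
prop_unique = v_unique / g_ff
print(round(prop_unique, 3))
```

A small `prop_unique` corresponds to the CHC-like situation (little trait-specific variance); a value near 1 corresponds to the wing-shape case.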
Epistasis Is a Major Determinant of the Additive Genetic Variance in Mimulus guttatus
Monnahan, Patrick J.; Kelly, John K.
2015-01-01
The influence of genetic interactions (epistasis) on the genetic variance of quantitative traits is a major unresolved problem relevant to medical, agricultural, and evolutionary genetics. The additive genetic component is typically a high proportion of the total genetic variance in quantitative traits, even though the underlying genes must interact to determine the phenotype. This study estimates direct and interaction effects for 11 pairs of Quantitative Trait Loci (QTLs) affecting floral traits within a single population of Mimulus guttatus. With estimates for all nine genotypes of each QTL pair, we are able to map from QTL effects to variance components as a function of population allele frequencies, and thus predict changes in variance components as allele frequencies change. This mapping requires an analytical framework that properly accounts for bias introduced by estimation errors. We find that even with abundant interactions between QTLs, most of the genetic variance is likely to be additive. However, the strong dependency of allelic average effects on genetic background implies that epistasis is a major determinant of the additive genetic variance, and thus of the population’s ability to respond to selection. PMID:25946702
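The mapping from locus effects to variance components that the abstract invokes is, at a single biallelic locus, the textbook decomposition (Falconer & Mackay notation; shown here as background, not as the paper's full two-locus framework):

```latex
% Single biallelic locus with allele frequencies p and q = 1 - p and
% genotypic values -a, d, +a. The average (substitution) effect is
\alpha = a + d\,(q - p),
% and the variance components are
V_A = 2pq\,\alpha^2 = 2pq\,[\,a + d(q - p)\,]^2, \qquad
V_D = (2pq\,d)^2 .
% Under epistasis, a and d at one locus depend on genotype frequencies
% at the interacting loci, so V_A itself shifts as allele frequencies
% change -- which is how interactions determine the additive variance.
```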
ERIC Educational Resources Information Center
Miller, Geoffrey F.; Penke, Lars
2007-01-01
Most theories of human mental evolution assume that selection favored higher intelligence and larger brains, which should have reduced genetic variance in both. However, adult human intelligence remains highly heritable, and is genetically correlated with brain size. This conflict might be resolved by estimating the coefficient of additive genetic…
NASA Astrophysics Data System (ADS)
Soltani-Mohammadi, Saeed; Safa, Mohammad; Mokhtari, Hadi
2016-10-01
One of the most important stages in complementary exploration is optimally designing the additional drilling pattern, that is, defining the optimum number and locations of additional boreholes. A great deal of research has been carried out in this regard; in most of the proposed algorithms, kriging variance minimization is defined as the uncertainty-assessment criterion in the objective function, and the problem is solved through optimization methods. Although using the kriging variance in the objective function definition has many advantages, it is not sensitive to local variability. As a result, the only factors evaluated for locating the additional boreholes are the initial data configuration and the variogram model parameters, and the effects of local variability are omitted. In this paper, with the goal of accounting for local variability in the assessment of boundary uncertainty, the application of combined variance to define the objective function is investigated. To verify the applicability of the proposed objective function, it is used to locate additional boreholes in the Esfordi phosphate mine through metaheuristic optimization methods such as simulated annealing and particle swarm optimization. Comparison of results from the proposed objective function and conventional methods indicates that the changes imposed on the objective function make the algorithm output sensitive to variations in grade, domain boundaries, and the thickness of the mineralization domain. The comparison between the results of the different optimization algorithms showed that, for the presented case, particle swarm optimization is more appropriate than simulated annealing.
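The general shape of the metaheuristic search described above can be sketched with a simulated annealing skeleton. The objective below is a deliberately crude stand-in (largest gap between sample locations on a line, a rough proxy for estimation variance far from data), not the paper's combined variance, and all numbers are invented:

```python
import math
import random

random.seed(42)

# Existing "borehole" locations on a normalized [0, 1] transect.
existing = [0.05, 0.5, 0.95]

def objective(new):
    """Toy uncertainty proxy: the largest gap left between samples."""
    pts = sorted(existing + list(new))
    return max(b - a for a, b in zip(pts, pts[1:]))

def anneal(k=2, steps=2000, t0=0.1):
    """Place k additional boreholes by simulated annealing."""
    state = [random.random() for _ in range(k)]
    cost = objective(state)
    best, best_cost = list(state), cost
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9                  # linear cooling
        cand = [min(1.0, max(0.0, x + random.gauss(0, 0.05)))
                for x in state]
        c = objective(cand)
        # Accept improvements always, worse moves with Boltzmann probability.
        if c < cost or random.random() < math.exp((cost - c) / t):
            state, cost = cand, c
            if c < best_cost:
                best, best_cost = list(cand), c
    return best, best_cost

best, best_cost = anneal()
print(best, best_cost)
```

Swapping the annealing loop for a particle swarm, or the toy objective for a kriging or combined variance, changes only the two plugged-in pieces; the locate-evaluate-accept structure stays the same.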
Forsberg, Simon K. G.; Andreatta, Matthew E.; Huang, Xin-Yuan; Danku, John; Salt, David E.; Carlborg, Örjan
2015-01-01
Genome-wide association (GWA) analyses have generally been used to detect individual loci contributing to the phenotypic diversity in a population by the effects of these loci on the trait mean. More rarely, loci have also been detected based on variance differences between genotypes. Several hypotheses have been proposed to explain the possible genetic mechanisms leading to such variance signals. However, little is known about what causes these signals, or whether this genetic variance-heterogeneity reflects mechanisms of importance in natural populations. Previously, we identified a variance-heterogeneity GWA (vGWA) signal for leaf molybdenum concentrations in Arabidopsis thaliana. Here, fine-mapping of this association reveals that the vGWA emerges from the effects of three independent genetic polymorphisms that all are in strong LD with the markers displaying the genetic variance-heterogeneity. By revealing the genetic architecture underlying this vGWA signal, we uncovered the molecular source of a significant amount of hidden additive genetic variation or “missing heritability”. Two of the three polymorphisms underlying the genetic variance-heterogeneity are promoter variants for Molybdate transporter 1 (MOT1), and the third a variant located ~25 kb downstream of this gene. A fourth independent association was also detected ~600 kb upstream of MOT1. Use of a T-DNA knockout allele highlights Copper Transporter 6; COPT6 (AT2G26975) as a strong candidate gene for this association. Our results show that an extended LD across a complex locus including multiple functional alleles can lead to a variance-heterogeneity between genotypes in natural populations. Further, they provide novel insights into the genetic regulation of ion homeostasis in A. thaliana, and empirically confirm that variance-heterogeneity based GWA methods are a valuable tool to detect novel associations of biological importance in natural populations. PMID:26599497
Huchard, E; Charmantier, A; English, S; Bateman, A; Nielsen, J F; Clutton-Brock, T
2014-09-01
Individual variation in growth is high in cooperative breeders and may reflect plastic divergence in developmental trajectories leading to breeding vs. helping phenotypes. However, the relative importance of additive genetic variance and developmental plasticity in shaping growth trajectories is largely unknown in cooperative vertebrates. This study exploits weekly sequences of body mass from birth to adulthood to investigate sources of variance in, and covariance between, early and later growth in wild meerkats (Suricata suricatta), a cooperative mongoose. Our results indicate that (i) the correlation between early growth (prior to nutritional independence) and adult mass is positive but weak, and there are frequent changes (compensatory growth) in post-independence growth trajectories; (ii) among parameters describing growth trajectories, those describing growth rate (prior to and at nutritional independence) show undetectable heritability while associated size parameters (mass at nutritional independence and asymptotic mass) are moderately heritable (0.09 ≤ h(2) < 0.3); and (iii) additive genetic effects, rather than early environmental effects, mediate the covariance between early growth and adult mass. These results reveal that meerkat growth trajectories remain plastic throughout development, rather than showing early and irreversible divergence, and that the weak effects of early growth on adult mass, an important determinant of breeding success, are partly genetic. In contrast to most cooperative invertebrates, the acquisition of breeding status is often determined after sexual maturity and strongly impacted by chance in many cooperative vertebrates, who may therefore retain the ability to adjust their morphology to environmental changes and social opportunities arising throughout their development, rather than specializing early. PMID:24962704
Nietlisbach, Pirmin; Hadfield, Jarrod D
2015-07-01
Whenever allele frequencies are unequal, nonadditive gene action contributes to additive genetic variance and therefore the resemblance between parents and offspring. The reason for this has not been easy to understand. Here, we present a new single-locus decomposition of additive genetic variance that may give greater intuition about this important result. We show that the contribution of dominant gene action to parent-offspring resemblance only depends on the degree to which the heterozygosity of parents and offspring covary. Thus, dominant gene action only contributes to additive genetic variance when heterozygosity is heritable. Under most circumstances this is the case because individuals with rare alleles are more likely to be heterozygous, and because they pass rare alleles to their offspring they also tend to have heterozygous offspring. When segregating alleles are at equal frequency there are no rare alleles, the heterozygosities of parents and offspring are uncorrelated and dominant gene action does not contribute to additive genetic variance. PMID:26100570
Travers, L M; Simmons, L W; Garcia-Gonzalez, F
2016-05-01
Polyandry is widespread despite its costs. The sexually selected sperm hypotheses ('sexy' and 'good' sperm) posit that sperm competition plays a role in the evolution of polyandry. Two poorly studied assumptions of these hypotheses are the presence of additive genetic variance in polyandry and sperm competitiveness. Using a quantitative genetic breeding design in a natural population of Drosophila melanogaster, we first established the potential for polyandry to respond to selection. We then investigated whether polyandry can evolve through sexually selected sperm processes. We measured lifetime polyandry and offensive sperm competitiveness (P2 ) while controlling for sampling variance due to male × male × female interactions. We also measured additive genetic variance in egg-to-adult viability and controlled for its effect on P2 estimates. Female lifetime polyandry showed significant and substantial additive genetic variance and evolvability. In contrast, we found little genetic variance or evolvability in P2 or egg-to-adult viability. Additive genetic variance in polyandry highlights its potential to respond to selection. However, the low levels of genetic variance in sperm competitiveness suggest that the evolution of polyandry may not be driven by sexy sperm or good sperm processes. PMID:26801640
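Additive genetic variance in a trait like lifetime polyandry is usually reported in standardized form, either as evolvability (mean-standardized V_A, following Houle 1992) or as the additive genetic coefficient of variation. A small helper with hypothetical numbers, not the study's estimates:

```python
import math

def evolvability(v_a, trait_mean):
    """Mean-standardized additive genetic variance, I_A = V_A / mean^2."""
    return v_a / trait_mean ** 2

def cv_a(v_a, trait_mean):
    """Additive genetic coefficient of variation, in percent."""
    return 100 * math.sqrt(v_a) / trait_mean

# Hypothetical values: V_A = 0.8 for a trait with mean 4.0.
print(evolvability(0.8, 4.0), round(cv_a(0.8, 4.0), 2))
```

Mean-standardization is what makes evolvabilities comparable between traits such as polyandry and P2, which are on very different scales.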
McFarlane, S Eryn; Gorrell, Jamieson C; Coltman, David W; Humphries, Murray M; Boutin, Stan; McAdam, Andrew G
2014-01-01
A trait must genetically correlate with fitness in order to evolve in response to natural selection, but theory suggests that strong directional selection should erode additive genetic variance in fitness and limit future evolutionary potential. Balancing selection has been proposed as a mechanism that could maintain genetic variance if fitness components trade off with one another and has been invoked to account for empirical observations of higher levels of additive genetic variance in fitness components than would be expected from mutation–selection balance. Here, we used a long-term study of an individually marked population of North American red squirrels (Tamiasciurus hudsonicus) to look for evidence of (1) additive genetic variance in lifetime reproductive success and (2) fitness trade-offs between fitness components, such as male and female fitness or fitness in high- and low-resource environments. “Animal model” analyses of a multigenerational pedigree revealed modest maternal effects on fitness, but very low levels of additive genetic variance in lifetime reproductive success overall as well as fitness measures within each sex and environment. It therefore appears that there are very low levels of direct genetic variance in fitness and fitness components in red squirrels to facilitate contemporary adaptation in this population. PMID:24963372
Lachowiec, Jennifer; Shen, Xia; Queitsch, Christine; Carlborg, Örjan
2015-01-01
Efforts to identify loci underlying complex traits generally assume that most genetic variance is additive. Here, we examined the genetics of Arabidopsis thaliana root length and found that the genomic narrow-sense heritability for this trait in the examined population was statistically zero. The low amount of additive genetic variance that could be captured by the genome-wide genotypes likely explains why no associations to root length could be found using standard additive-model-based genome-wide association (GWA) approaches. However, as the broad-sense heritability for root length was significantly larger, and primarily due to epistasis, we also performed an epistatic GWA analysis to map loci contributing to the epistatic genetic variance. Four interacting pairs of loci were revealed, involving seven chromosomal loci that passed a standard multiple-testing corrected significance threshold. The genotype-phenotype maps for these pairs revealed epistasis that cancelled out the additive genetic variance, explaining why these loci were not detected in the additive GWA analysis. Small population sizes, such as in our experiment, increase the risk of identifying false epistatic interactions due to testing for associations with very large numbers of multi-marker genotypes in few phenotyped individuals. Therefore, we estimated the false-positive risk using a new statistical approach that suggested half of the associated pairs to be true positive associations. Our experimental evaluation of candidate genes within the seven associated loci suggests that this estimate is conservative; we identified functional candidate genes that affected root development in four loci that were part of three of the pairs. The statistical epistatic analyses were thus indispensable for confirming known, and identifying new, candidate genes for root length in this population of wild-collected A. thaliana accessions. We also illustrate how epistatic cancellation of the additive genetic variance
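Epistatic cancellation of additive variance can be shown with a minimal toy model (not the paper's loci): a phenotype that depends on the XOR of heterozygosity at two loci has genetic variance, yet at allele frequency 0.5 the genotype means at each single locus are flat, so an additive single-locus scan sees nothing:

```python
from itertools import product

# HWE genotype probabilities at allele frequency 0.5 for counts 0, 1, 2.
probs = {0: 0.25, 1: 0.5, 2: 0.25}

# Phenotype: 1 if exactly one of the two loci is heterozygous
# (a purely epistatic genotype-phenotype map).
def pheno(x1, x2):
    return 1.0 if (x1 == 1) != (x2 == 1) else 0.0

# Enumerate the 9 two-locus genotypes with their probabilities.
cells = [((x1, x2), probs[x1] * probs[x2])
         for x1, x2 in product(probs, probs)]
mean = sum(p * pheno(*g) for g, p in cells)
v_total = sum(p * (pheno(*g) - mean) ** 2 for g, p in cells)

# Marginal genotype means at locus 1: all equal, so neither additive nor
# dominance variance is visible at the single-locus level.
marg = {x1: sum(probs[x2] * pheno(x1, x2) for x2 in probs) for x1 in probs}
print(mean, v_total, marg)
```

All of the genetic variance here is epistatic, which is the situation the abstract describes: broad-sense heritability is substantial while genomic narrow-sense heritability is statistically zero.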
Su, Guosheng; Christensen, Ole F.; Ostersen, Tage; Henryon, Mark; Lund, Mogens S.
2012-01-01
Non-additive genetic variation is usually ignored when genome-wide markers are used to study the genetic architecture and genomic prediction of complex traits in humans, wildlife, model organisms, or farm animals. However, non-additive genetic effects may contribute substantially to the total genetic variation of complex traits. This study presented a genomic BLUP model including additive and non-additive genetic effects, in which additive and non-additive genetic relationship matrices were constructed from genome-wide dense single nucleotide polymorphism (SNP) markers. In addition, this study proposed, for the first time, a method to construct a dominance relationship matrix using SNP markers and demonstrated it in detail. The proposed model was implemented to investigate the amounts of additive genetic, dominance, and epistatic variation, and to assess the accuracy and unbiasedness of genomic predictions for daily gain in pigs. In the analysis of daily gain, four linear models were used: 1) a simple additive genetic model (MA), 2) a model including both additive and additive-by-additive epistatic genetic effects (MAE), 3) a model including both additive and dominance genetic effects (MAD), and 4) a full model including all three genetic components (MAED). Estimates of narrow-sense heritability were 0.397, 0.373, 0.379, and 0.357 for models MA, MAE, MAD, and MAED, respectively. Estimated dominance variance and additive-by-additive epistatic variance accounted for 5.6% and 9.5% of the total phenotypic variance, respectively. Based on model MAED, the estimate of broad-sense heritability was 0.506. Reliabilities of genomic predicted breeding values for the animals without performance records were 28.5%, 28.8%, 29.2%, and 29.5% for models MA, MAE, MAD, and MAED, respectively. In addition, models including non-additive genetic effects improved the unbiasedness of genomic predictions. PMID:23028912
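A sketch of the quantities involved may help: the additive genomic relationship matrix below follows the widely used VanRaden-style construction assumed from the broader GBLUP literature (not the authors' code), and the variance-component values are illustrative stand-ins patterned on the fractions reported in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 50                       # individuals, SNP markers
M = rng.integers(0, 3, size=(n, m)).astype(float)  # genotypes coded 0/1/2
p = M.mean(axis=0) / 2              # estimated allele frequencies

# Additive genomic relationship matrix (VanRaden-style; assumed, not
# taken verbatim from this paper).
Z = M - 2 * p
G = Z @ Z.T / (2 * np.sum(p * (1 - p)))

# Heritabilities from hypothetical variance-component estimates,
# mirroring the abstract's MAED model: additive + dominance + epistatic.
var_a, var_d, var_aa, var_e = 0.357, 0.056, 0.095, 0.492
total = var_a + var_d + var_aa + var_e
h2_narrow = var_a / total                      # ~0.357
H2_broad = (var_a + var_d + var_aa) / total    # ~0.508
print(round(h2_narrow, 3), round(H2_broad, 3))
```

With the stand-in components above, broad-sense heritability is recovered as the sum of all three genetic variances over the phenotypic variance, close to the 0.506 the abstract reports.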
Kumar, Satish; Molloy, Claire; Muñoz, Patricio; Daetwyler, Hans; Chagné, David; Volz, Richard
2015-12-01
Nonadditive genetic effects may contribute substantially to the total genetic variation of phenotypes, so estimates of both the additive and nonadditive effects are desirable for breeding and selection purposes. Our main objectives were to: estimate additive, dominance and epistatic variances of apple (Malus × domestica Borkh.) phenotypes using relationship matrices constructed from genome-wide dense single nucleotide polymorphism (SNP) markers; and compare the accuracy of genomic predictions using genomic best linear unbiased prediction models with or without including nonadditive genetic effects. A set of 247 clonally replicated individuals was assessed for six fruit quality traits at two sites, and also genotyped using an Illumina 8K SNP array. Across several fruit quality traits, the additive, dominance, and epistatic effects contributed about 30%, 16%, and 19%, respectively, to the total phenotypic variance. Models ignoring nonadditive components yielded upwardly biased estimates of additive variance (heritability) for all traits in this study. The accuracy of genomic predicted genetic values (GEGV) varied from about 0.15 to 0.35 for various traits, and these were almost identical for models with or without including nonadditive effects. However, models including nonadditive genetic effects further reduced the bias of GEGV. Between-site genotypic correlations were high (>0.85) for all traits, and genotype-site interaction accounted for <10% of the phenotypic variability. The accuracy of prediction, when the validation set was present only at one site, was generally similar for both sites, and varied from about 0.50 to 0.85. The prediction accuracies were strongly influenced by trait heritability, and genetic relatedness between the training and validation families.
Da, Yang; Wang, Chunkao; Wang, Shengwen; Hu, Guo
2014-01-01
We established a genomic model of quantitative trait with genomic additive and dominance relationships that parallels the traditional quantitative genetics model, which partitions a genotypic value as breeding value plus dominance deviation and calculates additive and dominance relationships using pedigree information. Based on this genomic model, two sets of computationally complementary but mathematically identical mixed model methods were developed for genomic best linear unbiased prediction (GBLUP) and genomic restricted maximum likelihood estimation (GREML) of additive and dominance effects using SNP markers. These two sets are referred to as the CE and QM sets, where the CE set was designed for large numbers of markers and the QM set was designed for large numbers of individuals. GBLUP and associated accuracy formulations for individuals in training and validation data sets were derived for breeding values, dominance deviations and genotypic values. Simulation study showed that GREML and GBLUP generally were able to capture small additive and dominance effects that each accounted for 0.00005-0.0003 of the phenotypic variance and GREML was able to differentiate true additive and dominance heritability levels. GBLUP of the total genetic value as the summation of additive and dominance effects had higher prediction accuracy than either additive or dominance GBLUP, causal variants had the highest accuracy of GREML and GBLUP, and predicted accuracies were in agreement with observed accuracies. Genomic additive and dominance relationship matrices using SNP markers were consistent with theoretical expectations. The GREML and GBLUP methods can be an effective tool for assessing the type and magnitude of genetic effects affecting a phenotype and for predicting the total genetic value at the whole genome level.
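A generic single-trait GBLUP solve via Henderson's mixed-model equations conveys the core computation; this is a hypothetical toy with an identity stand-in for the genomic relationship matrix, not the paper's CE or QM formulations.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
y = rng.normal(size=n)                 # phenotypes
A = np.eye(n)                          # stand-in genomic relationship matrix
var_a, var_e = 0.4, 0.6                # assumed variance components
lam = var_e / var_a

# Henderson's mixed-model equations for y = 1*mu + u + e, u ~ N(0, A*var_a):
# [[X'X, X'], [X, I + lam*A^-1]] [mu; u] = [X'y; y]
X = np.ones((n, 1))
lhs = np.block([[X.T @ X, X.T], [X, np.eye(n) + lam * np.linalg.inv(A)]])
rhs = np.concatenate([X.T @ y, y])
sol = np.linalg.solve(lhs, rhs)
mu, u_hat = sol[0], sol[1:]            # fixed mean and predicted breeding values
print(u_hat.shape)
```

With A = I the solution reduces to simple shrinkage of the centered phenotypes by 1/(1 + lam), which is a useful sanity check; a dominance term would add a second random effect with its own relationship matrix, as in the paper's model.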
Careau, Vincent; Wolak, Matthew E; Carter, Patrick A; Garland, Theodore
2015-11-22
Given the pace at which human-induced environmental changes occur, a pressing challenge is to determine the speed with which selection can drive evolutionary change. A key determinant of adaptive response to multivariate phenotypic selection is the additive genetic variance-covariance matrix (G). Yet knowledge of G in a population experiencing new or altered selection is not sufficient to predict selection response because G itself evolves in ways that are poorly understood. We experimentally evaluated changes in G when closely related behavioural traits experience continuous directional selection. We applied the genetic covariance tensor approach to a large dataset (n = 17,328 individuals) from a replicated, 31-generation artificial selection experiment that bred mice for voluntary wheel running on days 5 and 6 of a 6-day test. Selection on this subset of G induced proportional changes across the matrix for all 6 days of running behaviour within the first four generations. The changes in G induced by selection resulted in a fourfold slower-than-predicted rate of response to selection. Thus, selection exacerbated constraints within G and limited future adaptive response, a phenomenon that could have profound consequences for populations facing rapid environmental change.
MCNP variance reduction overview
Hendricks, J.S.; Booth, T.E.
1985-01-01
The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code.
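To illustrate what variance reduction buys in general terms, here is a toy importance-sampling example; this is a textbook technique, not a reproduction of any specific MCNP feature.

```python
import numpy as np

# Estimate the rare-event probability P(X > 3) for X ~ N(0,1) two ways.
rng = np.random.default_rng(42)
n = 100_000

# Analog (naive) Monte Carlo: almost all samples contribute zero.
x = rng.normal(size=n)
naive = (x > 3.0).astype(float)

# Importance sampling from N(3,1), reweighting by the density ratio
# f(z)/g(z) so the estimator stays unbiased.
z = rng.normal(loc=3.0, size=n)
weights = np.exp(-0.5 * z**2) / np.exp(-0.5 * (z - 3.0) ** 2)
shifted = (z > 3.0) * weights

print(naive.mean(), naive.var())
print(shifted.mean(), shifted.var())  # same expectation, far smaller variance
```

Both estimators target the same probability (about 1.35e-3), but the shifted sampler concentrates its samples where the score is nonzero, cutting the per-sample variance by orders of magnitude; the same principle underlies source biasing and similar schemes in production Monte Carlo codes.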
Kirkpatrick, Robert M; McGue, Matt; Iacono, William G
2015-03-01
The present study of general cognitive ability attempts to replicate and extend previous investigations of a biometric moderator, family-of-origin socioeconomic status (SES), in a sample of 2,494 pairs of adolescent twins, non-twin biological siblings, and adoptive siblings assessed with individually administered IQ tests. We hypothesized that SES would covary positively with additive-genetic variance and negatively with shared-environmental variance. Important potential confounds unaddressed in some past studies, such as twin-specific effects, assortative mating, and differential heritability by trait level, were found to be negligible. In our main analysis, we compared models by their sample-size corrected AIC, and based our statistical inference on model-averaged point estimates and standard errors. Additive-genetic variance increased with SES, an effect that was statistically significant and robust to model specification. We found no evidence that SES moderated shared-environmental influence. We attempt to explain the inconsistent replication record of these effects, and provide suggestions for future research. PMID:25539975
Reid, Jane M; Arcese, Peter; Keller, Lukas F; Losdat, Sylvain
2014-01-01
Ongoing evolution of polyandry, and consequent extra-pair reproduction in socially monogamous systems, is hypothesized to be facilitated by indirect selection stemming from cross-sex genetic covariances with components of male fitness. Specifically, polyandry is hypothesized to create positive genetic covariance with male paternity success due to inevitable assortative reproduction, driving ongoing coevolution. However, it remains unclear whether such covariances could or do emerge within complex polyandrous systems. First, we illustrate that genetic covariances between female extra-pair reproduction and male within-pair paternity success might be constrained in socially monogamous systems where female and male additive genetic effects can have opposing impacts on the paternity of jointly reared offspring. Second, we demonstrate nonzero additive genetic variance in female liability for extra-pair reproduction and male liability for within-pair paternity success, modeled as direct and associative genetic effects on offspring paternity, respectively, in free-living song sparrows (Melospiza melodia). The posterior mean additive genetic covariance between these liabilities was slightly positive, but the credible interval was wide and overlapped zero. Therefore, although substantial total additive genetic variance exists, the hypothesis that ongoing evolution of female extra-pair reproduction is facilitated by genetic covariance with male within-pair paternity success cannot yet be definitively supported or rejected either conceptually or empirically. PMID:24724612
NASA Astrophysics Data System (ADS)
Good, Ron; Fletcher, Harold J.
The importance of reporting explained variance (sometimes referred to as magnitude of effects) in ANOVA designs is discussed in this paper. Explained variance is an estimate of the strength of the relationship between treatment (or other factors such as sex, grade level, etc.) and dependent variables of interest to the researcher(s). Three methods that can be used to obtain estimates of explained variance in ANOVA designs are described and applied to 16 studies that were reported in recent volumes of this journal. The results show that, while in most studies the treatment accounts for a relatively small proportion of the variance in dependent variable scores, in some studies the magnitude of the treatment effect is respectable. The authors recommend that researchers in science education report explained variance in addition to the commonly reported tests of significance, since the latter are inadequate as the sole basis for making decisions about the practical importance of factors of interest to science education researchers.
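Two commonly used explained-variance estimates for a one-way ANOVA are eta-squared (SS_between / SS_total) and the less biased omega-squared; a small sketch with invented group data (the paper's three specific methods are not reproduced here):

```python
import numpy as np

# Three hypothetical treatment groups.
groups = [np.array([3.0, 4.0, 5.0]),
          np.array([5.0, 6.0, 7.0]),
          np.array([7.0, 8.0, 9.0])]
allx = np.concatenate(groups)
grand = allx.mean()

ss_total = ((allx - grand) ** 2).sum()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_within = ss_total - ss_between
df_between = len(groups) - 1
df_within = len(allx) - len(groups)
ms_within = ss_within / df_within

eta_sq = ss_between / ss_total                                       # 0.8
omega_sq = (ss_between - df_between * ms_within) / (ss_total + ms_within)
print(round(eta_sq, 3), round(omega_sq, 3))
```

Reporting either statistic alongside the F-test conveys the practical size of the treatment effect, which is exactly the practice the paper recommends; omega-squared corrects eta-squared's upward bias in small samples.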
Berge, Jerica M.; Wall, Melanie; Larson, Nicole; Eisenberg, Marla E.; Loth, Katie A.; Neumark-Sztainer, Dianne
2012-01-01
Objective To examine the unique and additive associations of family functioning and parenting practices with adolescent disordered eating behaviors (i.e., dieting, unhealthy weight control behaviors, binge eating). Methods Data from EAT (Eating and Activity in Teens) 2010, a population-based study assessing eating and activity among racially/ethnically and socio-economically diverse adolescents (n = 2,793; mean age = 14.4, SD = 2.0; age range = 11–19) was used. Logistic regression models were used to examine associations between adolescent dieting and disordered eating behaviors and family functioning and parenting variables, including interactions. All analyses controlled for demographics and body mass index. Results Higher family functioning, parent connection, and parental knowledge about child’s whereabouts (e.g. who child is with, what they are doing, where they are at) were significantly associated with lower odds of engaging in dieting and disordered eating behaviors in adolescents, while parent psychological control was associated with greater odds of engaging in dieting and disordered eating behaviors. Although the majority of interactions were non-significant, parental psychological control moderated the protective relationship between family functioning and disordered eating behaviors in adolescent girls. Conclusions Clinicians and health care providers may want to discuss the importance of balancing specific parenting behaviors, such as increasing parent knowledge about child whereabouts while decreasing psychological control in order to enhance the protective relationship between family functioning and disordered eating behaviors in adolescents. PMID:23196919
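The analytic setup described above can be sketched as a logistic regression with an interaction term. Everything below is simulated for illustration (variable names and coefficients are hypothetical, not EAT 2010 data or results).

```python
import numpy as np

# Simulate a binary disordered-eating indicator from family functioning,
# psychological control, and their interaction.
rng = np.random.default_rng(7)
n = 2000
family_fn = rng.normal(size=n)
psych_ctrl = rng.normal(size=n)
logit = -0.5 - 0.6 * family_fn + 0.4 * psych_ctrl + 0.3 * family_fn * psych_ctrl
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# Fit by Newton-Raphson (iteratively reweighted least squares).
X = np.column_stack([np.ones(n), family_fn, psych_ctrl,
                     family_fn * psych_ctrl])
beta = np.zeros(4)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
print(np.round(beta, 2))  # should land near (-0.5, -0.6, 0.4, 0.3)
```

A significant interaction coefficient is what "parental psychological control moderated the protective relationship" means operationally: the slope on family functioning depends on the level of psychological control.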
Reiser, Oliver
2016-09-20
) the tendency for ligand exchange in Cu(I)L2 assemblies allows the efficient synthesis of heteroleptic Cu(I)LL' complexes to tune the steric and electronic properties and also might coordinate and thus activate substrates in the course of a reaction in addition to electron transfer. Moreover, new photoredox cycles have also been discovered beyond the visible-light-induced Cu(I)* → Cu(II) electron transfer that is arguably best known: examples of the Cu(II)* → Cu(I) and Cu(I)* → Cu(0) transitions have been realized, greatly broadening the potential for copper-based photoredox-catalyzed transformations. Finally, a number of organic transformations that are unique to Cu(I) photoredox catalysts have been discovered.
Recio, Sergio A; Iliescu, Adela F; Bergés, Germán D; Gil, Marta; de Brugada, Isabel
2016-04-01
It has been suggested that human perceptual learning could be explained in terms of a better memory encoding of the unique features during intermixed exposure. However, it is possible that a location bias could play a relevant role in explaining previous results of perceptual learning studies using complex visual stimuli. If this were the case, the only relevant feature would be the location, rather than the content, of the unique features. To further explore this possibility, we attempted to replicate the results of Lavis, Kadib, Mitchell, and Hall (2011, Experiment 2), which showed that additional exposure to the unique elements resulted in better discrimination than simple intermixed exposure. We manipulated the location of the unique elements during the additional exposure. In one experiment, they were located in the same position as that when presented together with the common element. In another experiment, the unique elements were located in the center of the screen, regardless of where they were located together with the common element. Our results showed that additional exposure only improved discrimination when the unique elements were presented in the same position as when they were presented together with the common element. The results reported here do not provide support for the explanation of the effects of additional exposure of the unique elements in terms of a better memory encoding and instead suggest an explanation in terms of location bias. PMID:26881901
NASA Technical Reports Server (NTRS)
Smalheer, C. V.
1973-01-01
The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.
Nuclear Material Variance Calculation
1995-01-01
MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet that significantly reduces the effort required to make the variance and covariance calculations needed to determine the detection sensitivity of a materials accounting system and loss of special nuclear material (SNM). The user is required to enter information into one of four data tables depending on the type of term in the materials balance (MB) equation. The four data tables correspond to input transfers, output transfers, and two types of inventory terms, one for nondestructive assay (NDA) measurements and one for measurements made by chemical analysis. Each data entry must contain an identification number and a short description, as well as values for the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements during an accounting period. The user must also specify the type of error model (additive or multiplicative) associated with each measurement, and possible correlations between transfer terms. Predefined spreadsheet macros are used to perform the variance and covariance calculations for each term based on the corresponding set of entries. MAVARIC has been used for sensitivity studies of chemical separation facilities, fuel processing and fabrication facilities, and gas centrifuge and laser isotope enrichment facilities.
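The additive/multiplicative distinction above can be illustrated with standard error propagation for a single materials-balance term. All numbers are invented, and this is generic propagation arithmetic, not a reproduction of MAVARIC's data tables or macros.

```python
# One transfer term measured n_meas times per accounting period.
mass = 120.0          # bulk mass per transfer (kg)
conc = 0.05           # SNM concentration (mass fraction)
n_meas = 12           # measurements per accounting period
sigma_add = 0.002     # additive error std (kg SNM per measurement)
sigma_mul = 0.01      # multiplicative (relative) error std

snm = n_meas * mass * conc   # total SNM through this term: 72 kg

# Additive model: each measurement contributes a fixed error variance,
# so independent errors accumulate linearly in n.
var_additive = n_meas * sigma_add ** 2

# Multiplicative model: the error scales with the measured amount.
# Independent relative errors also accumulate linearly in n, but a fully
# correlated (systematic) relative error propagates with n squared.
var_random_mul = n_meas * (mass * conc * sigma_mul) ** 2
var_systematic_mul = (n_meas * mass * conc * sigma_mul) ** 2

print(snm, var_additive, var_random_mul, var_systematic_mul)
```

The n-versus-n² distinction is why the spreadsheet asks about correlations between transfer terms: systematic measurement biases dominate the materials-balance variance as the number of transfers grows.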
Cosmology without cosmic variance
Bernstein, Gary M.; Cai, Yan -Chuan
2011-10-01
The growth of structures in the Universe is described by a function G that is predicted by the combination of the expansion history of the Universe and the laws of gravity within it. We examine the improvements in constraints on G that are available from the combination of a large-scale galaxy redshift survey with a weak gravitational lensing survey of background sources. We describe a new combination of such observations that in principle yields a measure of the growth rate that is free of sample variance, i.e. the uncertainty in G can be reduced without bound by increasing the number of redshifts obtained within a finite survey volume. The addition of background weak lensing data to a redshift survey increases information on G by an amount equivalent to a 10-fold increase in the volume of a standard redshift-space distortion measurement - if the lensing signal can be measured to sub-per cent accuracy. This argues that a combined lensing and redshift survey over a common low-redshift volume of the Universe is a more powerful test of general relativity than an isolated redshift survey over larger volume at high redshift, especially as surveys begin to cover most of the available sky.
Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.
2011-04-20
Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by "cosmic variance". This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate-mass galaxies, cosmic variance is
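In the linear regime the recipe above reduces to a product: the relative cosmic variance of a galaxy sample is the galaxy bias times the dark matter cosmic variance for the same field geometry. The numbers below are placeholders, not values read off the paper's fitting function.

```python
# Placeholder inputs for one survey geometry and one galaxy sample.
sigma_dm = 0.09      # dark matter relative cosmic variance for the field
bias = 3.0           # galaxy bias for a massive high-redshift sample
sigma_gal = bias * sigma_dm
print(round(sigma_gal, 2))  # 0.27, i.e. ~27% relative cosmic variance
```

This is why massive (highly biased) galaxies in small fields carry the largest cosmic variance: both factors in the product are large there.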
NASA Astrophysics Data System (ADS)
Moster, Benjamin P.; Somerville, Rachel S.; Newman, Jeffrey A.; Rix, Hans-Walter
2011-04-01
Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by "cosmic variance." This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic variance is
Nonlinear Epigenetic Variance: Review and Simulations
ERIC Educational Resources Information Center
Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.
2010-01-01
We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…
NASA Astrophysics Data System (ADS)
Chabuda, Krzysztof; Leroux, Ian D.; Demkowicz-Dobrzański, Rafał
2016-08-01
The instability of an atomic clock is characterized by the Allan variance, a measure widely used to describe the noise of frequency standards. We provide an explicit method to find the ultimate bound on the Allan variance of an atomic clock in the most general scenario where N atoms are prepared in an arbitrarily entangled state and arbitrary measurement and feedback are allowed, including those exploiting coherences between succeeding interrogation steps. While the method is rigorous and general, it becomes numerically challenging for large N and long averaging times.
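The Allan variance bounded in this paper has a simple standard estimator. A minimal sketch of the non-overlapping form for fractional-frequency data (function name and interface are my choices, not from the paper):

```python
import numpy as np

def allan_variance(y, m=1):
    """Non-overlapping Allan variance of fractional-frequency data y
    at averaging factor m (averaging time tau = m * tau0)."""
    y = np.asarray(y, dtype=float)
    n = len(y) // m
    # Average the data in non-overlapping blocks of length m.
    ybar = y[: n * m].reshape(n, m).mean(axis=1)
    # Half the mean squared difference of adjacent block averages.
    return 0.5 * np.mean(np.diff(ybar) ** 2)
```

For white frequency noise of variance sigma², this estimator averages down as sigma²/m, which is the classic 1/tau behavior of the Allan variance.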
Videotape Project in Child Variance. Final Report.
ERIC Educational Resources Information Center
Morse, William C.; Smith, Judith M.
The design, production, dissemination, and evaluation of a series of videotaped training packages designed to enable teachers, parents, and paraprofessionals to interpret child variance in light of personal and alternative perspectives of behavior are discussed. The goal of each package is to highlight unique contributions of different theoretical…
Variance Anisotropy in Kinetic Plasmas
NASA Astrophysics Data System (ADS)
Parashar, Tulasi N.; Oughton, Sean; Matthaeus, William H.; Wan, Minping
2016-06-01
Solar wind fluctuations admit well-documented anisotropies of the variance matrix, or polarization, related to the mean magnetic field direction. Typically, one finds a ratio of perpendicular variance to parallel variance of the order of 9:1 for the magnetic field. Here we study the question of whether a kinetic plasma spontaneously generates and sustains parallel variances when initiated with only perpendicular variance. We find that parallel variance grows and saturates at about 5% of the perpendicular variance in a few nonlinear times irrespective of the Reynolds number. For sufficiently large systems (Reynolds numbers) the variance approaches values consistent with the solar wind observations.
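A perpendicular-to-parallel variance ratio like the 9:1 figure quoted above can be measured from a vector time series. A hedged numpy sketch (estimating the mean-field direction from the sample mean is an assumed simplification):

```python
import numpy as np

def variance_anisotropy(b):
    """Ratio of perpendicular to parallel fluctuation variance for a
    time series of magnetic-field vectors b with shape (n, 3),
    relative to the mean-field direction."""
    b = np.asarray(b, dtype=float)
    b0 = b.mean(axis=0)                  # mean field
    e = b0 / np.linalg.norm(b0)          # unit vector along the mean field
    db = b - b0                          # fluctuations about the mean
    par = db @ e                         # parallel component
    perp = db - np.outer(par, e)         # perpendicular residual
    var_par = np.mean(par ** 2)
    var_perp = np.mean(np.sum(perp ** 2, axis=1))
    return var_perp / var_par
```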
Conversations across Meaning Variance
ERIC Educational Resources Information Center
Cordero, Alberto
2013-01-01
Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…
ERIC Educational Resources Information Center
Braun, W. John
2012-01-01
The Analysis of Variance is often taught in introductory statistics courses, but it is not clear that students really understand the method. This is because the derivation of the test statistic and p-value requires a relatively sophisticated mathematical background which may not be well-remembered or understood. Thus, the essential concept behind…
Minimum variance geographic sampling
NASA Technical Reports Server (NTRS)
Terrell, G. R. (Principal Investigator)
1980-01-01
Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.
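One way a distance-correlation model yields a minimum variance unbiased estimate of a mean is generalized least squares. A sketch assuming an exponential correlation model (the model form and function names are my assumptions, not necessarily the report's):

```python
import numpy as np

def blue_mean(values, coords, corr_range):
    """Best linear unbiased estimate of a population mean for spatially
    correlated samples, assuming the (hypothetical) exponential
    correlation model rho(h) = exp(-h / corr_range)."""
    values = np.asarray(values, dtype=float)
    coords = np.asarray(coords, dtype=float)
    # Pairwise distances and the implied correlation matrix.
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    R = np.exp(-h / corr_range)
    # GLS weights proportional to R^{-1} 1, normalized to sum to one,
    # which down-weights tightly clustered (redundant) samples.
    Rinv_1 = np.linalg.solve(R, np.ones(len(values)))
    w = Rinv_1 / Rinv_1.sum()
    return w @ values
```

With a negligible correlation range the weights become equal and the estimate reduces to the ordinary sample mean.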
Nominal analysis of "variance".
Weiss, David J
2009-08-01
Nominal responses are the natural way for people to report actions or opinions. Because nominal responses do not generate numerical data, they have been underutilized in behavioral research. On those occasions in which nominal responses are elicited, the responses are customarily aggregated over people or trials so that large-sample statistics can be employed. A new analysis is proposed that directly associates differences among responses with particular sources in factorial designs. A pair of nominal responses either matches or does not; when responses do not match, they vary. That analogue to variance is incorporated in the nominal analysis of "variance" (NANOVA) procedure, wherein the proportions of matches associated with sources play the same role as do sums of squares in an ANOVA. The NANOVA table is structured like an ANOVA table. The significance levels of the N ratios formed by comparing proportions are determined by resampling. Fictitious behavioral examples featuring independent groups and repeated measures designs are presented. A Windows program for the analysis is available.
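The match-or-vary statistic at the heart of NANOVA lends itself to a small sketch. The two-group setup and function names below are an illustrative simplification of the factorial procedure described, not the published NANOVA:

```python
import numpy as np

def match_proportion(a, b):
    """Proportion of cross-group response pairs (one from a, one from b)
    that match exactly."""
    a, b = np.asarray(a), np.asarray(b)
    return np.mean(a[:, None] == b[None, :])

def nanova_permutation_test(g1, g2, n_perm=2000, seed=0):
    """Resampling test of whether two groups of nominal responses differ.
    The statistic is the between-group match proportion; values low
    relative to the permutation distribution indicate a group effect."""
    rng = np.random.default_rng(seed)
    observed = match_proportion(g1, g2)
    pooled = np.concatenate([g1, g2])
    n1 = len(g1)
    perms = np.empty(n_perm)
    for k in range(n_perm):
        rng.shuffle(pooled)              # relabel under the null
        perms[k] = match_proportion(pooled[:n1], pooled[n1:])
    # One-sided p-value: fewer matches than expected under exchangeability.
    p = np.mean(perms <= observed)
    return observed, p
```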
Estimating Modifying Effect of Age on Genetic and Environmental Variance Components in Twin Models.
He, Liang; Sillanpää, Mikko J; Silventoinen, Karri; Kaprio, Jaakko; Pitkäniemi, Janne
2016-04-01
Twin studies have been adopted for decades to disentangle the relative genetic and environmental contributions for a wide range of traits. However, heritability estimation based on the classical twin models does not take into account dynamic behavior of the variance components over age. Varying variance of the genetic component over age can imply the existence of gene-environment (G×E) interactions that general genome-wide association studies (GWAS) fail to capture, which may lead to the inconsistency of heritability estimates between twin design and GWAS. Existing parametric G×E interaction models for twin studies are limited by assuming a linear or quadratic form of the variance curves with respect to a moderator, which can, however, be overly restrictive in reality. Here we propose spline-based approaches to explore the variance curves of the genetic and environmental components. We choose the additive genetic, common, and unique environmental variance components (ACE) model as the starting point. We treat the component variances as variance functions with respect to age modeled by B-splines or P-splines. We develop an empirical Bayes method to estimate the variance curves together with their confidence bands and provide an R package for public use. Our simulations demonstrate that the proposed methods accurately capture dynamic behavior of the component variances in terms of mean square errors with a data set of >10,000 twin pairs. Using the proposed methods as an alternative and major extension to the classical twin models, our analyses with a large-scale Finnish twin data set (19,510 MZ twins and 27,312 DZ same-sex twins) discover that the variances of the A, C, and E components for body mass index (BMI) change substantially across life span in different patterns and the heritability of BMI drops to ∼50% after middle age. The results further indicate that the decline of heritability is due to increasing unique environmental variance, which provides more
Systems Engineering Programmatic Estimation Using Technology Variance
NASA Technical Reports Server (NTRS)
Mog, Robert A.
2000-01-01
Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "return" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.
Budget variance analysis using RVUs.
Berlin, M F; Budzynski, M R
1998-01-01
This article details the use of variance analysis as a management tool to evaluate the financial health of the practice. A common financial tool for administrators has been a simple calculation measuring the difference between actual financials vs. budgeted financials. Standard cost accounting provides a methodology known as variance analysis to better understand actual vs. budgeted financial streams. The standard variance analysis has been modified by applying relative value units (RVUs) as standards for the practice. PMID:10387247
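A standard-cost variance decomposition with RVUs as the standard might look like the following (all figures are hypothetical, not from the article):

```python
# Hypothetical RVU-based budget variance, in the spirit of standard cost
# accounting: the total variance splits into a volume component (actual
# RVUs differ from budgeted RVUs, priced at the standard cost) and a
# rate component (actual cost per RVU differs from the standard).
budgeted_rvus = 10_000
actual_rvus = 11_000
standard_cost_per_rvu = 40.0
actual_cost = 462_000.0

budgeted_cost = budgeted_rvus * standard_cost_per_rvu
volume_variance = (actual_rvus - budgeted_rvus) * standard_cost_per_rvu
rate_variance = actual_cost - actual_rvus * standard_cost_per_rvu
total_variance = actual_cost - budgeted_cost
```

By construction the two components reconcile exactly to the total variance, which is what makes the decomposition useful for spotting whether an overrun is a volume problem or a cost-per-unit problem.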
Understanding gender variance in children and adolescents.
Simons, Lisa K; Leibowitz, Scott F; Hidalgo, Marco A
2014-06-01
Gender variance is an umbrella term used to describe gender identity, expression, or behavior that falls outside of culturally defined norms associated with a specific gender. In recent years, growing media coverage has heightened public awareness about gender variance in childhood and adolescence, and an increasing number of referrals to clinics specializing in care for gender-variant youth have been reported in the United States. Gender-variant expression, behavior, and identity may present in childhood and adolescence in a number of ways, and youth with gender variance have unique health needs. For those experiencing gender dysphoria, or distress encountered by the discordance between biological sex and gender identity, puberty is often an exceptionally challenging time. Pediatric primary care providers may be families' first resource for education and support, and they play a critical role in supporting the health of youth with gender variance by screening for psychosocial problems and health risks, referring for gender-specific mental health and medical care, and providing ongoing advocacy and support. PMID:24972420
Variance estimation for nucleotide substitution models.
Chen, Weishan; Wang, Hsiuying
2015-09-01
The current variance estimators for most evolutionary models were derived by approximating the nucleotide substitution number estimator with a simple first-order Taylor expansion. In this study, we derive three variance estimators for the F81, F84, HKY85, and TN93 nucleotide substitution models. They are obtained using the second-order Taylor expansion of the substitution number estimator, the first-order Taylor expansion of a squared deviation, and the second-order Taylor expansion of a squared deviation, respectively. These variance estimators are compared with the existing variance estimator in a simulation study, which shows that the estimator derived using the second-order Taylor expansion of a squared deviation is more accurate than the other three. In addition, we compare these estimators with an estimator derived by the bootstrap method. The simulation shows that the performance of this bootstrap estimator is similar to that of the estimator derived by the second-order Taylor expansion of a squared deviation. Since the latter has an explicit form, it is more efficient than the bootstrap estimator.
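The paper's estimators target the F81/F84/HKY85/TN93 models; as a simpler illustration of the first-order (delta-method) baseline they improve on, here is the classic Jukes-Cantor (JC69) case, which is an assumption of mine for illustration, not the paper's estimator:

```python
import numpy as np

def jc69_distance_and_variance(p, n):
    """JC69 substitution-number estimate from the observed proportion p
    of differing sites among n aligned sites, with its first-order
    (delta-method) variance: Var(d) ~= (d'(p))^2 * p(1-p)/n."""
    d = -0.75 * np.log(1.0 - 4.0 * p / 3.0)     # distance estimate
    deriv = 1.0 / (1.0 - 4.0 * p / 3.0)          # d'(p)
    var_d = deriv ** 2 * p * (1.0 - p) / n
    return d, var_d
```

The second-order estimators in the paper add the next Taylor term to this kind of propagation, which matters most when p is large and the transform is strongly curved.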
Variant evolutionary trees under phenotypic variance.
Nishimura, Kinya; Isoda, Yutaka
2004-01-01
Evolutionary branching, the coevolutionary development of two or more distinctive traits from a single trait in a population, is a central issue in recent studies of adaptive dynamics. Previous studies revealed that trait variance is a minimum requirement for evolutionary branching, but suggested that it does not play an important role in shaping the evolutionary pattern of branching. Here we demonstrate that trait evolution can follow various branching paths from an identical initial trait to different terminal traits when only the assumption about trait variance is changed. The key feature of this phenomenon is the topological configuration of equilibria and the initial point in the manifold of dimorphism from which dimorphic branches develop. This suggests that the existing monomorphic or polymorphic set in a population is not a unique, inevitable consequence of an identical initial phenotype.
Sampling Errors of Variance Components.
ERIC Educational Resources Information Center
Sanders, Piet F.
A study on sampling errors of variance components was conducted within the framework of generalizability theory by P. L. Smith (1978). The study used an intuitive approach for solving the problem of how to allocate the number of conditions to different facets in order to produce the most stable estimate of the universe score variance. Optimization…
Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.
Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S
2016-04-01
Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate if heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs), cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices, and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance are discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regressions. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity. PMID:26995641
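The recommended power model of variance and the weighted calibration it feeds can be sketched as follows (the function names and the log-log fit of the power model are illustrative assumptions, not the authors' exact procedure):

```python
import numpy as np

def fit_power_variance(levels, replicate_stds):
    """Fit the power model sigma = a * x**b to replicate standard
    deviations at each calibration level, via log-log least squares."""
    A = np.column_stack([np.ones_like(levels), np.log(levels)])
    coef, *_ = np.linalg.lstsq(A, np.log(replicate_stds), rcond=None)
    return np.exp(coef[0]), coef[1]          # a, b

def weighted_linear_fit(x, y, sigma):
    """Weighted least-squares calibration line y = m*x + c with
    weights 1/sigma**2 (heteroskedastic errors)."""
    w = 1.0 / sigma ** 2
    A = np.column_stack([x, np.ones_like(x)])
    W = np.diag(w)
    m, c = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return m, c
```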
A proxy for variance in dense matching over homogeneous terrain
NASA Astrophysics Data System (ADS)
Altena, Bas; Cockx, Liesbet; Goedemé, Toon
2014-05-01
Automation in photogrammetry and avionics has brought highly autonomous UAV mapping solutions to the market. These systems have great potential for geophysical research, due to their mobility and simplicity of operation. Flight planning can be done on site and orientation parameters are estimated automatically. However, one major drawback is still present: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though more robustness is added in the estimation through multi-view geometry, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, its epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated with respect to the intensity signal of the topography (SNR), and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two different case studies were elaborated. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, which resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimates of the topography and was compared with the data from the other UAV flight. In the second study a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. Although the beach had a very low
Integrating Variances into an Analytical Database
NASA Technical Reports Server (NTRS)
Sanchez, Carlos
2010-01-01
For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.
A Computer Program to Determine Reliability Using Analysis of Variance
ERIC Educational Resources Information Center
Burns, Edward
1976-01-01
A computer program, written in Fortran IV, is described which assesses reliability by using analysis of variance. It produces a complete analysis of variance table in addition to reliability coefficients for unadjusted and adjusted data as well as the intraclass correlation for m subjects and n items. (Author)
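Reliability from a subjects-by-items ANOVA is presumably computed by Hoyt's method, which is algebraically identical to Cronbach's alpha. A sketch under that assumption (not a port of the Fortran program):

```python
import numpy as np

def hoyt_reliability(scores):
    """Reliability from an m-subjects x n-items score matrix via a
    two-way ANOVA decomposition (Hoyt's method):
    reliability = 1 - MS_residual / MS_subjects."""
    scores = np.asarray(scores, dtype=float)
    m, n = scores.shape
    grand = scores.mean()
    subj_means = scores.mean(axis=1)
    item_means = scores.mean(axis=0)
    ss_subj = n * np.sum((subj_means - grand) ** 2)
    ss_item = m * np.sum((item_means - grand) ** 2)
    ss_resid = np.sum((scores - grand) ** 2) - ss_subj - ss_item
    ms_subj = ss_subj / (m - 1)
    ms_resid = ss_resid / ((m - 1) * (n - 1))
    return 1.0 - ms_resid / ms_subj
```

Because Hoyt's ratio equals Cronbach's alpha, the result can be cross-checked against the item-variance formula for alpha.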
Estimation of Variance Components of Quantitative Traits in Inbred Populations
Abney, Mark; McPeek, Mary Sara; Ober, Carole
2000-01-01
Summary Use of variance-component estimation for mapping of quantitative-trait loci in humans is a subject of great current interest. When only trait values, not genotypic information, are considered, variance-component estimation can also be used to estimate heritability of a quantitative trait. Inbred pedigrees present special challenges for variance-component estimation. First, there are more variance components to be estimated in the inbred case, even for a relatively simple model including additive, dominance, and environmental effects. Second, more identity coefficients need to be calculated from an inbred pedigree in order to perform the estimation, and these are computationally more difficult to obtain in the inbred than in the outbred case. As a result, inbreeding effects have generally been ignored in practice. We describe here the calculation of identity coefficients and estimation of variance components of quantitative traits in large inbred pedigrees, using the example of HDL in the Hutterites. We use a multivariate normal model for the genetic effects, extending the central-limit theorem of Lange to allow for both inbreeding and dominance under the assumptions of our variance-component model. We use simulated examples to give an indication of under what conditions one has the power to detect the additional variance components and to examine their impact on variance-component estimation. We discuss the implications for mapping and heritability estimation by use of variance components in inbred populations. PMID:10677322
Some characterizations of unique extremality
NASA Astrophysics Data System (ADS)
Yao, Guowu
2008-07-01
In this paper, it is shown that some necessary characteristic conditions for unique extremality obtained by Zhu and Chen are also sufficient and some sufficient ones by them actually imply that the uniquely extremal Beltrami differentials have a constant modulus. In addition, some local properties of uniquely extremal Beltrami differentials are given.
The Variance Reaction Time Model
ERIC Educational Resources Information Center
Sikstrom, Sverker
2004-01-01
The variance reaction time model (VRTM) is proposed to account for various recognition data on reaction time, the mirror effect, receiver-operating-characteristic (ROC) curves, etc. The model is based on simple and plausible assumptions within a neural network: VRTM is a two layer neural network where one layer represents items and one layer…
Analysis of Variance: Variably Complex
ERIC Educational Resources Information Center
Drummond, Gordon B.; Vowler, Sarah L.
2012-01-01
These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution of…
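The question ANOVA answers here (do at least two of the group means differ?) reduces to the F ratio of between-group to within-group mean squares. A minimal sketch of that statistic:

```python
import numpy as np

def one_way_anova_F(*groups):
    """F statistic for one-way ANOVA: between-group mean square divided
    by within-group mean square."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups)
    N = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    # Mean squares use k-1 and N-k degrees of freedom respectively.
    return (ss_between / (k - 1)) / (ss_within / (N - k))
```

Under the null of equal means F hovers near 1; a clearly shifted group drives it far above 1, which is the essential concept behind the test.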
Relating the Hadamard Variance to MCS Kalman Filter Clock Estimation
NASA Technical Reports Server (NTRS)
Hutsell, Steven T.
1996-01-01
The Global Positioning System (GPS) Master Control Station (MCS) currently makes significant use of the Allan Variance. This two-sample variance equation has proven excellent as a handy, understandable tool, both for time domain analysis of GPS cesium frequency standards, and for fine tuning the MCS's state estimation of these atomic clocks. The Allan Variance does not explicitly converge for noise types of alpha less than or equal to minus 3 and can be greatly affected by frequency drift. Because GPS rubidium frequency standards exhibit non-trivial aging and aging noise characteristics, the basic Allan Variance analysis must be augmented in order to (a) compensate for a dynamic frequency drift, and (b) characterize two additional noise types, specifically alpha = minus 3, and alpha = minus 4. As the GPS program progresses, we will utilize a larger percentage of rubidium frequency standards than ever before. Hence, GPS rubidium clock characterization will require more attention than ever before. The three-sample variance, commonly referred to as a renormalized Hadamard Variance, is unaffected by linear frequency drift, converges for alpha greater than minus 5, and thus has utility for modeling noise in GPS rubidium frequency standards. This paper demonstrates the potential of Hadamard Variance analysis in GPS operations, and presents an equation that relates the Hadamard Variance to the MCS's Kalman filter process noises.
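The drift immunity claimed for the three-sample variance is easy to demonstrate numerically: the Hadamard variance uses second differences, which cancel a linear frequency drift exactly, while the Allan variance's first differences do not. A minimal sketch (non-overlapping estimators; names are my choices):

```python
import numpy as np

def hadamard_variance(y, m=1):
    """Non-overlapping three-sample (Hadamard) variance of
    fractional-frequency data y at averaging factor m."""
    y = np.asarray(y, dtype=float)
    n = len(y) // m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)
    d2 = np.diff(ybar, n=2)          # second differences cancel linear drift
    return np.mean(d2 ** 2) / 6.0

def allan_variance(y, m=1):
    """Two-sample (Allan) variance, for comparison: sensitive to drift."""
    y = np.asarray(y, dtype=float)
    n = len(y) // m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(ybar) ** 2)
```

A pure linear drift gives a Hadamard variance of zero (to rounding) but a nonzero Allan variance; for white frequency noise the two agree in expectation.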
Variance and covariance estimates for weaning weight of Senepol cattle.
Wright, D W; Johnson, Z B; Brown, C J; Wildeus, S
1991-10-01
Variance and covariance components were estimated for weaning weight from Senepol field data for use in the reduced animal model for a maternally influenced trait. The 4,634 weaning records were used to evaluate 113 sires and 1,406 dams on the island of St. Croix. Estimates of direct additive genetic variance (sigma 2A), maternal additive genetic variance (sigma 2M), covariance between direct and maternal additive genetic effects (sigma AM), permanent maternal environmental variance (sigma 2PE), and residual variance (sigma 2 epsilon) were calculated by equating variances estimated from a sire-dam model and a sire-maternal grandsire model, with and without the inverse of the numerator relationship matrix (A-1), to their expectations. Estimates were sigma 2A, 139.05 and 138.14 kg2; sigma 2M, 307.04 and 288.90 kg2; sigma AM, -117.57 and -103.76 kg2; sigma 2PE, -258.35 and -243.40 kg2; and sigma 2 epsilon, 588.18 and 577.72 kg2 with and without A-1, respectively. Heritability estimates for direct additive (h2A) were .211 and .210 with and without A-1, respectively. Heritability estimates for maternal additive (h2M) were .47 and .44 with and without A-1, respectively. Correlations between direct and maternal (IAM) effects were -.57 and -.52 with and without A-1, respectively. PMID:1778806
Variance decomposition in stochastic simulators
NASA Astrophysics Data System (ADS)
Le Maître, O. P.; Knio, O. M.; Moraes, A.
2015-06-01
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
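The Sobol-Hoeffding decomposition behind these variance-based sensitivities can be illustrated on a deterministic toy function. A minimal pick-freeze Monte Carlo estimator of first-order indices (U(0,1) inputs and function names are my assumptions; this is far simpler than the paper's reformulated Poisson-process setting):

```python
import numpy as np

def sobol_first_order(f, d, n=400000, seed=0):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol indices
    S_i = Var(E[f | X_i]) / Var(f), for f taking an (n, d) array of
    independent U(0, 1) inputs and returning n outputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA = f(A)
    mean, var = fA.mean(), fA.var()
    S = np.empty(d)
    for i in range(d):
        C = B.copy()
        C[:, i] = A[:, i]        # freeze X_i, resample all other inputs
        S[i] = (np.mean(fA * f(C)) - mean ** 2) / var
    return S
```

For an additive test function f = X1 + 2 X2 the exact indices are S1 = 0.2 and S2 = 0.8, since the variances contributed are 1/12 and 4/12.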
Neutrino mass without cosmic variance
NASA Astrophysics Data System (ADS)
LoVerde, Marilena
2016-05-01
Measuring the absolute scale of the neutrino masses is one of the most exciting opportunities available with near-term cosmological data sets. Two quantities that are sensitive to neutrino mass, scale-dependent halo bias b(k) and the linear growth parameter f(k) inferred from redshift-space distortions, can be measured without cosmic variance. Unlike the amplitude of the matter power spectrum, which always has a finite error, the error on b(k) and f(k) continues to decrease as the number density of tracers increases. This paper presents forecasts for statistics of galaxy and lensing fields that are sensitive to neutrino mass via b(k) and f(k). The constraints on neutrino mass from the auto- and cross-power spectra of spectroscopic and photometric galaxy samples are weakened by scale-dependent bias unless a very high density of tracers is available. In the high-density limit, using multiple tracers allows cosmic variance to be beaten, and the forecasted errors on neutrino mass shrink dramatically. In practice, beating the cosmic-variance errors on neutrino mass with b(k) will be a challenge, but this signal is nevertheless a new probe of neutrino effects on structure formation that is interesting in its own right.
Variance Components: Partialled vs. Common.
ERIC Educational Resources Information Center
Curtis, Ervin W.
1985-01-01
A new approach to partialling components is used. Like conventional partialling, this approach orthogonalizes variables by partitioning the scores or observations. Unlike conventional partialling, it yields a common component and two unique components. (Author/GDC)
Variance analysis. Part I, Extending flexible budget variance analysis to acuity.
Finkler, S A
1991-01-01
The author reviews the concepts of flexible budget variance analysis, including the price, quantity, and volume variances generated by that technique. He also introduces the concept of acuity variance and provides direction on how such a variance measure can be calculated. Part II in this two-part series on variance analysis will look at how personal computers can be useful in the variance analysis process. PMID:1870002
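The price, quantity, and volume variances the article reviews reconcile the static budget to actual spending. A sketch with hypothetical nursing-unit figures (all numbers and names below are invented for the example, not from the article):

```python
# Budgeted (static) figures -- illustrative only
budget_hours_per_day = 4.0      # budgeted nursing hours per patient day
budget_rate = 30.0              # budgeted cost per nursing hour ($)
budget_patient_days = 1000

# Actual results
actual_hours_per_day = 4.4
actual_rate = 32.0
actual_patient_days = 1100

actual_hours = actual_hours_per_day * actual_patient_days
flex_hours = budget_hours_per_day * actual_patient_days   # flexible budget hours

# Price: paid a different rate for the hours actually used
price_variance = (actual_rate - budget_rate) * actual_hours
# Quantity (efficiency): used more hours than the flexible budget allows
quantity_variance = (actual_hours - flex_hours) * budget_rate
# Volume: served more patient days than the static budget assumed
volume_variance = (actual_patient_days - budget_patient_days) * budget_hours_per_day * budget_rate

total_variance = (actual_rate * actual_hours
                  - budget_rate * budget_hours_per_day * budget_patient_days)

# The three components fully reconcile the static-budget variance.
assert abs(price_variance + quantity_variance + volume_variance - total_variance) < 1e-6
print(price_variance, quantity_variance, volume_variance)
```

An acuity variance, as introduced in the article, would further split the quantity variance by patient-acuity mix; that refinement is omitted here.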
Maximization of total genetic variance in breed conservation programmes.
Cervantes, I; Meuwissen, T H E
2011-12-01
The preservation of maximum genetic diversity in a population is one of the main objectives of a breed conservation programme. We applied the maximum variance total (MVT) method to a unique population in order to maximize the total genetic variance. The function maximization was performed by an annealing algorithm. We selected the parents and the mating scheme at the same time by simply maximizing the total genetic variance (a mate selection problem). The scenario was compared with a scenario of full-sib lines, an MVT scenario with a restriction on the rate of inbreeding, and a minimum coancestry selection scenario. The MVT method produces sublines in the population, attaining a scheme similar to full-sib sublining; this agrees with other authors' conclusion that the maximum genetic diversity in a population (the lowest overall coancestry) is attained in the long term by subdividing it into as many isolated groups as possible. Applying a restriction on the rate of inbreeding jointly with the MVT method avoids the consequences of inbreeding depression and maintains the effective size at an acceptable minimum. The minimum coancestry selection scenario gave higher effective size values but a lower total genetic variance. Maximization of the total genetic variance ensures more genetic variation for extreme traits, which could be useful in case the population needs to adapt to a new environment or production system.
Warped functional analysis of variance.
Gervini, Daniel; Carter, Patrick A
2014-09-01
This article presents an Analysis of Variance model for functional data that explicitly incorporates phase variability through a time-warping component, allowing for a unified approach to estimation and inference in the presence of amplitude and time variability. The focus is on single-random-factor models, but the approach can be easily generalized to more complex ANOVA models. The behavior of the estimators is studied by simulation, and an application to the analysis of growth curves of flour beetles is presented. Although the model assumes a smooth latent process behind the observed trajectories, smoothness of the observed data is not required; the method can be applied to irregular time grids, which are common in longitudinal studies.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 7 2011-07-01 2011-07-01 false Variances. 1920.2 Section 1920.2 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR (CONTINUED...' COMPENSATION ACT § 1920.2 Variances. (a) Variances from standards in parts 1915 through 1918 of this...
Estimation of Variance Components Using Computer Packages.
ERIC Educational Resources Information Center
Chastain, Robert L.; Willson, Victor L.
Generalizability theory is based upon analysis of variance (ANOVA) and requires estimation of variance components for the ANOVA design under consideration in order to compute either G (Generalizability) or D (Decision) coefficients. Estimation of variance components has a number of alternative methods available using SAS, BMDP, and ad hoc…
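A minimal sketch of the variance-component estimation the abstract refers to, done by hand rather than with SAS or BMDP: a crossed persons-by-items (p × i) design, components solved from the ANOVA expected mean squares, and a relative G coefficient. The data are simulated with known components so the recovery can be checked; all sizes and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_p, n_i = 60, 8
x = (10
     + rng.normal(0, 2.0, (n_p, 1))       # person effect, true variance 4.0
     + rng.normal(0, 1.0, (1, n_i))       # item effect, true variance 1.0
     + rng.normal(0, 1.5, (n_p, n_i)))    # residual, true variance 2.25

grand = x.mean()
ss_p = n_i * ((x.mean(axis=1) - grand) ** 2).sum()
ss_i = n_p * ((x.mean(axis=0) - grand) ** 2).sum()
ss_res = ((x - x.mean(axis=1, keepdims=True)
             - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()

ms_p = ss_p / (n_p - 1)
ms_i = ss_i / (n_i - 1)
ms_res = ss_res / ((n_p - 1) * (n_i - 1))

# Solve the expected-mean-square equations for the variance components.
var_res = ms_res
var_p = (ms_p - ms_res) / n_i
var_i = (ms_i - ms_res) / n_p

g_relative = var_p / (var_p + var_res / n_i)   # relative G coefficient, n_i items
print(var_p, var_i, var_res, g_relative)
```

A D study would re-evaluate `g_relative` with different hypothetical numbers of items in the denominator.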
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application..., practices, means, methods, operations, or processes used or proposed to be used by the contractor; and...
Increasing selection response by Bayesian modeling of heterogeneous environmental variances
Technology Transfer Automated Retrieval System (TEKTRAN)
Heterogeneity of environmental variance among genotypes reduces selection response because genotypes with higher variance are more likely to be selected than low-variance genotypes. Modeling heterogeneous variances to obtain weighted means corrected for heterogeneous variances is difficult in likel...
Restricted sample variance reduces generalizability.
Lakes, Kimberley D
2013-06-01
One factor that affects the reliability of observed scores is restriction of range on the construct measured for a particular group of study participants. This study illustrates how researchers can use generalizability theory to evaluate the impact of restriction of range in particular sample characteristics on the generalizability of test scores and to estimate how changes in measurement design could improve the generalizability of the test scores. An observer-rated measure of child self-regulation (Response to Challenge Scale; Lakes, 2011) is used to examine scores for 198 children (Grades K through 5) within the generalizability theory (GT) framework. The generalizability of ratings within relatively developmentally homogeneous samples is examined and illustrates the effect of reduced variance among ratees on generalizability. Forecasts for g coefficients of various D study designs demonstrate how higher generalizability could be achieved by increasing the number of raters or items. In summary, the research presented illustrates the importance of and procedures for evaluating the generalizability of a set of scores in a particular research context. PMID:23205627
Simulation testing of unbiasedness of variance estimators
Link, W.A.
1993-01-01
In this article I address the evaluation of estimators of variance for parameter estimates. Given an unbiased estimator X of a parameter θ, and an estimator V of the variance of X, how does one test (via simulation) whether V is an unbiased estimator of the variance of X? The derivation of the test statistic illustrates the need for care in substituting consistent estimators for unknown parameters.
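The question can be illustrated with a small simulation (a generic sketch, not Link's test statistic): take X to be the sample mean of n standard normals, V = s²/n its usual variance estimator, and z-test the simulated mean of V against the known true value.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 10, 20000
true_var_xbar = 1.0 / n                   # Var of the mean of n N(0,1) draws

samples = rng.normal(0.0, 1.0, (reps, n))
v = samples.var(axis=1, ddof=1) / n       # one V per simulated data set

# z statistic for H0: E[V] = true Var(X). Note the standard error here is
# itself estimated -- exactly the substitution the abstract warns needs care.
se = v.std(ddof=1) / np.sqrt(reps)
z = (v.mean() - true_var_xbar) / se
print(f"mean V = {v.mean():.5f}, target = {true_var_xbar}, z = {z:.2f}")
```

With an unbiased V, the z statistic should behave like a standard normal as the number of replicates grows.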
Perspective projection for variance pose face recognition from camera calibration
NASA Astrophysics Data System (ADS)
Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.
2016-04-01
Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance-pose face features is challenging. We provide a solution to this problem using perspective projection for variance-pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face box tracking and centre-of-eyes detection can be identified using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. The training of frontal images and the rest of the poses on the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms, thus enabling stable measurement in variance pose for each individual.
Code of Federal Regulations, 2011 CFR
2011-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2010 CFR
2010-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Multireader multicase variance analysis for binary data.
Gallas, Brandon D; Pennello, Gene A; Myers, Kyle J
2007-12-01
Multireader multicase (MRMC) variance analysis has become widely utilized to analyze observer studies for which the summary measure is the area under the receiver operating characteristic (ROC) curve. We extend MRMC variance analysis to binary data and also to generic study designs in which every reader may not interpret every case. A subset of the fundamental moments central to MRMC variance analysis of the area under the ROC curve (AUC) is found to be required. Through multiple simulation configurations, we compare our unbiased variance estimates to naïve estimates across a range of study designs, average percent correct, and numbers of readers and cases.
Code of Federal Regulations, 2014 CFR
2014-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2013 CFR
2013-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2012 CFR
2012-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
The return of the variance: intraspecific variability in community ecology.
Violle, Cyrille; Enquist, Brian J; McGill, Brian J; Jiang, Lin; Albert, Cécile H; Hulshof, Catherine; Jung, Vincent; Messier, Julie
2012-04-01
Despite being recognized as a promoter of diversity and a condition for local coexistence decades ago, the importance of intraspecific variance has been neglected over time in community ecology. Recently, there has been a new emphasis on intraspecific variability. Indeed, recent developments in trait-based community ecology have underlined the need to integrate variation at both the intraspecific as well as interspecific level. We introduce new T-statistics ('T' for trait), based on the comparison of intraspecific and interspecific variances of functional traits across organizational levels, to operationally incorporate intraspecific variability into community ecology theory. We show that a focus on the distribution of traits at local and regional scales combined with original analytical tools can provide unique insights into the primary forces structuring communities.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) RADIOLOGICAL HEALTH... and Radiological Health, Food and Drug Administration, may grant a variance from one or...
Variance Design and Air Pollution Control
ERIC Educational Resources Information Center
Ferrar, Terry A.; Brownstein, Alan B.
1975-01-01
Air pollution control authorities were forced to relax air quality standards during the winter of 1972 by granting variances. This paper examines the institutional characteristics of these variance policies from an economic incentive standpoint, sets up desirable structural criteria for institutional design and arrives at policy guidelines for…
On Some Representations of Sample Variance
ERIC Educational Resources Information Center
Joarder, Anwar H.
2002-01-01
The usual formula for variance depending on rounding off the sample mean lacks precision, especially when computer programs are used for the calculation. The well-known simplification of the total sums of squares does not always give benefit. Since the variance of two observations is easily calculated without the use of a sample mean, and the…
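The mean-free representation the abstract alludes to can be verified directly: the sample variance equals the average of squared pairwise differences, so no (possibly rounded) sample mean is needed. The numbers below are arbitrary.

```python
import numpy as np

x = np.array([3.1, 4.7, 2.2, 5.9, 4.0])
n = len(x)

s2_usual = x.var(ddof=1)

diffs = x[:, None] - x[None, :]                 # all ordered pairs (i, j)
# Ordered pairs count each unordered pair twice, hence the factor of 2.
s2_pairwise = (diffs ** 2).sum() / (2 * n * (n - 1))

assert np.isclose(s2_usual, s2_pairwise)

# Two-observation special case noted in the abstract: s^2 = (x1 - x2)^2 / 2
assert np.isclose(np.var([3.0, 7.0], ddof=1), (3.0 - 7.0) ** 2 / 2)
print(s2_usual)
```

Because every term is a difference of raw observations, this form avoids the loss of precision that comes from subtracting a rounded mean from each value.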
Save money by understanding variance and tolerancing.
Stuart, K
2007-01-01
Manufacturing processes are inherently variable, which results in component and assembly variance. Unless process capability, variance and tolerancing are fully understood, incorrect design tolerances may be applied, which will lead to more expensive tooling, inflated production costs, high reject rates, product recalls and excessive warranty costs. A methodology is described for correctly allocating tolerances and performing appropriate analyses.
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2012 CFR
2012-01-01
... OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application... that the contractor has taken to inform the affected workers of the application, which must include... application and specifying where a copy may be examined at the place or places where notices to workers...
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2013 CFR
2013-01-01
... OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application... that the contractor has taken to inform the affected workers of the application, which must include... application and specifying where a copy may be examined at the place or places where notices to workers...
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2011 CFR
2011-01-01
... OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application... that the contractor has taken to inform the affected workers of the application, which must include... application and specifying where a copy may be examined at the place or places where notices to workers...
Genetic Variance for Body Size in a Natural Population of Drosophila Buzzatii
Ruiz, A.; Santos, M.; Barbadilla, A.; Quezada-Diaz, J. E.; Hasson, E.; Fontdevila, A.
1991-01-01
Previous work has shown thorax length to be under directional selection in the Drosophila buzzatii population of Carboneras. In order to predict the genetic consequences of natural selection, genetic variation for this trait was investigated in two ways. First, narrow sense heritability was estimated in the laboratory F2 generation of a sample of wild flies by means of the offspring-parent regression. A relatively high value, 0.59, was obtained. Because the phenotypic variance of wild flies was 7-9 times that of the flies raised in the laboratory, "natural" heritability may be estimated as one-seventh to one-ninth that value. Second, the contribution of the second and fourth chromosomes, which are polymorphic for paracentric inversions, to the genetic variance of thorax length was estimated in the field and in the laboratory. This was done with the assistance of a simple genetic model which shows that the variance among chromosome arrangements and the variance among karyotypes provide minimum estimates of the chromosome's contribution to the additive and genetic variances of the trait, respectively. In males raised under optimal conditions in the laboratory, the variance among second-chromosome karyotypes accounted for 11.43% of the total phenotypic variance and most of this variance was additive; by contrast, the contribution of the fourth chromosome was nonsignificant. The variance among second-chromosome karyotypes accounted for 1.56-1.78% of the total phenotypic variance in wild males and was nonsignificant in wild females. The variance among fourth chromosome karyotypes accounted for 0.14-3.48% of the total phenotypic variance in wild flies. At both chromosomes, the proportion of additive variance was higher in mating flies than in nonmating flies. PMID:1916242
Portfolio optimization with mean-variance model
NASA Astrophysics Data System (ADS)
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed in portfolio optimization. The mean-variance model is an optimization model that aims to minimize the portfolio risk, which is the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition of the stocks is different. Moreover, investors can get the return at the minimum level of risk with the constructed optimal mean-variance portfolio.
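A minimal sketch of the mean-variance machinery on simulated data (the study's FBMKLCI returns are not reproduced here, and the closed form below assumes short sales are allowed): the global minimum-variance weights solve min wᵀΣw subject to Σw = 1, giving w ∝ Σ⁻¹·1.

```python
import numpy as np

# Simulated weekly returns for four hypothetical stocks (rows = weeks),
# with a shared market factor so the covariance matrix is non-diagonal.
rng = np.random.default_rng(2)
common = rng.normal(0.0, 0.01, (104, 1))
R = 0.002 + common + rng.normal(0.0, 0.03, (104, 4))

mu = R.mean(axis=0)                 # estimated mean returns
Sigma = np.cov(R, rowvar=False)     # estimated covariance matrix

# Global minimum-variance portfolio: w proportional to Sigma^{-1} * 1.
ones = np.ones(len(mu))
w = np.linalg.solve(Sigma, ones)
w /= w.sum()                        # weights sum to one

port_var = w @ Sigma @ w
print("weights:", np.round(w, 3), "portfolio variance:", port_var)
```

Adding a target-return constraint turns this into the standard two-constraint Markowitz problem; by construction the minimum-variance portfolio's variance never exceeds that of any single asset.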
42 CFR 456.525 - Request for renewal of variance.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time... variance to the Administrator at least 30 days before the variance expires. (b) The renewal request...
Portfolio optimization using median-variance approach
NASA Astrophysics Data System (ADS)
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied to many decision-making problems, particularly portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, and this is not generally true. As an alternative, in this paper we employ the median-variance approach to improve the portfolio optimization. This approach caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach is capable of producing a lower risk for each return earned as compared to the mean-variance approach.
Automatic variance analysis of multistage care pathways.
Li, Xiang; Liu, Haifeng; Zhang, Shilei; Mei, Jing; Xie, Guotong; Yu, Yiqin; Li, Jing; Lakshmanan, Geetika T
2014-01-01
A care pathway (CP) is a standardized process that consists of multiple care stages, clinical activities and their relations, aimed at ensuring and enhancing the quality of care. However, actual care may deviate from the planned CP, and analysis of these deviations can help clinicians refine the CP and reduce medical errors. In this paper, we propose a CP variance analysis method to automatically identify the deviations between actual patient traces in electronic medical records (EMR) and a multistage CP. As the care stage information is usually unavailable in EMR, we first align every trace with the CP using a hidden Markov model. From the aligned traces, we report three types of deviations for every care stage: additional activities, absent activities and violated constraints, which are identified by using the techniques of temporal logic and binomial tests. The method has been applied to a CP for the management of congestive heart failure and real world EMR, providing meaningful evidence for the further improvement of care quality. PMID:25160280
Component Processes in Reading: Shared and Unique Variance in Serial and Isolated Naming Speed
ERIC Educational Resources Information Center
Logan, Jessica A. R.; Schatschneider, Christopher
2014-01-01
Reading ability is comprised of several component processes. In particular, the connection between the visual and verbal systems has been demonstrated to play an important role in the reading process. The present study provides a review of the existing literature on the visual verbal connection as measured by two tasks, rapid serial naming and…
Commonality Analysis: A Method of Analyzing Unique and Common Variance Proportions.
ERIC Educational Resources Information Center
Kroff, Michael W.
This paper considers the use of commonality analysis as an effective tool for analyzing relationships between variables in multiple regression or canonical correlational analysis (CCA). The merits of commonality analysis are discussed and the procedure for running commonality analysis is summarized as a four-step process. A heuristic example is…
ERIC Educational Resources Information Center
Brotheridge, Celeste M.; Power, Jacqueline L.
2008-01-01
Purpose: This study seeks to examine the extent to which the use of career center services results in the significant incremental prediction of career outcomes beyond its established predictors. Design/methodology/approach: The authors survey the clients of a public agency's career center and use hierarchical multiple regressions in order to…
Nonorthogonal Analysis of Variance Programs: An Evaluation.
ERIC Educational Resources Information Center
Hosking, James D.; Hamer, Robert M.
1979-01-01
Six computer programs for four methods of nonorthogonal analysis of variance are compared for capabilities, accuracy, cost, transportability, quality of documentation, associated computational capabilities, and ease of use: OSIRIS; SAS; SPSS; MANOVA; BMDP2V; and MULTIVARIANCE. (CTM)
40 CFR 142.41 - Variance request.
Code of Federal Regulations, 2010 CFR
2010-07-01
... and evidence of the best available treatment technology and techniques. (2) Economic and legal factors... water in the case of an excessive rise in the contaminant level for which the variance is requested....
Code of Federal Regulations, 2011 CFR
2011-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2010 CFR
2010-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
7 CFR 205.290 - Temporary variances.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) ORGANIC FOODS PRODUCTION ACT... notify each production or handling operation it certifies to which the temporary variance applies....
Reducing variance in batch partitioning measurements
Mariner, Paul E.
2010-08-11
The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure Kd values (e.g., ASTM D 4646 and EPA 402-R-99-004A) do not explain how to evaluate measurement uncertainty nor how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjustment of the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
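A Monte Carlo sketch of the design point, with invented concentrations and a simplified error model (a fixed absolute measurement error on both concentrations, which is an assumption, not the abstract's procedure): the relative error of a batch Kd is smallest when roughly half the sorbate partitions to the sorbent.

```python
import numpy as np

rng = np.random.default_rng(3)
c0_true = 100.0       # initial aqueous concentration (arbitrary units)
sigma = 1.0           # assumed absolute measurement error, same units
reps = 100000

def kd_rel_error(f_sorbed):
    """Relative spread of Kd (per unit solution:sorbent ratio) when a
    fraction f_sorbed of the sorbate is on the solid at equilibrium."""
    cw_true = c0_true * (1.0 - f_sorbed)
    c0_meas = c0_true + sigma * rng.normal(size=reps)
    cw_meas = cw_true + sigma * rng.normal(size=reps)
    kd = (c0_meas - cw_meas) / cw_meas       # Kd up to the V/m factor
    kd_true = (c0_true - cw_true) / cw_true
    return kd.std() / kd_true

errs = {f: kd_rel_error(f) for f in (0.1, 0.5, 0.9)}
print(errs)
```

At low sorption the numerator (C0 − Cw) is a small difference of noisy numbers; at high sorption the small Cw in the denominator amplifies its error; the middle is the sweet spot the abstract recommends targeting by adjusting the solution:sorbent ratio.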
Code of Federal Regulations, 2010 CFR
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Variances. 307.22 Section 307.22 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC... Federal, State and local law....
Code of Federal Regulations, 2013 CFR
2013-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2012 CFR
2012-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2014 CFR
2014-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Variance Components in Discrete Force Production Tasks
SKM, Varadhan; Zatsiorsky, Vladimir M.; Latash, Mark L.
2010-01-01
The study addresses the relationships between task parameters and two components of variance, “good” and “bad”, during multi-finger accurate force production. The variance components are defined in the space of commands to the fingers (finger modes) and refer to variance that does (“bad”) and does not (“good”) affect total force. Based on an earlier study of cyclic force production, we hypothesized that speeding-up an accurate force production task would be accompanied by a drop in the regression coefficient linking the “bad” variance and force rate such that variance of the total force remains largely unaffected. We also explored changes in parameters of anticipatory synergy adjustments with speeding-up the task. The subjects produced accurate ramps of total force over different times and in different directions (force-up and force-down) while pressing with the four fingers of the right hand on individual force sensors. The two variance components were quantified, and their normalized difference was used as an index of a total force stabilizing synergy. “Good” variance scaled linearly with force magnitude and did not depend on force rate. “Bad” variance scaled linearly with force rate within each task, and the scaling coefficient did not change across tasks with different ramp times. As a result, a drop in force ramp time was associated with an increase in total force variance, unlike the results of the study of cyclic tasks. The synergy index dropped 100-200 ms prior to the first visible signs of force change. The timing and magnitude of these anticipatory synergy adjustments did not depend on the ramp time. Analysis of the data within an earlier model has shown adjustments in the variance of a timing parameter, although these adjustments were not as pronounced as in the earlier study of cyclic force production. Overall, we observed qualitative differences between the discrete and cyclic force production tasks: Speeding-up the cyclic
Heterogeneity of variances for carcass traits by percentage Brahman inheritance.
Crews, D H; Franke, D E
1998-07-01
Heterogeneity of carcass trait variances due to level of Brahman inheritance was investigated using records from straightbred and crossbred steers produced from 1970 to 1988 (n = 1,530). Angus, Brahman, Charolais, and Hereford sires were mated to straightbred and crossbred cows to produce straightbred, F1, back-cross, three-breed cross, and two-, three-, and four-breed rotational crossbred steers in four non-overlapping generations. At weaning (mean age = 220 d), steers were randomly assigned within breed group directly to the feedlot for 200 d, or to a backgrounding and stocker phase before feeding. Stocker steers were fed from 70 to 100 d in generations 1 and 2 and from 60 to 120 d in generations 3 and 4. Carcass traits included hot carcass weight, subcutaneous fat thickness and longissimus muscle area at the 12-13th rib interface, carcass weight-adjusted longissimus muscle area, USDA yield grade, estimated total lean yield, marbling score, and Warner-Bratzler shear force. Steers were classified as either high Brahman (50 to 100% Brahman), moderate Brahman (25 to 49% Brahman), or low Brahman (0 to 24% Brahman) inheritance. Two types of animal models were fit with regard to level of Brahman inheritance. One model assumed similar variances between pairs of Brahman inheritance groups, and the second model assumed different variances between pairs of Brahman inheritance groups. Fixed sources of variation in both models included direct and maternal additive and nonadditive breed effects, year of birth, and slaughter age. Variances were estimated using derivative free REML procedures. Likelihood ratio tests were used to compare models. The model accounting for heterogeneous variances had a greater likelihood (P < .001) than the model assuming homogeneous variances for hot carcass weight, longissimus muscle area, weight-adjusted longissimus muscle area, total lean yield, and Warner-Bratzler shear force, indicating improved fit with percentage Brahman inheritance
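The homogeneous-versus-heterogeneous comparison reduces at its core to a likelihood-ratio test. A stripped-down sketch on simulated residuals (not the article's REML animal-model analysis): pooled variance versus group-specific variances for two groups.

```python
import numpy as np

rng = np.random.default_rng(4)
g1 = rng.normal(0.0, 2.0, 300)   # residuals from one inheritance group
g2 = rng.normal(0.0, 3.5, 300)   # residuals from another, larger variance

def mle_var(x):
    return np.mean(x ** 2)       # MLE variance for zero-mean residuals

def loglik(x, var):
    """Normal log-likelihood of zero-mean residuals at a given variance."""
    return -0.5 * x.size * np.log(2.0 * np.pi * var) - 0.5 * np.sum(x**2) / var

pooled = np.concatenate([g1, g2])
ll_hom = loglik(pooled, mle_var(pooled))                      # one variance
ll_het = loglik(g1, mle_var(g1)) + loglik(g2, mle_var(g2))    # two variances

lr = 2.0 * (ll_het - ll_hom)     # ~ chi-square, 1 df, under homogeneity
print(f"LR = {lr:.1f}; chi-square 5% critical value (1 df) = 3.84")
```

A likelihood ratio exceeding the critical value favors the heterogeneous-variance model, mirroring the article's finding of improved fit for several carcass traits.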
Heterogeneity of variances for carcass traits by percentage Brahman inheritance.
Crews, D H; Franke, D E
1998-07-01
Heterogeneity of carcass trait variances due to level of Brahman inheritance was investigated using records from straightbred and crossbred steers produced from 1970 to 1988 (n = 1,530). Angus, Brahman, Charolais, and Hereford sires were mated to straightbred and crossbred cows to produce straightbred, F1, back-cross, three-breed cross, and two-, three-, and four-breed rotational crossbred steers in four non-overlapping generations. At weaning (mean age = 220 d), steers were randomly assigned within breed group directly to the feedlot for 200 d, or to a backgrounding and stocker phase before feeding. Stocker steers were fed from 70 to 100 d in generations 1 and 2 and from 60 to 120 d in generations 3 and 4. Carcass traits included hot carcass weight, subcutaneous fat thickness and longissimus muscle area at the 12-13th rib interface, carcass weight-adjusted longissimus muscle area, USDA yield grade, estimated total lean yield, marbling score, and Warner-Bratzler shear force. Steers were classified as either high Brahman (50 to 100% Brahman), moderate Brahman (25 to 49% Brahman), or low Brahman (0 to 24% Brahman) inheritance. Two types of animal models were fit with regard to level of Brahman inheritance. One model assumed similar variances between pairs of Brahman inheritance groups, and the second model assumed different variances between pairs of Brahman inheritance groups. Fixed sources of variation in both models included direct and maternal additive and nonadditive breed effects, year of birth, and slaughter age. Variances were estimated using derivative-free REML procedures. Likelihood ratio tests were used to compare models. The model accounting for heterogeneous variances had a greater likelihood (P < .001) than the model assuming homogeneous variances for hot carcass weight, longissimus muscle area, weight-adjusted longissimus muscle area, total lean yield, and Warner-Bratzler shear force, indicating that variances for these traits were heterogeneous with respect to percentage Brahman inheritance.
Variational bayesian method of estimating variance components.
Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi
2016-07-01
We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and small population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian method and Gibbs sampling were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances from the variational Bayesian method were lower than those from the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.
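For readers comparing the two approaches, the Gibbs-sampling baseline can be sketched for a minimal one-way random-effects model (a deliberate simplification of the animal models discussed above; the data, priors, and true variance values below are illustrative assumptions, not the paper's):

```python
import math, random, statistics

random.seed(1)

# Simulate a one-way random-effects model: y_ij = mu + u_i + e_ij,
# u_i ~ N(0, s2u), e_ij ~ N(0, s2e).  True values are assumptions.
Q, N_PER, MU, S2U, S2E = 50, 10, 3.0, 2.0, 1.0
u_true = [random.gauss(0, math.sqrt(S2U)) for _ in range(Q)]
y = [[MU + u_true[i] + random.gauss(0, math.sqrt(S2E)) for _ in range(N_PER)]
     for i in range(Q)]

def inv_gamma(shape, rate):
    """Draw from InvGamma(shape, rate) via a reciprocal Gamma draw."""
    return 1.0 / random.gammavariate(shape, 1.0 / rate)

def gibbs(y, n_iter=2000, burn=500):
    Q, n = len(y), len(y[0])
    N = Q * n
    mu, s2u, s2e = 0.0, 1.0, 1.0
    u = [0.0] * Q
    draws = []
    for it in range(n_iter):
        # u_i | rest: normal, precision n/s2e + 1/s2u
        for i in range(Q):
            prec = n / s2e + 1.0 / s2u
            mean = (sum(y[i]) - n * mu) / s2e / prec
            u[i] = random.gauss(mean, math.sqrt(1.0 / prec))
        # mu | rest: normal around the mean of (y_ij - u_i)
        resid_mean = sum(sum(y[i]) - n * u[i] for i in range(Q)) / N
        mu = random.gauss(resid_mean, math.sqrt(s2e / N))
        # Variance components | rest: conjugate inverse-gamma updates
        # under weak InvGamma(0.001, 0.001) priors.
        s2u = inv_gamma(0.001 + Q / 2.0, 0.001 + sum(v * v for v in u) / 2.0)
        sse = sum((y[i][j] - mu - u[i]) ** 2 for i in range(Q) for j in range(n))
        s2e = inv_gamma(0.001 + N / 2.0, 0.001 + sse / 2.0)
        if it >= burn:
            draws.append((s2u, s2e))
    return draws

draws = gibbs(y)
s2u_hat = statistics.fmean(d[0] for d in draws)   # posterior mean, genetic-like
s2e_hat = statistics.fmean(d[1] for d in draws)   # posterior mean, residual
```

A variational treatment would replace the sampling loop with iterative updates of an approximating factorized posterior, trading the long Monte Carlo run for a fast deterministic fixed point, which is the speed difference the abstract reports.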
Analysis and application of minimum variance discrete time system identification
NASA Technical Reports Server (NTRS)
Kaufman, H.; Kotob, S.
1975-01-01
An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
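The flavor of such an on-line identifier can be sketched with a plain recursive least-squares update for a first-order system (a standard textbook scheme rather than the paper's exact minimum-variance formulation; the system and noise values are assumptions):

```python
import random

random.seed(0)

# Identify a, b on-line in y[k] = a*y[k-1] + b*u[k-1] + noise.
A_TRUE, B_TRUE, NOISE = 0.8, 0.5, 0.05

theta = [0.0, 0.0]                      # parameter estimate [a, b]
P = [[100.0, 0.0], [0.0, 100.0]]        # parameter error covariance

y_prev, u_prev = 0.0, 0.0
for k in range(2000):
    u = random.uniform(-1, 1)           # persistently exciting input
    y = A_TRUE * y_prev + B_TRUE * u_prev + random.gauss(0, NOISE)
    phi = [y_prev, u_prev]              # regressor vector
    # Gain K = P phi / (1 + phi' P phi)
    Pphi = [P[0][0]*phi[0] + P[0][1]*phi[1],
            P[1][0]*phi[0] + P[1][1]*phi[1]]
    denom = 1.0 + phi[0]*Pphi[0] + phi[1]*Pphi[1]
    K = [Pphi[0]/denom, Pphi[1]/denom]
    # Innovation-driven parameter update
    err = y - (phi[0]*theta[0] + phi[1]*theta[1])
    theta = [theta[0] + K[0]*err, theta[1] + K[1]*err]
    # Covariance update P <- P - K (phi' P)
    phiP = [phi[0]*P[0][0] + phi[1]*P[1][0],
            phi[0]*P[0][1] + phi[1]*P[1][1]]
    P = [[P[0][0] - K[0]*phiP[0], P[0][1] - K[0]*phiP[1]],
         [P[1][0] - K[1]*phiP[0], P[1][1] - K[1]*phiP[1]]]
    y_prev, u_prev = y, u
```

The shrinking covariance P plays the role of the identification-error covariance tracked in the paper; mean-square convergence shows up here as theta settling near the true (a, b).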
Functional analysis of variance for association studies.
Vsevolozhskaya, Olga A; Zaykin, Dmitri V; Greenwood, Mark C; Wei, Changshuai; Lu, Qing
2014-01-01
While progress has been made in identifying common genetic variants associated with human diseases, for most common complex diseases the identified genetic variants account for only a small proportion of heritability. Challenges remain in finding additional unknown genetic variants predisposing to complex diseases. With advances in next-generation sequencing technologies, sequencing studies have become commonplace in genetic research. The ongoing exome-sequencing and whole-genome-sequencing studies generate a massive amount of sequencing variants and allow researchers to comprehensively investigate their role in human diseases. The discovery of new disease-associated variants can be enhanced by utilizing powerful and computationally efficient statistical methods. In this paper, we propose a functional analysis of variance (FANOVA) method for testing an association of sequence variants in a genomic region with a qualitative trait. The FANOVA has a number of advantages: (1) it tests for a joint effect of gene variants, including both common and rare; (2) it fully utilizes linkage disequilibrium and genetic position information; and (3) it allows for either protective or risk-increasing causal variants. Through simulations, we show that FANOVA outperforms two popular methods, SKAT and a previously proposed method based on functional linear models (FLM), especially when the sample size of a study is small and/or sequence variants have low to moderate effects. We conduct an empirical study by applying the three methods (FANOVA, SKAT and FLM) to sequencing data from the Dallas Heart Study. While SKAT and FLM respectively detected ANGPTL4 and ANGPTL3 as associated with obesity, FANOVA was able to identify both genes as associated with obesity. PMID:25244256
A noise variance estimation approach for CT
NASA Astrophysics Data System (ADS)
Shen, Le; Jin, Xin; Xing, Yuxiang
2012-10-01
The Poisson-like noise model has been widely used for noise suppression and image reconstruction in low-dose computed tomography. Various noise estimation and suppression approaches have been developed and studied to enhance the image quality. Among them, the recently proposed generalized Anscombe transform (GAT) has been utilized to stabilize the variance of Poisson-Gaussian noise. In this paper, we present a variance estimation approach using the GAT. After the transform, the projection data are denoised conventionally under the assumption that the noise variance uniformly equals 1. The difference between the original and the denoised projections is treated as pure noise, and the global variance σ2 can be estimated from this residual. The final denoising step is then performed with the estimated σ2. The proposed approach is verified on a cone-beam CT system and shown to obtain a more accurate estimate of the actual noise parameter. We also examine an FBP algorithm with the two-step noise suppression in the projection domain using the estimated noise variance. Reconstruction results with simulated and practical projection data suggest that the presented approach could be effective in practical imaging applications.
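The variance-stabilization step can be sketched with the pure-Poisson Anscombe transform, the special case of the GAT with no Gaussian component (the intensities and sample sizes below are illustrative):

```python
import math, random

random.seed(2)

def poisson(lam):
    """Knuth's Poisson sampler (adequate for moderate lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= L:
            return k - 1

def anscombe(x):
    # Variance-stabilizing transform for Poisson counts: Var[f(X)] ~ 1.
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

# After the transform, the noise variance is ~1 regardless of the
# Poisson intensity -- which is what lets a denoiser assume unit
# variance, with any residual mismatch estimated from the data.
stab_var = {}
for lam in (5.0, 20.0, 50.0):
    vals = [anscombe(poisson(lam)) for _ in range(20000)]
    m = sum(vals) / len(vals)
    stab_var[lam] = sum((v - m) ** 2 for v in vals) / (len(vals) - 1)
```

In the paper's two-step scheme, the residual between the transformed data and its denoised version would then supply the global variance estimate σ2 before the final denoising pass.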
Wave propagation analysis using the variance matrix.
Sharma, Richa; Ivan, J Solomon; Narayanamurthy, C S
2014-10-01
The propagation of a coherent laser wave-field through a pseudo-random phase plate is studied using the variance matrix estimated from Shack-Hartmann wavefront sensor data. The uncertainty principle is used as a tool in discriminating the data obtained from the Shack-Hartmann wavefront sensor. Quantities of physical interest such as the twist parameter, and the symplectic eigenvalues, are estimated from the wavefront sensor measurements. A distance measure between two variance matrices is introduced and used to estimate the spatial asymmetry of a wave-field in the experiment. The estimated quantities are then used to compare a distorted wave-field with its undistorted counterpart. PMID:25401243
GR uniqueness and deformations
NASA Astrophysics Data System (ADS)
Krasnov, Kirill
2015-10-01
In the metric formulation gravitons are described with the parity symmetric S + 2 ⊗ S - 2 representation of Lorentz group. General Relativity is then the unique theory of interacting gravitons with second order field equations. We show that if a chiral S + 3 ⊗ S - representation is used instead, the uniqueness is lost, and there is an infinite-parametric family of theories of interacting gravitons with second order field equations. We use the language of graviton scattering amplitudes, and show how the uniqueness of GR is avoided using simple dimensional analysis. The resulting distinct from GR gravity theories are all parity asymmetric, but share the GR MHV amplitudes. They have new all same helicity graviton scattering amplitudes at every graviton order. The amplitudes with at least one graviton of opposite helicity continue to be determinable by the BCFW recursion.
Large-scale magnetic variances near the South Solar Pole
NASA Technical Reports Server (NTRS)
Jokipii, J. R.; Kota, J.; Smith, E.; Horbury, T.; Giacalone, J.
1995-01-01
We summarize recent Ulysses observations of the variances over large temporal scales in the interplanetary magnetic field components and their increase as Ulysses approached the South Solar Pole. A model of these fluctuations is shown to provide a very good fit to the observed amplitude and temporal variation of the fluctuations. In addition, the model predicts that the transport of cosmic rays in the heliosphere will be significantly altered by this level of fluctuations. In addition to altering the inward diffusion and drift access of cosmic rays over the solar poles, we find that the magnetic fluctuations also imply a large latitudinal diffusion, caused primarily by the associated field-line random walk.
Creativity and technical innovation: spatial ability's unique role.
Kell, Harrison J; Lubinski, David; Benbow, Camilla P; Steiger, James H
2013-09-01
In the late 1970s, 563 intellectually talented 13-year-olds (identified by the SAT as in the top 0.5% of ability) were assessed on spatial ability. More than 30 years later, the present study evaluated whether spatial ability provided incremental validity (beyond the SAT's mathematical and verbal reasoning subtests) for differentially predicting which of these individuals had patents and three classes of refereed publications. A two-step discriminant-function analysis revealed that the SAT subtests jointly accounted for 10.8% of the variance among these outcomes (p < .01); when spatial ability was added, an additional 7.6% was accounted for--a statistically significant increase (p < .01). The findings indicate that spatial ability has a unique role in the development of creativity, beyond the roles played by the abilities traditionally measured in educational selection, counseling, and industrial-organizational psychology. Spatial ability plays a key and unique role in structuring many important psychological phenomena and should be examined more broadly across the applied and basic psychological sciences.
Code of Federal Regulations, 2010 CFR
2010-01-01
... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Regression Calibration with Heteroscedastic Error Variance
Spiegelman, Donna; Logan, Roger; Grove, Douglas
2011-01-01
The problem of covariate measurement error with heteroscedastic measurement error variance is considered. Standard regression calibration assumes that the measurement error has a homoscedastic measurement error variance. An estimator is proposed to correct regression coefficients for covariate measurement error with heteroscedastic variance. Point and interval estimates are derived. Validation data containing the gold standard must be available. This estimator is a closed-form correction of the uncorrected primary regression coefficients, which may be of logistic or Cox proportional hazards model form, and is closely related to the version of regression calibration developed by Rosner et al. (1990). The primary regression model can include multiple covariates measured without error. The use of these estimators is illustrated in two data sets, one taken from occupational epidemiology (the ACE study) and one taken from nutritional epidemiology (the Nurses’ Health Study). In both cases, although there was evidence of moderate heteroscedasticity, there was little difference in estimation or inference using this new procedure compared to standard regression calibration. It is shown theoretically that unless the relative risk is large or measurement error severe, standard regression calibration approximations will typically be adequate, even with moderate heteroscedasticity in the measurement error model variance. In a detailed simulation study, standard regression calibration performed either as well as or better than the new estimator. When the disease is rare and the errors normally distributed, or when measurement error is moderate, standard regression calibration remains the method of choice. PMID:22848187
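The simplest linear, homoscedastic special case of this kind of correction can be sketched on simulated data (a Rosner-type slope correction; the measurement-error model and all numeric values below are assumptions for illustration):

```python
import random

random.seed(3)

# Measurement error model: W = X + U.  Validation data give the
# calibration slope lam from regressing X on W; the naive slope of
# Y on W is then corrected as beta_naive / lam.
N, BETA = 20000, 2.0
x = [random.gauss(0, 1) for _ in range(N)]
w = [xi + random.gauss(0, 0.5) for xi in x]            # error-prone surrogate
y = [1.0 + BETA * xi + random.gauss(0, 0.1) for xi in x]

def slope(u, v):
    """OLS slope from regressing v on u."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    suv = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    suu = sum((a - mu) ** 2 for a in u)
    return suv / suu

lam = slope(w, x)              # calibration slope from "validation" data
beta_naive = slope(w, y)       # attenuated estimate using the surrogate
beta_corrected = beta_naive / lam
```

With heteroscedastic error variance, lam would itself depend on covariates, which is the complication the proposed estimator addresses; the abstract's point is that in moderate cases this simple correction already performs well.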
Code of Federal Regulations, 2010 CFR
2010-04-01
... 18 Conservation of Power and Water Resources 2 2010-04-01 2010-04-01 false Variances. 1304.408 Section 1304.408 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY APPROVAL OF... whether a proposed structure or other regulated activity would adversely impact navigation, flood...
Multiple Comparison Procedures when Population Variances Differ.
ERIC Educational Resources Information Center
Olejnik, Stephen; Lee, JaeShin
A review of the literature on multiple comparison procedures suggests several alternative approaches for comparing means when population variances differ. These include: (1) the approach of P. A. Games and J. F. Howell (1976); (2) C. W. Dunnett's C confidence interval (1980); and (3) Dunnett's T3 solution (1980). These procedures control the…
Variance Anisotropy of Solar Wind fluctuations
NASA Astrophysics Data System (ADS)
Oughton, S.; Matthaeus, W. H.; Wan, M.; Osman, K.
2013-12-01
Solar wind observations at MHD scales indicate that the energy associated with velocity and magnetic field fluctuations transverse to the mean magnetic field is typically much larger than that associated with parallel fluctuations [e.g., 1]. This is often referred to as variance anisotropy. Various explanations for it have been suggested, including that the fluctuations are predominantly shear Alfven waves [1] and that turbulent dynamics leads to such states [e.g., 2]. Here we investigate the origin and strength of such variance anisotropies, using spectral method simulations of the compressible (polytropic) 3D MHD equations. We report on results from runs with initial conditions that are either (i) broadband turbulence or (ii) fluctuations polarized in the same sense as shear Alfven waves. The dependence of the variance anisotropy on the plasma beta and Mach number is examined [3], along with the timescale for any variance anisotropy to develop. Implications for solar wind fluctuations will be discussed. References: [1] Belcher, J. W. and Davis Jr., L. (1971), J. Geophys. Res., 76, 3534. [2] Matthaeus, W. H., Ghosh, S., Oughton, S. and Roberts, D. A. (1996), J. Geophys. Res., 101, 7619. [3] Smith, C. W., B. J. Vasquez and K. Hamilton (2006), J. Geophys. Res., 111, A09111.
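How a variance anisotropy ratio is computed from a field time series can be sketched on synthetic data (the anisotropy level and mean-field geometry below are assumptions chosen only for illustration):

```python
import random

random.seed(4)

# Synthetic magnetic-field series: mean field along z, with transverse
# fluctuations (x, y) twice the amplitude of the parallel ones, so each
# transverse component carries 4x the parallel variance.
n = 50000
bx = [random.gauss(0, 2.0) for _ in range(n)]
by = [random.gauss(0, 2.0) for _ in range(n)]
bz = [10.0 + random.gauss(0, 1.0) for _ in range(n)]

def variance(a):
    m = sum(a) / len(a)
    return sum((v - m) ** 2 for v in a) / (len(a) - 1)

# Mean field is along z by construction, so the parallel variance is
# var(bz) and the transverse variance is var(bx) + var(by).
ratio = (variance(bx) + variance(by)) / variance(bz)   # expect ~8
```

In real solar wind data the mean-field direction must first be estimated (e.g., over a sliding window) and the components projected onto field-aligned coordinates before forming this ratio.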
Code of Federal Regulations, 2010 CFR
2010-07-01
...) or 6(d) of the Williams-Steiger Occupational Safety and Health Act of 1970 (29 U.S.C. 655). The... under the Williams-Steiger Occupational Safety and Health Act of 1970, and any variance from §§ 1910.13... from the standard under both the Longshoremen's and Harbor Workers' Compensation Act and the...
7 CFR 205.290 - Temporary variances.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Temporary variances. 205.290 Section 205.290 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) ORGANIC FOODS PRODUCTION...
Number variance for arithmetic hyperbolic surfaces
NASA Astrophysics Data System (ADS)
Luo, W.; Sarnak, P.
1994-03-01
We prove that the number variance for the spectrum of an arithmetic surface is highly nonrigid in part of the universal range. In fact it is close to having a Poisson behavior. This fact was discovered numerically by Schmit, Bogomolny, Georgeot and Giannoni. It has its origin in the high degeneracy of the length spectrum, first observed by Selberg.
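The Poisson benchmark invoked here can be illustrated numerically: for uniformly random points of unit density, the variance of the number of points in a window of length L is approximately L (the density and window length below are arbitrary choices):

```python
import bisect, random

random.seed(5)

# N points uniform on [0, N): unit mean density.  Count points in many
# randomly placed windows of length L; for Poisson-like statistics the
# count variance should be ~L, in contrast to the much smaller values
# of a rigid spectrum.
N = 20000
pts = sorted(random.uniform(0, N) for _ in range(N))

def number_variance(L, trials=4000):
    counts = []
    for _ in range(trials):
        a = random.uniform(0, N - L)
        counts.append(bisect.bisect_left(pts, a + L) -
                      bisect.bisect_left(pts, a))
    m = sum(counts) / len(counts)
    return sum((c - m) ** 2 for c in counts) / (len(counts) - 1)

nv10 = number_variance(10.0)   # expect ~10 for Poisson behavior
```
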
Formative Use of Intuitive Analysis of Variance
ERIC Educational Resources Information Center
Trumpower, David L.
2013-01-01
Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, students' IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In…
Code of Federal Regulations, 2011 CFR
2011-04-01
...' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR SPECIAL RESPONSIBILITIES OF THE EMPLOYMENT SERVICE SYSTEM Housing for Agricultural Workers Purpose and Applicability § 654.402 Variances. (a... employment service complaint procedures set forth at §§ 658.421 (i) and (j), 658.422 and 658.423 of...
78 FR 14122 - Revocation of Permanent Variances
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-04
... OSHA's scaffolds standards for construction (77 FR 46948). Today's notice revoking the variances takes..., construction, and use of scaffolds (61 FR 46026). In the preamble to the final rule, OSHA stated that it was... for tank scaffolds under the general provisions of the final rule (see 61 FR 46033). In this...
ERIC Educational Resources Information Center
Goble, Don
2009-01-01
This article describes the many learning opportunities that broadcast technology students at Ladue Horton Watkins High School in St. Louis, Missouri, experience because of their unique access to technology and methods of learning. Through scaffolding, stepladder techniques, and trial by fire, students learn to produce multiple television programs,…
Genetic and environmental heterogeneity of residual variance of weight traits in Nellore beef cattle
2012-01-01
Background Many studies have provided evidence of the existence of genetic heterogeneity of environmental variance, suggesting that it could be exploited to improve robustness and uniformity of livestock by selection. However, little is known about the perspectives of such a selection strategy in beef cattle. Methods A two-step approach was applied to study the genetic heterogeneity of residual variance of weight gain from birth to weaning and long-yearling weight in a Nellore beef cattle population. First, an animal model was fitted to the data and second, the influence of additive and environmental effects on the residual variance of these traits was investigated with different models, in which the log squared estimated residuals for each phenotypic record were analyzed using the restricted maximum likelihood method. Monte Carlo simulation was performed to assess the reliability of variance component estimates from the second step and the accuracy of estimated breeding values for residual variation. Results The results suggest that both genetic and environmental factors have an effect on the residual variance of weight gain from birth to weaning and long-yearling weight in Nellore beef cattle and that uniformity of these traits could be improved by selecting for lower residual variance, when considering a large amount of information to predict genetic merit for this criterion. Simulations suggested that using the two-step approach would lead to biased estimates of variance components, such that more adequate methods are needed to study the genetic heterogeneity of residual variance in beef cattle. PMID:22672564
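The two-step idea can be sketched in its simplest form on simulated data (two groups in place of an animal model; the means and variances below are illustrative assumptions):

```python
import math, random

random.seed(6)

# Two groups with equal means but different residual variances.
n = 20000
g1 = [random.gauss(5.0, 1.0) for _ in range(n)]   # residual var = 1
g2 = [random.gauss(5.0, 2.0) for _ in range(n)]   # residual var = 4

def log_sq_resid_mean(xs):
    m = sum(xs) / len(xs)                 # step 1: fit the mean model
    # step 2 input: log squared residuals
    return sum(math.log((x - m) ** 2) for x in xs) / len(xs)

# E[log e^2] = log(sigma^2) + E[log chi^2_1]; the additive constant
# cancels in the difference, leaving an estimate of the log variance
# ratio between the groups.
log_ratio = log_sq_resid_mean(g2) - log_sq_resid_mean(g1)   # expect ~log 4
```

The paper's second step fits a full mixed model to such log squared residuals, which is where the simulation-detected bias of the two-step approach arises.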
42 CFR 456.521 - Conditions for granting variance requests.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time... submits concurrently— (1) A request for the variance that documents to his satisfaction that the facility is unable to meet the time requirements for which the variance is requested; and (2) A revised...
Variance Reduction Factor of Nuclear Data for Integral Neutronics Parameters
Chiba, G.; Tsuji, M.; Narabayashi, T.
2015-01-15
We propose a new quantity, a variance reduction factor, to identify nuclear data for which further improvements are required to reduce uncertainties of target integral neutronics parameters. Important energy ranges can also be identified with this variance reduction factor. Variance reduction factors are calculated for several integral neutronics parameters, and their usefulness is demonstrated.
Analysis of Variance of Multiply Imputed Data.
van Ginkel, Joost R; Kroonenberg, Pieter M
2014-01-01
As a procedure for handling missing data, multiple imputation consists of estimating the missing data multiple times to create several complete versions of an incomplete data set. All these data sets are analyzed by the same statistical procedure, and the results are pooled for interpretation. So far, no explicit rules for pooling F-tests of (repeated-measures) analysis of variance have been defined. In this paper we outline the appropriate procedure for pooling the results of analysis of variance for multiply imputed data sets. It involves both reformulation of the ANOVA model as a regression model using effect coding of the predictors and application of already existing combination rules for regression models. The proposed procedure is illustrated using three example data sets. The pooled results of these three examples provide plausible F- and p-values.
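The scalar combination rules that the proposed procedure builds on can be sketched as follows (the per-"imputation" analyses below are simulated stand-ins for completed data sets, each yielding an effect-coded group contrast; all numbers are illustrative):

```python
import random, statistics

random.seed(7)

# Pretend m completed (imputed) data sets each yield an estimate of a
# group contrast and its squared standard error.
def analyse(seed):
    rng = random.Random(seed)
    a = [rng.gauss(10.0, 2.0) for _ in range(60)]
    b = [rng.gauss(12.0, 2.0) for _ in range(60)]
    diff = statistics.fmean(b) - statistics.fmean(a)
    se2 = statistics.variance(a) / len(a) + statistics.variance(b) / len(b)
    return diff, se2

m = 5
results = [analyse(s) for s in range(m)]
q = [r[0] for r in results]          # per-imputation point estimates
u = [r[1] for r in results]          # per-imputation variances

qbar = statistics.fmean(q)           # pooled estimate
ubar = statistics.fmean(u)           # within-imputation variance
B = statistics.variance(q)           # between-imputation variance
T = ubar + (1 + 1 / m) * B           # total variance (Rubin's rules)
```

Reformulating ANOVA as an effect-coded regression turns each F-test into a joint test on such coefficients, so multivariate analogues of these rules can then be applied.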
Analysis of variance of microarray data.
Ayroles, Julien F; Gibson, Greg
2006-01-01
Analysis of variance (ANOVA) is an approach used to identify differentially expressed genes in complex experimental designs. It is based on testing for the significance of the magnitude of effect of two or more treatments taking into account the variance within and between treatment classes. ANOVA is a highly flexible analytical approach that allows investigators to simultaneously assess the contributions of multiple factors to gene expression variation, including technical (dye, batch) effects and biological (sex, genotype, drug, time) ones, as well as interactions between factors. This chapter provides an overview of the theory of linear mixture modeling and the sequence of steps involved in fitting gene-specific models and discusses essential features of experimental design. Commercial and open-source software for performing ANOVA is widely available.
PHD filtering with localised target number variance
NASA Astrophysics Data System (ADS)
Delande, Emmanuel; Houssineau, Jérémie; Clark, Daniel
2013-05-01
Mahler's Probability Hypothesis Density (PHD) filter, proposed in 2000, addresses the challenges of the multiple-target detection and tracking problem by propagating a mean density of the targets in any region of the state space. However, when retrieving some local evidence on target presence becomes a critical component of a larger process - e.g. for sensor management purposes - the local mean target number is insufficient unless some confidence in the estimation of the number of targets can be provided as well. In this paper, we propose a first implementation of a PHD filter that also includes an estimation of the localised variance in the target number following each update step; we then illustrate the advantage of the PHD filter + variance on simulated data from a multiple-target scenario.
Uses and abuses of analysis of variance.
Evans, S J
1983-01-01
Analysis of variance is a term often quoted to explain the analysis of data in experiments and clinical trials. The relevance of its methodology to clinical trials is shown and an explanation of the principles of the technique is given. The assumptions necessary are examined and the problems caused by their violation are discussed. The dangers of misuse are given with some suggestions for alternative approaches. PMID:6347228
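The variance partition at the heart of the technique can be shown on a tiny fixed example (values chosen only so the arithmetic is easy to check by hand):

```python
# One-way ANOVA by hand: partition total variation into between-group
# and within-group sums of squares and form the F ratio.
groups = [[1, 2, 3], [2, 3, 4], [4, 5, 6]]
all_vals = [x for g in groups for x in g]
grand = sum(all_vals) / len(all_vals)

ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

df_b = len(groups) - 1                 # between-groups df
df_w = len(all_vals) - len(groups)     # within-groups df
F = (ss_between / df_b) / (ss_within / df_w)   # 7.0 for these data
```

The assumptions the abstract warns about are visible here: the within-group term pools variances across groups (homogeneity of variance), and the F reference distribution presumes normal errors and independent observations.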
Applications of non-parametric statistics and analysis of variance on sample variances
NASA Technical Reports Server (NTRS)
Myers, R. H.
1981-01-01
Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made to survey what can be used, to recommend when each method is applicable, and to compare the methods, when possible, with the usual normal-theory procedures available for the Gaussian analog. It is important to point out the hypotheses being tested, the assumptions being made, and the limitations of the nonparametric procedures. The appropriateness of performing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface it would appear to be a reasonably sound procedure; however, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.
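One widely used nonparametric alternative to one-way ANOVA can be sketched as follows (the Kruskal-Wallis H statistic; the tie correction is omitted for brevity and the data are illustrative and tie-free):

```python
# Kruskal-Wallis H: a rank-based analogue of one-way ANOVA that drops
# the normality assumption, comparing mean ranks across groups.
groups = [[1.0, 2.0, 4.0], [3.0, 5.0, 7.0], [6.0, 8.0, 9.0]]
pooled = sorted(x for g in groups for x in g)
rank = {v: i + 1 for i, v in enumerate(pooled)}   # ranks 1..N (no ties)

N = len(pooled)
H = 12.0 / (N * (N + 1)) * sum(
    len(g) * (sum(rank[x] for x in g) / len(g) - (N + 1) / 2.0) ** 2
    for g in groups)
# Under the null, H is approximately chi-squared with (k - 1) df.
```

Because it operates on ranks, the test is unaffected by monotone transformations of the data, though it still assumes independent observations and similarly shaped group distributions.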
Hypothesis exploration with visualization of variance
2014-01-01
Background The Consortium for Neuropsychiatric Phenomics (CNP) at UCLA was an investigation into the biological bases of traits such as memory and response inhibition phenotypes—to explore whether they are linked to syndromes including ADHD, Bipolar disorder, and Schizophrenia. An aim of the consortium was in moving from traditional categorical approaches for psychiatric syndromes towards more quantitative approaches based on large-scale analysis of the space of human variation. It represented an application of phenomics—wide-scale, systematic study of phenotypes—to neuropsychiatry research. Results This paper reports on a system for exploration of hypotheses in data obtained from the LA2K, LA3C, and LA5C studies in CNP. ViVA is a system for exploratory data analysis using novel mathematical models and methods for visualization of variance. An example of these methods is called VISOVA, a combination of visualization and analysis of variance, with the flavor of exploration associated with ANOVA in biomedical hypothesis generation. It permits visual identification of phenotype profiles—patterns of values across phenotypes—that characterize groups. Visualization enables screening and refinement of hypotheses about variance structure of sets of phenotypes. Conclusions The ViVA system was designed for exploration of neuropsychiatric hypotheses by interdisciplinary teams. Automated visualization in ViVA supports ‘natural selection’ on a pool of hypotheses, and permits deeper understanding of the statistical architecture of the data. Large-scale perspective of this kind could lead to better neuropsychiatric diagnostics. PMID:25097666
Mixed emotions: Sensitivity to facial variance in a crowd of faces.
Haberman, Jason; Lee, Pegan; Whitney, David
2015-01-01
The visual system automatically represents summary information from crowds of faces, such as the average expression. This is a useful heuristic insofar as it provides critical information about the state of the world, not simply information about the state of one individual. However, the average alone is not sufficient for making decisions about how to respond to a crowd. The variance or heterogeneity of the crowd--the mixture of emotions--conveys information about the reliability of the average, essential for determining whether the average can be trusted. Despite its importance, the representation of variance within a crowd of faces has yet to be examined. This is addressed here in three experiments. In the first experiment, observers viewed a sample set of faces that varied in emotion, and then adjusted a subsequent set to match the variance of the sample set. To isolate variance as the summary statistic of interest, the average emotion of both sets was random. Results suggested that observers had information regarding crowd variance. The second experiment verified that this was indeed a uniquely high-level phenomenon, as observers were unable to derive the variance of an inverted set of faces as precisely as an upright set of faces. The third experiment replicated and extended the first two experiments using method-of-constant-stimuli. Together, these results show that the visual system is sensitive to emergent information about the emotional heterogeneity, or ambivalence, in crowds of faces. PMID:26676106
Irreversible Langevin samplers and variance reduction: a large deviations approach
NASA Astrophysics Data System (ADS)
Rey-Bellet, Luc; Spiliopoulos, Konstantinos
2015-07-01
In order to sample from a given target distribution (often of Gibbs type), the Markov chain Monte Carlo method consists of constructing an ergodic Markov process whose invariant measure is the target distribution. By sampling the Markov process one can then compute, approximately, expectations of observables with respect to the target distribution. Often the Markov processes used in practice are time-reversible (i.e. they satisfy detailed balance), but our main goal here is to assess and quantify how the addition of a non-reversible part to the process can be used to improve the sampling properties. We focus on the diffusion setting (overdamped Langevin equations) where the drift consists of a gradient vector field as well as another drift which breaks the reversibility of the process but is chosen to preserve the Gibbs measure. In this paper we use the large deviation rate function for the empirical measure as a tool to analyze the speed of convergence to the invariant measure. We show that the addition of an irreversible drift leads to a larger rate function and strictly improves the speed of convergence of the ergodic average for (generic smooth) observables. We also deduce from this result that the asymptotic variance decreases under the addition of the irreversible drift, and we give an explicit characterization of the observables whose variance is not reduced, in terms of a nonlinear Poisson equation. Our theoretical results are illustrated and supplemented by numerical simulations.
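The construction can be sketched concretely for a 2-D standard Gaussian target, U(x) = |x|^2/2, with an added divergence-free drift c*J*grad U (J antisymmetric) that breaks reversibility while leaving the Gibbs measure invariant. All names, the step size, and the choice of J are illustrative assumptions:

```python
import numpy as np

def irreversible_langevin(c, n_steps=20000, dt=0.01, seed=0):
    """Euler-Maruyama discretization of
        dX = (-grad U(X) + c * J @ grad U(X)) dt + sqrt(2) dW,
    with U(x) = |x|^2 / 2 (standard 2-D Gaussian target) and J an
    antisymmetric rotation matrix.  For any c the Gaussian remains
    (approximately, up to discretization error) the invariant law;
    c != 0 makes the process non-reversible.
    """
    rng = np.random.default_rng(seed)
    J = np.array([[0.0, -1.0], [1.0, 0.0]])      # antisymmetric part
    x = np.zeros(2)
    samples = np.empty((n_steps, 2))
    for i in range(n_steps):
        grad = x                                  # grad U for U = |x|^2/2
        drift = -grad + c * (J @ grad)            # reversible + irreversible drift
        x = x + drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal(2)
        samples[i] = x
    return samples
```

Comparing the spread of ergodic averages across many seeds for c = 0 versus c > 0 is one way to illustrate the paper's variance-reduction result numerically.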
Abel, David L.
2011-01-01
Is life physicochemically unique? No. Is life unique? Yes. Life manifests innumerable formalisms that cannot be generated or explained by physicodynamics alone. Life pursues thousands of biofunctional goals, not the least of which is staying alive. Neither physicodynamics, nor evolution, pursue goals. Life is largely directed by linear digital programming and by the Prescriptive Information (PI) instantiated particularly into physicodynamically indeterminate nucleotide sequencing. Epigenomic controls only compound the sophistication of these formalisms. Life employs representationalism through the use of symbol systems. Life manifests autonomy, homeostasis far from equilibrium in the harshest of environments, positive and negative feedback mechanisms, prevention and correction of its own errors, and organization of its components into Sustained Functional Systems (SFS). Chance and necessity—heat agitation and the cause-and-effect determinism of nature’s orderliness—cannot spawn formalisms such as mathematics, language, symbol systems, coding, decoding, logic, organization (not to be confused with mere self-ordering), integration of circuits, computational success, and the pursuit of functionality. All of these characteristics of life are formal, not physical. PMID:25382119
Estimation of Noise-Free Variance to Measure Heterogeneity
Winkler, Tilo; Melo, Marcos F. Vidal; Degani-Costa, Luiza H.; Harris, R. Scott; Correia, John A.; Musch, Guido; Venegas, Jose G.
2015-01-01
Variance is a statistical parameter used to characterize heterogeneity or variability in data sets. However, measurements commonly include noise, as random errors superimposed on the actual value, which may substantially increase the variance compared to a noise-free data set. Our aim was to develop and validate a method to estimate noise-free spatial heterogeneity of pulmonary perfusion using dynamic positron emission tomography (PET) scans. On theoretical grounds, we demonstrate a linear relationship between the total variance of a data set derived from averages of n multiple measurements, and the reciprocal of n. Using multiple measurements with varying n yields estimates of the linear relationship including the noise-free variance as the constant parameter. In PET images, n is proportional to the number of registered decay events, and the variance of the image is typically normalized by the square of its mean value, yielding a squared coefficient of variation (CV²). The method was evaluated with a Jaszczak phantom as reference spatial heterogeneity (CVr²) for comparison with our estimate of noise-free or 'true' heterogeneity (CVt²). We found that CVt² was only 5.4% higher than CVr². Additional evaluations were conducted on 38 PET scans of pulmonary perfusion using 13NN-saline injection. The mean CVt² was 0.10 (range: 0.03–0.30), while the mean CV² including noise was 0.24 (range: 0.10–0.59). CVt² was on average 41.5% of the CV² measured including noise (range: 17.8–71.2%). The reproducibility of CVt² was evaluated using three repeated PET scans from five subjects. Individual CVt² values were within 16% of each subject's mean, and paired t-tests revealed no difference among the results from the three consecutive PET scans. In conclusion, our method provides reliable noise-free estimates of CVt² in PET scans and may be useful for similar statistical problems in experimental data. PMID:25906374
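The key identity, that the total variance is linear in 1/n with the noise-free variance as the intercept, reduces the estimate to a straight-line fit. A minimal sketch (function name and the synthetic check are illustrative):

```python
import numpy as np

def fit_noise_free_variance(ns, variances):
    """Least-squares fit of observed total variance against 1/n.

    The identity used in the abstract: Var_total(n) = Var_noise_free + k/n,
    where n is the number of averaged measurements.  The intercept of a
    straight-line fit of variance against 1/n therefore estimates the
    noise-free variance.
    """
    slope, intercept = np.polyfit(1.0 / np.asarray(ns, float),
                                  np.asarray(variances, float), 1)
    return intercept

# Synthetic check: a field with true variance 4.0, where averaging n
# unit-variance noisy measurements adds 1/n to the observed variance.
ns = [1, 2, 4, 8, 16]
observed = [4.0 + 1.0 / n for n in ns]
```

On these exactly linear synthetic points the fitted intercept recovers the noise-free variance 4.0.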
FMRI group analysis combining effect estimates and their variances
Chen, Gang; Saad, Ziad S.; Nath, Audrey R.; Beauchamp, Michael S.; Cox, Robert W.
2012-01-01
Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an approach that does not make such strong assumptions, and present a computationally efficient frequentist approach to FMRI group analysis, which we term mixed-effects multilevel analysis (MEMA), that incorporates both the variability across subjects and the precision estimate of each effect of interest from individual subject analyses. On average, the more accurate tests result in higher statistical power, especially when conventional variance assumptions do not hold, or in the presence of outliers. In addition, various heterogeneity measures are available with MEMA that may assist the investigator in further improving the modeling. Our method allows group effect t-tests and comparisons among conditions and among groups. In addition, it has the capability to incorporate subject-specific covariates such as age, IQ, or behavioral data. Simulations were performed to illustrate power comparisons and the capability of controlling type I errors among various significance testing methods, and the results indicated that the testing statistic we adopted struck a good balance between power gain and type I error control. Our approach is instantiated in an open-source, freely distributed program that may be used on any dataset stored in the Neuroimaging Informatics Technology Initiative (NIfTI) format. To date, the main impediment to more accurate testing that incorporates both within- and cross-subject variability has been the high computational cost. Our efficient implementation makes this approach practical.
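The core idea, weighting each subject's effect estimate by the inverse of its total variance (within-subject plus cross-subject), can be sketched compactly. MEMA itself estimates the cross-subject variance by iteration; here it is passed in, and all names are illustrative:

```python
import numpy as np

def weighted_group_effect(betas, within_var, tau2):
    """Precision-weighted group effect in the spirit of MEMA.

    Each subject's estimate beta_i is weighted by
    1 / (within-subject variance_i + cross-subject variance tau2),
    so subjects with noisy estimates are automatically down-weighted,
    unlike in a conventional one-sample t test on the betas alone.
    Returns the weighted mean effect and an approximate z statistic.
    """
    b = np.asarray(betas, float)
    w = 1.0 / (np.asarray(within_var, float) + tau2)
    mu = (w * b).sum() / w.sum()
    z = mu * np.sqrt(w.sum())
    return mu, z
```

With equal within-subject variances this reduces to the ordinary sample mean; with unequal variances an outlying but imprecise subject is pulled toward irrelevance rather than dominating the group estimate.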
Visual SLAM Using Variance Grid Maps
NASA Technical Reports Server (NTRS)
Howard, Andrew B.; Marks, Tim K.
2011-01-01
An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
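The elevation-variance-map representation can be sketched as per-cell online variance tracking using Welford's algorithm; the class name and indexing scheme are illustrative, not taken from Gamma-SLAM:

```python
import numpy as np

class VarianceGridMap:
    """Per-cell running mean and variance of terrain elevation samples
    (Welford's online algorithm), sketching the 'elevation variance map'
    idea: each Cartesian grid cell stores the variability of elevations
    observed in it, rather than a single elevation or occupancy value."""

    def __init__(self, shape):
        self.count = np.zeros(shape)
        self.mean = np.zeros(shape)
        self.m2 = np.zeros(shape)     # sum of squared deviations

    def add(self, i, j, z):
        """Fold one elevation observation z into cell (i, j)."""
        self.count[i, j] += 1
        delta = z - self.mean[i, j]
        self.mean[i, j] += delta / self.count[i, j]
        self.m2[i, j] += delta * (z - self.mean[i, j])

    def variance(self, i, j):
        n = self.count[i, j]
        return self.m2[i, j] / (n - 1) if n > 1 else 0.0
```

Rough terrain yields cells with high elevation variance, which is the texture such a map exploits for matching observations against particle hypotheses.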
Clarke, Peter; Varghese, Philip; Goldstein, David
2014-12-09
We extend a variance reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. The method is verified through comparisons to Direct Simulation Monte Carlo computations and the computational workload per time step is investigated for the variance reduced method.
An Empirical Temperature Variance Source Model in Heated Jets
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Bridges, James
2012-01-01
An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is subsequently written using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.
The Column Density Variance-M_s Relationship
Burkhart, Blakesley; Lazarian, A.
2012-08-10
Although there is a wealth of column density tracers for both the molecular and diffuse interstellar medium, there are few observational studies investigating the relationship between the density variance (σ²) and the sonic Mach number (M_s). This is in part due to the fact that the σ²-M_s relationship is derived, via MHD simulations, for the three-dimensional (3D) density variance only, which is not a direct observable. We investigate the utility of a 2D column density σ²_{Σ/Σ0}-M_s relationship using solenoidally driven isothermal MHD simulations and find that the best fit follows closely the form of the 3D density σ²_{ρ/ρ0}-M_s trend but includes a scaling parameter A such that σ²_{ln(Σ/Σ0)} = A ln(1 + b²M_s²), where A = 0.11 and b = 1/3. This relation is consistent with the observational data reported for the Taurus and IC 5146 molecular clouds with b = 0.5 and A = 0.16, and b = 0.5 and A = 0.12, respectively. These results open up the possibility of using the 2D column density values of σ² for investigations of the relation between the sonic Mach number and the probability distribution function (PDF) variance in addition to existing PDF sonic Mach number relations.
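The fitted relation is straightforward to evaluate numerically. A sketch with the simulation-fit parameters from the abstract as defaults (the function name is illustrative):

```python
import numpy as np

def log_column_density_variance(mach_s, A=0.11, b=1.0 / 3.0):
    """Variance of ln(Sigma/Sigma_0) predicted by the fitted 2D
    relation sigma^2 = A * ln(1 + b^2 * Ms^2), with A = 0.11 and
    b = 1/3 taken from the solenoidally driven simulations described
    in the abstract (observational fits quoted there use b = 0.5
    with A = 0.16 or 0.12)."""
    m = np.asarray(mach_s, float)
    return A * np.log(1.0 + b ** 2 * m ** 2)
```

Inverting the same expression, M_s = sqrt(exp(σ²/A) - 1) / b, is how an observed column density PDF variance yields a sonic Mach number estimate.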
Calculating bone-lead measurement variance.
Todd, A C
2000-01-01
The technique of (109)Cd-based X-ray fluorescence (XRF) measurements of lead in bone is well established. A paper by some XRF researchers [Gordon CL, et al. The Reproducibility of (109)Cd-based X-ray Fluorescence Measurements of Bone Lead. Environ Health Perspect 102:690-694 (1994)] presented the currently practiced method for calculating the variance of an in vivo measurement once a calibration line has been established. This paper corrects typographical errors in the method published by those authors; presents a crude estimate of the measurement error that can be acquired without computational peak fitting programs; and draws attention to the measurement error attributable to covariance, an important feature in the construct of the currently accepted method that is flawed under certain circumstances. PMID:10811562
42 CFR 456.522 - Content of request for variance.
Code of Federal Regulations, 2010 CFR
2010-10-01
... travel time between the remote facility and each facility listed in paragraph (e) of this section; (f..., and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time... perform UR within the time requirements for which the variance is requested and its good faith efforts...
Dynamics of mean-variance-skewness of cumulative crop yield impact temporal yield variance
Technology Transfer Automated Retrieval System (TEKTRAN)
Production risk associated with cropping systems influences farmers’ decisions to adopt a new management practice or a production system. Cumulative yield (CY), temporal yield variance (TYV) and coefficient of variation (CV) were used to assess the risk associated with adopting combinations of new m...
The Parabolic Variance (PVAR): A Wavelet Variance Based on the Least-Square Fit.
Vernotte, Francois; Lenczner, Michel; Bourgeois, Pierre-Yves; Rubiola, Enrico
2016-04-01
This paper introduces the parabolic variance (PVAR), a wavelet variance similar to the Allan variance (AVAR), based on the linear regression (LR) of phase data. The companion article arXiv:1506.05009 [physics.ins-det] details the Ω frequency counter, which implements the LR estimate. The PVAR combines the advantages of AVAR and modified AVAR (MVAR). PVAR is good for long-term analysis because the wavelet spans over 2τ, the same as the AVAR wavelet, and good for short-term analysis because the response to white and flicker PM is 1/τ(3) and 1/τ(2), the same as the MVAR. After setting the theoretical framework, we study the degrees of freedom and the confidence interval for the most common noise types. Then, we focus on the detection of a weak noise process at the transition-or corner-where a faster process rolls off. This new perspective raises the question of which variance detects the weak process with the shortest data record. Our simulations show that PVAR is a fortunate tradeoff. PVAR is superior to MVAR in all cases, exhibits the best ability to divide between fast noise phenomena (up to flicker FM), and is almost as good as AVAR for the detection of random walk and drift. PMID:26571523
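PVAR is defined against the familiar Allan variance baseline. As a point of reference, here is a standard overlapping AVAR estimate from phase data; PVAR itself replaces the two-point phase differences with least-squares linear fits over each window, which this sketch does not implement:

```python
import numpy as np

def overlapping_allan_variance(x, tau0, m):
    """Overlapping Allan variance (AVAR) from phase samples x taken
    every tau0 seconds, at averaging time tau = m * tau0:
        AVAR(tau) = < (x[i+2m] - 2 x[i+m] + x[i])^2 > / (2 tau^2).
    A constant frequency offset (linear phase) gives AVAR = 0, while
    a linear frequency drift d gives AVAR = d^2 tau^2 / 2."""
    x = np.asarray(x, float)
    tau = m * tau0
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]   # second phase differences
    return np.mean(d2 ** 2) / (2.0 * tau ** 2)
```

The drift response d²τ²/2 is the "random walk and drift" regime where the abstract reports PVAR performing almost as well as AVAR.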
Recognition by variance: learning rules for spatiotemporal patterns.
Barak, Omri; Tsodyks, Misha
2006-10-01
Recognizing specific spatiotemporal patterns of activity, which take place at timescales much larger than the synaptic transmission and membrane time constants, is a demand from the nervous system exemplified, for instance, by auditory processing. We consider the total synaptic input that a single readout neuron receives on presentation of spatiotemporal spiking input patterns. Relying on the monotonic relation between the mean and the variance of a neuron's input current and its spiking output, we derive learning rules that increase the variance of the input current evoked by learned patterns relative to that obtained from random background patterns. We demonstrate that the model can successfully recognize a large number of patterns and exhibits a slow deterioration in performance with increasing number of learned patterns. In addition, robustness to time warping of the input patterns is revealed to be an emergent property of the model. Using a leaky integrate-and-fire realization of the readout neuron, we demonstrate that the above results also apply when considering spiking output. PMID:16907629
Argentine Population Genetic Structure: Large Variance in Amerindian Contribution
Seldin, Michael F.; Tian, Chao; Shigeta, Russell; Scherbarth, Hugo R.; Silva, Gabriel; Belmont, John W.; Kittles, Rick; Gamron, Susana; Allevi, Alberto; Palatnik, Simon A.; Alvarellos, Alejandro; Paira, Sergio; Caprarulo, Cesar; Guillerón, Carolina; Catoggio, Luis J.; Prigione, Cristina; Berbotto, Guillermo A.; García, Mercedes A.; Perandones, Carlos E.; Pons-Estel, Bernardo A.; Alarcon-Riquelme, Marta E.
2011-01-01
Argentine population genetic structure was examined using a set of 78 ancestry informative markers (AIMs) to assess the contributions of European, Amerindian, and African ancestry in 94 individual members of this population. Using the Bayesian clustering algorithm STRUCTURE, the mean European contribution was 78%, the Amerindian contribution was 19.4%, and the African contribution was 2.5%. Similar results were found using weighted least mean square method: European, 80.2%; Amerindian, 18.1%; and African, 1.7%. Consistent with previous studies, the current results showed very few individuals (four of 94) with greater than 10% African admixture. Notably, when individual admixture was examined, the Amerindian and European admixture showed a very large variance and individual Amerindian contribution ranged from 1.5 to 84.5% in the 94 individual Argentine subjects. These results indicate that admixture must be considered when clinical epidemiology or case control genetic analyses are studied in this population. Moreover, the current study provides a set of informative SNPs that can be used to ascertain or control for this potentially hidden stratification. In addition, the large variance in admixture proportions in individual Argentine subjects shown by this study suggests that this population is appropriate for future admixture mapping studies. PMID:17177183
Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†
Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia
2015-01-01
Meta‐analyses are typically used to estimate the overall/mean of an outcome of interest. However, inference about between‐study variability, which is typically modelled using a between‐study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between‐study variance, has been long challenged. Our aim is to identify known methods for estimation of the between‐study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between‐study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and for continuous data the restricted maximum likelihood estimator are better alternatives to estimate the between‐study variance. Based on the scenarios and results presented in the published studies, we recommend the Q‐profile method and the alternative approach based on a ‘generalised Cochran between‐study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence‐based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
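Two of the surveyed estimators can be sketched compactly: the default DerSimonian-Laird method-of-moments estimator and the recommended Paule-Mandel estimator, which chooses tau² so that the generalised Q statistic equals its expectation k - 1. The bisection solver here is an illustrative implementation choice:

```python
import numpy as np

def dersimonian_laird(y, v):
    """Method-of-moments (DerSimonian-Laird) between-study variance
    from study effects y and within-study variances v."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    mu = (w * y).sum() / w.sum()
    q = (w * (y - mu) ** 2).sum()
    k = len(y)
    return max(0.0, (q - (k - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))

def paule_mandel(y, v, tol=1e-10, max_iter=200):
    """Paule-Mandel estimator: solve Q_gen(tau2) = k - 1, where
    Q_gen uses weights 1/(v_i + tau2), by simple bisection."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)

    def gen_q(t2):
        w = 1.0 / (v + t2)
        mu = (w * y).sum() / w.sum()
        return (w * (y - mu) ** 2).sum()

    if gen_q(0.0) <= k - 1:          # no excess heterogeneity
        return 0.0
    lo, hi = 0.0, 1.0
    while gen_q(hi) > k - 1:         # bracket the root (Q_gen decreases in tau2)
        hi *= 2.0
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if gen_q(mid) > k - 1:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

With equal within-study variances the two estimators coincide; they diverge, sometimes substantially, when study precisions are unbalanced, which is one reason the comparative simulations matter.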
Bottleneck effect on genetic variance. A theoretical investigation of the role of dominance.
Wang, J; Caballero, A; Keightley, P D; Hill, W G
1998-01-01
The phenomenon that the genetic variance of fitness components increases following a bottleneck or inbreeding is supported by a growing number of experiments and is explained theoretically by either dominance or epistasis. In this article, diffusion approximations under the infinite sites model are used to quantify the effect of dominance, using data on viability in Drosophila melanogaster. The model is based on mutation parameters from mutation accumulation experiments involving balancer chromosomes (set I) or inbred lines (set II). In essence, set I assumes many mutations of small effect, whereas set II assumes fewer mutations of large effect. Compared to empirical estimates from large outbred populations, set I predicts reasonable genetic variances but too low mean viability. In contrast, set II predicts a reasonable mean viability but a low genetic variance. Both sets of parameters predict the changes in mean viability (depression), additive variance, between-line variance and heritability following bottlenecks generally compatible with empirical results, and these changes are mainly caused by lethals and deleterious mutants of large effect. This article suggests that dominance is the main cause for increased genetic variances for fitness components and fitness-related traits after bottlenecks observed in various experiments. PMID:9725859
Estimating discharge measurement uncertainty using the interpolated variance estimator
Cohn, T.; Kiang, J.; Mason, R.
2012-01-01
Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
Individualized Additional Instruction for Calculus
ERIC Educational Resources Information Center
Takata, Ken
2010-01-01
College students enrolling in the calculus sequence have a wide variance in their preparation and abilities, yet they are usually taught from the same lecture. We describe another pedagogical model of Individualized Additional Instruction (IAI) that assesses each student frequently and prescribes further instruction and homework based on the…
Cyclostationary analysis with logarithmic variance stabilisation
NASA Astrophysics Data System (ADS)
Borghesani, Pietro; Shahriar, Md Rifat
2016-03-01
Second order cyclostationary (CS2) components in vibration or acoustic emission signals are typical symptoms of a wide variety of faults in rotating and alternating mechanical systems. The square envelope spectrum (SES), obtained via Hilbert transform of the original signal, is at the basis of the most common indicators used for detection of CS2 components. It has been shown that the SES is equivalent to an autocorrelation of the signal's discrete Fourier transform, and that CS2 components are a cause of high correlations in the frequency domain of the signal, thus resulting in peaks in the SES. Statistical tests have been proposed to determine if peaks in the SES are likely to belong to a normal variability in the signal or if they are proper symptoms of CS2 components. Despite the need for automated fault recognition and the theoretical soundness of these tests, this approach to machine diagnostics has been mostly neglected in industrial applications. In fact, in a series of experimental applications, even with proper pre-whitening steps, it has been found that healthy machines might produce high spectral correlations and therefore result in a highly biased SES distribution which might cause a series of false positives. In this paper a new envelope spectrum is defined, with the theoretical intent of rendering the hypothesis test variance-free. This newly proposed indicator will prove unbiased in case of multiple CS2 sources of spectral correlation, thus reducing the risk of false alarms.
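The SES computation the abstract builds on can be sketched with an FFT-based analytic signal (Hilbert transform). This shows only the baseline SES, not the paper's variance-stabilised indicator or any statistical thresholding; names are illustrative:

```python
import numpy as np

def squared_envelope_spectrum(x, fs):
    """Squared envelope spectrum sketch: form the analytic signal via
    a one-sided FFT (discrete Hilbert transform), square its envelope,
    remove the DC offset, and take the magnitude spectrum.  CS2
    components in the raw signal appear as peaks at their cyclic
    frequency."""
    x = np.asarray(x, float)
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)                    # one-sided spectrum weights
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)
    env2 = np.abs(analytic) ** 2       # squared envelope
    env2 = env2 - env2.mean()          # drop the DC component
    ses = np.abs(np.fft.rfft(env2)) / n
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    return freqs, ses
```

For an amplitude-modulated carrier, the dominant SES peak sits at the modulation (cyclic) frequency, with a smaller harmonic at twice that frequency.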
Correcting an analysis of variance for clustering.
Hedges, Larry V; Rhoads, Christopher H
2011-02-01
A great deal of educational and social data arises from cluster sampling designs where clusters involve schools, classrooms, or communities. A mistake that is sometimes encountered in the analysis of such data is to ignore the effect of clustering and analyse the data as if it were based on a simple random sample. This typically leads to an overstatement of the precision of results and too liberal conclusions about precision and statistical significance of mean differences. This paper gives simple corrections to the test statistics that would be computed in an analysis of variance if clustering were (incorrectly) ignored. The corrections are multiplicative factors depending on the total sample size, the cluster size, and the intraclass correlation structure. For example, the corrected F statistic has Fisher's F distribution with reduced degrees of freedom. The corrected statistic reduces to the F statistic computed by ignoring clustering when the intraclass correlations are zero. It reduces to the F statistic computed using cluster means when the intraclass correlations are unity, and it is in between otherwise. A similar adjustment to the usual statistic for testing a linear contrast among group means is described.
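The flavor of such a multiplicative correction can be sketched with the classic design-effect adjustment for a two-group t statistic. This is the textbook adjustment under equal cluster sizes, not the paper's exact corrected-F formula or its adjusted degrees of freedom:

```python
def cluster_corrected_t(t_naive, m, rho):
    """Correct a two-group t statistic computed while (incorrectly)
    ignoring clustering: divide by the square root of the design
    effect 1 + (m - 1) * rho, where m is the common cluster size and
    rho the intraclass correlation.  With rho = 0 the statistic is
    unchanged; with rho = 1 it matches an analysis of cluster means."""
    deff = 1.0 + (m - 1) * rho
    return t_naive / deff ** 0.5
```

Even modest intraclass correlations shrink the statistic noticeably once clusters are large, which is why ignoring clustering yields too-liberal conclusions.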
Event Segmentation Ability Uniquely Predicts Event Memory
Sargent, Jesse Q.; Zacks, Jeffrey M.; Hambrick, David Z.; Zacks, Rose T.; Kurby, Christopher A.; Bailey, Heather R.; Eisenberg, Michelle L.; Beck, Taylor M.
2013-01-01
Memory for everyday events plays a central role in tasks of daily living, autobiographical memory, and planning. Event memory depends in part on segmenting ongoing activity into meaningful units. This study examined the relationship between event segmentation and memory in a lifespan sample to answer the following question: Is the ability to segment activity into meaningful events a unique predictor of subsequent memory, or is the relationship between event perception and memory accounted for by general cognitive abilities? Two hundred and eight adults ranging from 20 to 79 years old segmented movies of everyday events and attempted to remember the events afterwards. They also completed psychometric ability tests and tests measuring script knowledge for everyday events. Event segmentation and script knowledge both explained unique variance in event memory above and beyond the psychometric measures, and did so as strongly in older as in younger adults. These results suggest that event segmentation is a basic cognitive mechanism, important for memory across the lifespan. PMID:23942350
Hill, W G; Mäki-Tanila, A
2015-04-01
Linkage disequilibrium (LD) influences the genetic variation in a quantitative trait contributed by two or more loci, with positive LD increasing the variance. The magnitude of LD also affects the relative magnitude of dominance and epistatic variation. We quantify the extent of the non-additive variance expected within populations, deriving analytical expressions for simple models and using numerical simulation in finite populations more generally. As LD generates non-independence among loci, a simple partition into additive, dominance and epistatic components is not possible, so we merely distinguish between additive and non-additive components based on comparing covariances among close relatives, such as full sibs, half sibs and offspring-parent. As tight linkage is needed to yield substantial LD in outbred populations, we ignore recombination in the generation used to estimate components, which is analogous to a multi-allelic model. The expected magnitude of the non-additive variance is generally increased, but not greatly, by the LD in outbred populations. Thus, as found in previous studies for unlinked loci, independent of the type and strength of gene interaction, the epistatic variance contributes little to the total.
The genetic and environmental roots of variance in negativity toward foreign nationals.
Kandler, Christian; Lewis, Gary J; Feldhaus, Lea Henrike; Riemann, Rainer
2015-03-01
This study quantified genetic and environmental roots of variance in prejudice and discriminatory intent toward foreign nationals and examined potential mediators of these genetic influences: right-wing authoritarianism (RWA), social dominance orientation (SDO), and narrow-sense xenophobia (NSX). In line with the dual process motivational (DPM) model, we predicted that the two basic attitudinal and motivational orientations-RWA and SDO-would account for variance in out-group prejudice and discrimination. In line with other theories, we expected that NSX as an affective component would explain additional variance in out-group prejudice and discriminatory intent. Data from 1,397 individuals (incl. twins as well as their spouses) were analyzed. Univariate analyses of twins' and spouses' data yielded genetic (incl. contributions of assortative mating) and multiple environmental sources (i.e., social homogamy, spouse-specific, and individual-specific effects) of variance in negativity toward strangers. Multivariate analyses suggested an extension to the DPM model by including NSX in addition to RWA and SDO as a predictor of prejudice and discrimination. RWA and NSX primarily mediated the genetic influences on the variance in prejudice and discriminatory intent toward foreign nationals. In sum, the findings provide the basis of a behavioral genetic framework integrating different scientific disciplines for the study of negativity toward out-groups.
Modularity, comparative cognition and human uniqueness
Shettleworth, Sara J.
2012-01-01
Darwin's claim ‘that the difference in mind between man and the higher animals … is certainly one of degree and not of kind’ is at the core of the comparative study of cognition. Recent research provides unprecedented support for Darwin's claim as well as new reasons to question it, stimulating new theories of human cognitive uniqueness. This article compares and evaluates approaches to such theories. Some prominent theories propose sweeping domain-general characterizations of the difference in cognitive capabilities and/or mechanisms between adult humans and other animals. Dual-process theories for some cognitive domains propose that adult human cognition shares simple basic processes with that of other animals while additionally including slower-developing and more explicit uniquely human processes. These theories are consistent with a modular account of cognition and the ‘core knowledge’ account of children's cognitive development. A complementary proposal is that human infants have unique social and/or cognitive adaptations for uniquely human learning. A view of human cognitive architecture as a mosaic of unique and species-general modular and domain-general processes together with a focus on uniquely human developmental mechanisms is consistent with modern evolutionary-developmental biology and suggests new questions for comparative research. PMID:22927578
Uniqueness Theorem for Black Objects
Rogatko, Marek
2010-06-23
We review the current status of uniqueness theorems for black objects in higher-dimensional spacetime. We first consider a static, charged, asymptotically flat spacelike hypersurface with compact interior, with both degenerate and non-degenerate components of the event horizon, in n-dimensional spacetime. We then give some remarks concerning partial results in proving the uniqueness of stationary axisymmetric multidimensional solutions, and the winding numbers which can uniquely characterize the topology and symmetry structure of black objects.
Lebigre, Christophe; Arcese, Peter; Reid, Jane M
2013-07-01
Age-specific variances and covariances in reproductive success shape the total variance in lifetime reproductive success (LRS), age-specific opportunities for selection, and population demographic variance and effective size. Age-specific (co)variances in reproductive success achieved through different reproductive routes must therefore be quantified to predict population, phenotypic and evolutionary dynamics in age-structured populations. While numerous studies have quantified age-specific variation in mean reproductive success, age-specific variances and covariances in reproductive success, and the contributions of different reproductive routes to these (co)variances, have not been comprehensively quantified in natural populations. We applied 'additive' and 'independent' methods of variance decomposition to complete data describing apparent (social) and realised (genetic) age-specific reproductive success across 11 cohorts of socially monogamous but genetically polygynandrous song sparrows (Melospiza melodia). We thereby quantified age-specific (co)variances in male within-pair and extra-pair reproductive success (WPRS and EPRS) and the contributions of these (co)variances to the total variances in age-specific reproductive success and LRS. 'Additive' decomposition showed that within-age and among-age (co)variances in WPRS across males aged 2-4 years contributed most to the total variance in LRS. Age-specific (co)variances in EPRS contributed relatively little. However, extra-pair reproduction altered age-specific variances in reproductive success relative to the social mating system, and hence altered the relative contributions of age-specific reproductive success to the total variance in LRS. 'Independent' decomposition showed that the (co)variances in age-specific WPRS, EPRS and total reproductive success, and the resulting opportunities for selection, varied substantially across males that survived to each age. Furthermore, extra-pair reproduction increased
Respiratory infections unique to Asia.
Tsang, Kenneth W; File, Thomas M
2008-11-01
Asia is a highly heterogeneous region with vastly different cultures, social constitutions and populations affected by a wide spectrum of respiratory diseases caused by tropical pathogens. Asian patients with community-acquired pneumonia differ from their Western counterparts in microbiological aetiology, in particular the prominence of Gram-negative organisms, Mycobacterium tuberculosis, Burkholderia pseudomallei and Staphylococcus aureus. In addition, the differences in socioeconomic and health-care infrastructures limit the usefulness of Western management guidelines for pneumonia in Asia. The importance of emerging infectious diseases such as severe acute respiratory syndrome and avian influenza infection remains a close concern for practising respirologists in Asia. Specific infections such as melioidosis, dengue haemorrhagic fever, scrub typhus, leptospirosis, salmonellosis, penicilliosis marneffei, malaria, amoebiasis, paragonimiasis, strongyloidiasis, gnathostomiasis, trichinellosis, schistosomiasis and echinococcosis occur commonly in Asia and manifest with a prominent respiratory component. Pulmonary eosinophilia, endemic in parts of Asia, can occur with a wide range of tropical infections. Tropical eosinophilia is believed to be a hypersensitivity reaction to degenerating microfilariae trapped in the lungs. This article attempts to address the key respiratory issues in these respiratory infections unique to Asia and highlight the important diagnostic and management issues faced by practising respirologists.
Estimating the encounter rate variance in distance sampling
Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.
2009-01-01
The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
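A minimal sketch of a design-based encounter rate variance estimator of the kind compared in this work, treating the K transect lines as a simple random sample; the function name and interface are our own:

```python
def encounter_rate_variance(counts, lengths):
    # counts[k]: detections on line k; lengths[k]: length of line k.
    K = len(counts)
    L = sum(lengths)
    n = sum(counts)
    er = n / L  # overall encounter rate n/L
    # Length-weighted squared deviations of per-line encounter rates from n/L.
    s = sum(l * l * (c / l - er) ** 2 for c, l in zip(counts, lengths))
    return K / (L * L * (K - 1)) * s
```

When every line has the same per-unit-length encounter rate the estimate is zero; heterogeneity among lines drives it up, which is why strong spatial trends matter under systematic designs.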
Variance analysis. Part II, The use of computers.
Finkler, S A
1991-09-01
This is the second in a two-part series on variance analysis. In the first article (JONA, July/August 1991), the author discussed flexible budgeting, including the calculation of price, quantity, volume, and acuity variances. In this second article, the author focuses on the use of computers by nurse managers to aid in the process of calculating, understanding, and justifying variances. PMID:1919788
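The basic calculations reviewed in Part I of this series can be sketched in a few lines of code; the formulas below are the standard flexible-budgeting decomposition, with illustrative function names of our own choosing:

```python
def price_variance(actual_qty: float, actual_price: float,
                   budget_price: float) -> float:
    # Cost change attributable to paying a different unit price than budgeted.
    return actual_qty * (actual_price - budget_price)

def quantity_variance(budget_price: float, actual_qty: float,
                      budget_qty: float) -> float:
    # Cost change attributable to using a different quantity than budgeted.
    return budget_price * (actual_qty - budget_qty)
```

The two components sum to the total cost variance (actual cost minus budgeted cost), which is what makes the decomposition useful for assigning responsibility.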
A Note on Noncentrality Parameters for Contrast Tests in a One-Way Analysis of Variance
ERIC Educational Resources Information Center
Liu, Xiaofeng Steven
2010-01-01
The noncentrality parameter for a contrast test in a one-way analysis of variance is based on the dot product of 2 vectors whose geometric meaning in a Euclidean space offers mnemonic hints about its constituents. Additionally, the noncentrality parameters for a set of orthogonal contrasts sum up to the noncentrality parameter for the omnibus "F"…
Multiperiod Mean-Variance Portfolio Optimization via Market Cloning
Ankirchner, Stefan; Dermoune, Azzouz
2011-08-15
The problem of finding the mean variance optimal portfolio in a multiperiod model can not be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean variance problem.
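The paper's replacement of the true portfolio mean and variance by empirical counterparts across market clones can be illustrated with a toy objective function (our own naming; the dynamic-programming solution itself is not shown):

```python
import statistics

def empirical_mean_variance_objective(terminal_wealths, risk_weight):
    # terminal_wealths: one terminal portfolio value per market clone.
    # Objective: weighted sum of the empirical mean and population variance,
    # which stands in for the intractable true mean-variance criterion.
    m = statistics.fmean(terminal_wealths)
    v = statistics.pvariance(terminal_wealths)
    return m - risk_weight * v
```

As the number of clones grows, the empirical mean and variance converge to their true counterparts, which is the limit the authors exploit to recover the original problem.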
Network Structure and Biased Variance Estimation in Respondent Driven Sampling
Verdery, Ashton M.; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J.
2015-01-01
This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...
The phenotypic variance gradient – a novel concept
Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton
2014-01-01
Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely “a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added”. This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a “phenotypic variance gradient”, are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization. PMID:25540685
2012-09-01
This final rule adopts the standard for a national unique health plan identifier (HPID) and establishes requirements for the implementation of the HPID. In addition, it adopts a data element that will serve as an other entity identifier (OEID), or an identifier for entities that are not health plans, health care providers, or individuals, but that need to be identified in standard transactions. This final rule also specifies the circumstances under which an organization covered health care provider must require certain noncovered individual health care providers who are prescribers to obtain and disclose a National Provider Identifier (NPI). Lastly, this final rule changes the compliance date for the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) for diagnosis coding, including the Official ICD-10-CM Guidelines for Coding and Reporting, and the International Classification of Diseases, 10th Revision, Procedure Coding System (ICD-10-PCS) for inpatient hospital procedure coding, including the Official ICD-10-PCS Guidelines for Coding and Reporting, from October 1, 2013 to October 1, 2014. PMID:22950146
A NEW VARIANCE ESTIMATOR FOR PARAMETERS OF SEMI-PARAMETRIC GENERALIZED ADDITIVE MODELS. (R829213)
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
Exploring Unique Roles for Psychologists
ERIC Educational Resources Information Center
Ahmed, Mohiuddin; Boisvert, Charles M.
2005-01-01
This paper presents comments on "Psychological Treatments" by D. H. Barlow. Barlow highlighted unique roles that psychologists can play in mental health service delivery by providing psychological treatments--treatments that psychologists would be uniquely qualified to design and deliver. In support of Barlow's position, the authors draw from…
ERIC Educational Resources Information Center
Shipman, Barbara A.
2013-01-01
This article analyzes four questions on the meaning of uniqueness that have contrasting answers in common language versus mathematical language. The investigations stem from a scenario in which students interpreted uniqueness according to a definition from standard English, that is, different from the mathematical meaning, in defining an injective…
CYP1B1: a unique gene with unique characteristics.
Faiq, Muneeb A; Dada, Rima; Sharma, Reetika; Saluja, Daman; Dada, Tanuj
2014-01-01
CYP1B1, a recently described dioxin-inducible oxidoreductase, is a member of the cytochrome P450 superfamily involved in the metabolism of estradiol, retinol, benzo[a]pyrene, tamoxifen, melatonin, sterols, etc. It plays important roles in numerous physiological processes and is expressed at the mRNA level in many tissues and anatomical compartments. CYP1B1 has been implicated in scores of disorders. Analyses of recent studies suggest that CYP1B1 can serve as a universal/ideal cancer marker and a candidate gene for predictive diagnosis. There is a plethora of literature about certain aspects of CYP1B1 that have not been interpreted, discussed and philosophized upon. The present analysis examines CYP1B1 as a peculiar gene with certain distinctive characteristics: the uniqueness of its chromosomal location, gene structure and organization; its involvement in developmentally important disorders; its tissue-specific expression and, indeed, splicing; its potential as a universal cancer marker due to its involvement in key aspects of cellular metabolism; its use in the diagnosis and predictive diagnosis of various diseases; and the importance and function of CYP1B1 mRNA beyond regular translation. CYP1B1 is also very difficult to express in heterologous expression systems, which has hampered its functional study. Here we review and analyze these exceptional and startling characteristics of CYP1B1, with inputs from our own experience, in order to gain better insight into its molecular biology in health and disease. This may help further the understanding of the etiopathomechanistic aspects of CYP1B1-mediated diseases, paving the way for better research strategies and improved clinical management. PMID:25658124
Lande, Russell; Porcher, Emmanuelle
2015-01-01
We analyze two models of the maintenance of quantitative genetic variance in a mixed-mating system of self-fertilization and outcrossing. In both models purely additive genetic variance is maintained by mutation and recombination under stabilizing selection on the phenotype of one or more quantitative characters. The Gaussian allele model (GAM) involves a finite number of unlinked loci in an infinitely large population, with a normal distribution of allelic effects at each locus within lineages selfed for τ consecutive generations since their last outcross. The infinitesimal model for partial selfing (IMS) involves an infinite number of loci in a large but finite population, with a normal distribution of breeding values in lineages of selfing age τ. In both models a stable equilibrium genetic variance exists, the outcrossed equilibrium, nearly equal to that under random mating, for all selfing rates, r, up to critical value, r^, the purging threshold, which approximately equals the mean fitness under random mating relative to that under complete selfing. In the GAM a second stable equilibrium, the purged equilibrium, exists for any positive selfing rate, with genetic variance less than or equal to that under pure selfing; as r increases above r^ the outcrossed equilibrium collapses sharply to the purged equilibrium genetic variance. In the IMS a single stable equilibrium genetic variance exists at each selfing rate; as r increases above r^ the equilibrium genetic variance drops sharply and then declines gradually to that maintained under complete selfing. The implications for evolution of selfing rates, and for adaptive evolution and persistence of predominantly selfing species, provide a theoretical basis for the classical view of Stebbins that predominant selfing constitutes an “evolutionary dead end.” PMID:25969460
Fractional Brownian Motion with Stochastic Variance:. Modeling Absolute Returns in STOCK Markets
NASA Astrophysics Data System (ADS)
Roman, H. E.; Porto, M.
We discuss a model for simulating a long-time memory in time series characterized in addition by a stochastic variance. The model is based on a combination of fractional Brownian motion (FBM) concepts, for dealing with the long-time memory, with an autoregressive scheme with conditional heteroskedasticity (ARCH), responsible for the stochastic variance of the series, and is denoted as FBMARCH. Unlike well-known fractionally integrated autoregressive models, FBMARCH admits finite second moments. The resulting probability distribution functions have power-law tails with exponents similar to ARCH models. This idea is applied to the description of long-time autocorrelations of absolute returns ubiquitously observed in stock markets.
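The ARCH component of the FBMARCH combination can be sketched as a plain ARCH(1) recursion (the fractional Brownian motion part, which supplies the long-time memory, is omitted here; the parameter values are arbitrary illustrations):

```python
import math
import random

def simulate_arch1(n: int, a0: float = 0.1, a1: float = 0.5, seed: int = 1):
    # ARCH(1): the conditional variance depends on the previous squared
    # value, producing the stochastic variance (volatility clustering)
    # responsible for the power-law tails mentioned in the abstract.
    rng = random.Random(seed)
    x, series = 0.0, []
    for _ in range(n):
        var = a0 + a1 * x * x
        x = math.sqrt(var) * rng.gauss(0.0, 1.0)
        series.append(x)
    return series
```

With a1 < 1 the process has a finite unconditional variance, consistent with the abstract's remark that FBMARCH admits finite second moments.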
Broyles, R W; Lay, C M
1982-12-01
This paper examines an unfavorable cost variance in an institution which employs multiple resources to provide stay-specific and ancillary services to patients presenting multiple diagnoses. It partitions the difference between actual and expected costs into components that are the responsibility of an identifiable individual or group of individuals. The analysis demonstrates that the components comprising an unfavorable cost variance are attributable to factor prices, the use of real resources, the mix of patients, and the composition of care provided by the institution. In addition, the interactive effects of these factors are also identified. PMID:7183731
Spencer, Michael
1974-01-01
Food additives are discussed from the food technology point of view. The reasons for their use are summarized: (1) to protect food from chemical and microbiological attack; (2) to even out seasonal supplies; (3) to improve their eating quality; (4) to improve their nutritional value. The various types of food additives are considered, e.g. colours, flavours, emulsifiers, bread and flour additives, preservatives, and nutritional additives. The paper concludes with consideration of those circumstances in which the use of additives is (a) justified and (b) unjustified. PMID:4467857
NASA Technical Reports Server (NTRS)
Longman, Richard W.; Bergmann, Martin; Juang, Jer-Nan
1988-01-01
For the ERA system identification algorithm, perturbation methods are used to develop expressions for variance and bias of the identified modal parameters. Based on the statistics of the measurement noise, the variance results serve as confidence criteria by indicating how likely the true parameters are to lie within any chosen interval about their identified values. This replaces the use of expensive and time-consuming Monte Carlo computer runs to obtain similar information. The bias estimates help guide the ERA user in his choice of which data points to use and how much data to use in order to obtain the best results, performing the trade-off between the bias and scatter. Also, when the uncertainty in the bias is sufficiently small, the bias information can be used to correct the ERA results. In addition, expressions for the variance and bias of the singular values serve as tools to help the ERA user decide the proper modal order.
Bogaerts, Louisa; Siegelman, Noam; Frost, Ram
2016-08-01
What determines individuals' efficacy in detecting regularities in visual statistical learning? Our theoretical starting point assumes that the variance in performance of statistical learning (SL) can be split into the variance related to efficiency in encoding representations within a modality and the variance related to the relative computational efficiency of detecting the distributional properties of the encoded representations. Using a novel methodology, we dissociated encoding from higher-order learning factors, by independently manipulating exposure duration and transitional probabilities in a stream of visual shapes. Our results show that the encoding of shapes and the retrieving of their transitional probabilities are not independent and additive processes, but interact to jointly determine SL performance. The theoretical implications of these findings for a mechanistic explanation of SL are discussed.
On discrete stochastic processes with long-lasting time dependence in the variance
NASA Astrophysics Data System (ADS)
Queirós, S. M. D.
2008-11-01
In this manuscript, we analytically and numerically study the statistical properties of a heteroskedastic process based on the celebrated ARCH generator of random variables, whose variance is defined by a memory of q_m-exponential form (where e_{q_m=1}(x) = e^x recovers the ordinary exponential). Specifically, we inspect the autocorrelation function of the squared random variables as well as the kurtosis. In addition, by numerical procedures, we infer the stationary probability density function of both the heteroskedastic random variables and the variance, the multiscaling properties, the first-passage time distribution, and the dependence degree. Finally, we introduce an asymmetric variance version of the model that enables us to reproduce the so-called leverage effect in financial markets.
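The memory-kernel idea in this abstract can be sketched numerically. The following is a minimal illustration, not the paper's model: it replaces the q-exponential kernel with a simple power-law-weighted ARCH recursion (all parameter values are invented) and checks that the resulting series is leptokurtic, as heteroskedastic processes of this kind are.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_long_memory_arch(n=20000, a=0.2, b=0.7, lam=1.5, kmax=50):
    """Simulate an ARCH-type process whose conditional variance is a
    power-law-weighted memory of past squared values (a stand-in for the
    q-exponential kernel of the paper; weights decay like k**-lam)."""
    w = np.arange(1, kmax + 1, dtype=float) ** (-lam)
    w *= b / w.sum()                      # total memory weight = b < 1 (stationarity)
    x = np.zeros(n)
    for t in range(kmax, n):
        sigma2 = a + np.dot(w, x[t - kmax:t][::-1] ** 2)
        x[t] = np.sqrt(sigma2) * rng.standard_normal()
    return x[kmax:]

x = simulate_long_memory_arch()
kurtosis = np.mean(x**4) / np.mean(x**2) ** 2
print(kurtosis)   # heteroskedastic memory typically pushes this above the Gaussian value 3
```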
Marini, Federico; de Beer, Dalene; Joubert, Elizabeth; Walczak, Beata
2015-07-31
Direct application of popular approaches, e.g., Principal Component Analysis (PCA) or Partial Least Squares (PLS), to chromatographic data originating from a well-designed experimental study including more than one factor is not recommended. In the case of a well-designed experiment involving two or more factors (crossed or nested), data are usually decomposed into the contributions associated with the studied factors (and with their interactions), and the individual effect matrices are then analyzed using, e.g., PCA, as in the case of ASCA (analysis of variance combined with simultaneous component analysis). As an alternative to the ASCA method, we propose the application of PLS followed by target projection (TP), which allows a one-factor representation of the model for each column in the design dummy matrix. PLS is applied after proper deflation of the experimental matrix, i.e., to the residuals under the reduced ANOVA model. The proposed approach (ANOVA-TP) is well suited for the study of designed chromatographic data of complex samples. It allows testing of the statistical significance of the studied effects and 'biomarker' identification, and enables straightforward visualization and accurate estimation of between- and within-class variance. The proposed approach has been successfully applied to a case study aimed at evaluating the effect of pasteurization on the concentrations of various phenolic constituents of rooibos tea of different quality grades, and its outcomes have been compared to those of ASCA.
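The ANOVA decomposition step that ASCA and ANOVA-TP share can be sketched on toy data. This is a hedged illustration of the effect-matrix idea only (the factor layout and numbers are invented); it does not reproduce the PLS/target-projection step:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy designed data: 2 levels of factor A x 3 levels of factor B, 4 replicates,
# 10 response variables (think chromatographic peak areas). All values invented.
A = np.repeat([0, 1], 12)
B = np.tile(np.repeat([0, 1, 2], 4), 2)
X = rng.normal(size=(24, 10))
X[A == 1] += 2.0                 # inject a factor-A effect on every variable

Xc = X - X.mean(axis=0)          # remove the grand mean

def effect_matrix(Xc, factor):
    """Replace each row by its factor-level mean profile (ANOVA decomposition)."""
    E = np.zeros_like(Xc)
    for lv in np.unique(factor):
        E[factor == lv] = Xc[factor == lv].mean(axis=0)
    return E

XA = effect_matrix(Xc, A)
XB = effect_matrix(Xc, B)
resid = Xc - XA - XB             # residuals under the reduced (additive) ANOVA model

# Sum-of-squares share of each effect (balanced design, so the partition is exact)
ssq = lambda M: float((M**2).sum())
print(ssq(XA) / ssq(Xc), ssq(XB) / ssq(Xc))

# ASCA-style step: PCA of an effect matrix via SVD; a two-level effect is rank one,
# so the first component carries essentially all of its variance.
U, s, Vt = np.linalg.svd(XA, full_matrices=False)
print(s[0]**2 / (s**2).sum())
```

In ANOVA-TP, PLS with target projection would be fitted to `resid` plus the effect of interest rather than running PCA on `XA` directly.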
ERIC Educational Resources Information Center
Castellanos-Ryan, Natalie; Conrod, Patricia J.
2011-01-01
Externalising behaviours such as substance misuse (SM) and conduct disorder (CD) symptoms highly co-occur in adolescence. While disinhibited personality traits have been consistently linked to externalising behaviours, there is evidence that these traits may relate differentially to SM and CD. The current study aimed to assess whether this was the…
29 CFR 1904.38 - Variances from the recordkeeping rule.
Code of Federal Regulations, 2010 CFR
2010-07-01
... process your variance petition. (i) The Assistant Secretary will offer your employees and their authorized... the facts or conduct that may warrant revocation of your variance; and (ii) Provide you, your employees, and authorized employee representatives with an opportunity to participate in the...
Characterizing the evolution of genetic variance using genetic covariance tensors.
Hine, Emma; Chenoweth, Stephen F; Rundle, Howard D; Blows, Mark W
2009-06-12
Determining how genetic variance changes under selection in natural populations has proved to be a very resilient problem in evolutionary genetics. In the same way that understanding the availability of genetic variance within populations requires the simultaneous consideration of genetic variance in sets of functionally related traits, determining how genetic variance changes under selection in natural populations will require ascertaining how genetic variance-covariance (G) matrices evolve. Here, we develop a geometric framework using higher-order tensors, which enables the empirical characterization of how G matrices have diverged among populations. We then show how divergence among populations in genetic covariance structure can be associated with divergence in selection acting on those traits using key equations from evolutionary theory. Using estimates of G matrices of eight male sexually selected traits from nine geographical populations of Drosophila serrata, we show that much of the divergence in genetic variance occurred in a single trait combination, a conclusion that could not have been reached by examining variation among the individual elements of the nine G matrices. Divergence in G was primarily in the direction of the major axes of genetic variance within populations, suggesting that genetic drift may be a major cause of divergence in genetic variance among these populations.
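A drastically simplified stand-in for the tensor approach can illustrate the core idea: treat each population's G matrix as a vector of (co)variances and ask in which direction those vectors diverge most. The fourth-order tensor machinery of the paper is not reproduced here; this is a toy sketch with invented numbers.

```python
import numpy as np

# Toy setup: G matrices for 3 traits in 5 populations that differ only in the
# genetic variance of trait 1 (all numbers invented for illustration).
p, npop = 3, 5
Gs = [np.eye(p) for _ in range(npop)]
for i, G in enumerate(Gs):
    G[0, 0] = 1.0 + 0.5 * i

# Vectorize the unique elements (upper triangle) of each symmetric G.
iu = np.triu_indices(p)
V = np.array([G[iu] for G in Gs])        # shape: npop x p(p+1)/2

# Among-population covariance of G elements; the leading eigenvector picks out
# the combination of (co)variances that diverges most among populations.
S = np.cov(V, rowvar=False)
evals, evecs = np.linalg.eigh(S)
share = evals[-1] / evals.sum()          # fraction of divergence in the leading direction
driver = int(np.abs(evecs[:, -1]).argmax())
print(share, driver)                     # here all divergence sits on element G[0,0]
```

Examining the individual elements of `V` column by column would miss any divergence spread across a combination of elements, which is the motivation for the eigenanalysis.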
An Analysis of Variance Framework for Matrix Sampling.
ERIC Educational Resources Information Center
Sirotnik, Kenneth
Significant cost savings can be achieved with the use of matrix sampling in estimating population parameters from psychometric data. The statistical design is intuitively simple, using the framework of the two-way classification analysis of variance technique. For example, the mean and variance are derived from the performance of a certain grade…
A Study of Variance Estimation Methods. Working Paper Series.
ERIC Educational Resources Information Center
Zhang, Fan; Weng, Stanley; Salvucci, Sameena; Hu, Ming-xiu
This working paper contains reports of five studies of variance estimation methods. The first, An Empirical Study of Poststratified Estimator, by Fan Zhang uses data from the National Household Education Survey to illustrate use of poststratified estimation. The second paper, BRR Variance Estimation Using BPLX Hadamard Procedure, by Stanley Weng…
Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances
ERIC Educational Resources Information Center
Jan, Show-Li; Shieh, Gwowen
2014-01-01
The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…
Conceptual Complexity and the Bias/Variance Tradeoff
ERIC Educational Resources Information Center
Briscoe, Erica; Feldman, Jacob
2011-01-01
In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the "bias/variance tradeoff". The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any…
29 CFR 1905.5 - Effect of variances.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 5 2010-07-01 2010-07-01 false Effect of variances. 1905.5 Section 1905.5 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RULES OF PRACTICE FOR VARIANCES, LIMITATIONS, VARIATIONS, TOLERANCES, AND EXEMPTIONS UNDER THE...
Code of Federal Regulations, 2010 CFR
2010-07-01
... same circumstances in which variances may be granted under sections 6(b)(6)(A) or 6(d) of the Williams... the Williams-Steiger Occupational Safety and Health Act of 1970, and any variance from a standard... the Williams-Steiger Occupational Safety and Health Act of 1970. In accordance with the...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 36 Parks, Forests, and Public Property 1 2012-07-01 2012-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 36 Parks, Forests, and Public Property 1 2013-07-01 2013-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 36 Parks, Forests, and Public Property 1 2014-07-01 2014-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
Evaluation of Mean and Variance Integrals without Integration
ERIC Educational Resources Information Center
Joarder, A. H.; Omar, M. H.
2007-01-01
The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since they involve integration by parts, many students do not feel comfortable. In this note, a technique is demonstrated for deriving mean and variance through differential…
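The differentiation technique the note advocates can be shown with a computer algebra system: moments of the exponential distribution fall out of derivatives of its moment generating function, with no integration by parts. A small sympy sketch (the MGF formula is standard, not taken from the note):

```python
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)

# Moment generating function of the exponential distribution with rate lambda:
# M(t) = E[e^{tX}] = lambda / (lambda - t), valid for t < lambda.
M = lam / (lam - t)

# Moments come from derivatives of M at t = 0 -- no integration by parts needed.
m1 = sp.diff(M, t).subs(t, 0)        # E[X] = 1/lambda
m2 = sp.diff(M, t, 2).subs(t, 0)     # E[X^2] = 2/lambda^2
var = sp.simplify(m2 - m1**2)        # Var(X) = 1/lambda^2

print(m1, var)
```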
Productive Failure in Learning the Concept of Variance
ERIC Educational Resources Information Center
Kapur, Manu
2012-01-01
In a study with ninth-grade mathematics students on learning the concept of variance, students experienced either direct instruction (DI) or productive failure (PF), wherein they were first asked to generate a quantitative index for variance without any guidance before receiving DI on the concept. Whereas DI students relied only on the canonical…
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 24 2013-07-01 2013-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....
A Variance Explanation Paradox: When a Little Is a Lot.
ERIC Educational Resources Information Center
Abelson, Robert P.
1985-01-01
Argues that percent variance explanation is a misleading index of the influence of systematic factors in cases where there are processes by which individually tiny influences cumulate to produce meaningful outcomes. An example is the computation of percentage of variance in batting performance among major league baseball players. (Author/CB)
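Abelson's point can be made concrete with illustrative numbers (the batting averages below are hypothetical, not taken from the article): a skill gap that explains well under 1% of single at-bat variance still amounts to roughly 30 expected hits over a 600-at-bat season.

```python
import numpy as np

# Two skill levels, equally likely; a single at-bat outcome Y is 0 or 1.
p_lo, p_hi, at_bats = 0.270, 0.320, 600

# Variance decomposition for one at-bat: skill explains only the between-skill part.
var_between = np.var([p_lo, p_hi])                       # variance explained by skill
var_within = np.mean([p * (1 - p) for p in (p_lo, p_hi)])  # binomial noise
r2 = var_between / (var_between + var_within)
print(f"variance explained per at-bat: {r2:.4f}")        # well under 1%

# Cumulated over a season, the same skill gap is anything but tiny.
extra_hits = (p_hi - p_lo) * at_bats
print(f"expected extra hits over a season: {extra_hits:.0f}")
```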
On the Endogeneity of the Mean-Variance Efficient Frontier.
ERIC Educational Resources Information Center
Somerville, R. A.; O'Connell, Paul G. J.
2002-01-01
Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…
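The endogeneity the authors highlight (the frontier is computed from, and moves with, the input means and covariances) can be seen directly from the standard closed-form minimum-variance frontier. A sketch with invented asset data:

```python
import numpy as np

def frontier_variance(mu, Sigma, target):
    """Minimum portfolio variance for a target mean return, with full
    investment (weights sum to 1) and short sales allowed -- the classic
    closed-form mean-variance frontier."""
    inv = np.linalg.inv(Sigma)
    ones = np.ones(len(mu))
    a = ones @ inv @ ones
    b = ones @ inv @ mu
    c = mu @ inv @ mu
    d = a * c - b**2
    return (a * target**2 - 2 * b * target + c) / d

# Hypothetical three-asset market (numbers invented for illustration).
mu = np.array([0.05, 0.08, 0.12])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])

v1 = frontier_variance(mu, Sigma, 0.10)

# Endogeneity: perturbing one asset's expected return moves the whole frontier.
mu2 = mu + np.array([0.0, 0.02, 0.0])
v2 = frontier_variance(mu2, Sigma, 0.10)
print(v1, v2)
```

Because every point on the frontier depends on the full vector of means and the covariance matrix, a parameter change shifts the frontier and the implied beta coefficients simultaneously, which is the point the article makes.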
Utility functions predict variance and skewness risk preferences in monkeys.
Genest, Wilfried; Stauffer, William R; Schultz, Wolfram
2016-07-26
Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals' preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals' preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys' choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences. PMID:27402743
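The link between utility curvature and variance preference described here can be sketched with a hypothetical utility function, convex at low reward levels and concave at high ones (the functional form and gamble values below are invented, not the estimated monkey utilities):

```python
import numpy as np

# Hypothetical S-shaped utility on [0, 1]: convex below 0.5, concave above.
def utility(x):
    x = np.asarray(x, dtype=float)
    return np.where(x < 0.5, 2 * x**2, 1 - 2 * (1 - x)**2)

def expected_utility(outcomes, probs):
    return float(np.dot(utility(outcomes), probs))

# Two equiprobable gambles with identical EV = 0.25 (low) but different variance:
low_ev_high_var = expected_utility([0.05, 0.45], [0.5, 0.5])
low_ev_low_var  = expected_utility([0.20, 0.30], [0.5, 0.5])

# The same comparison at a high EV = 0.75:
high_ev_high_var = expected_utility([0.55, 0.95], [0.5, 0.5])
high_ev_low_var  = expected_utility([0.70, 0.80], [0.5, 0.5])

# Convexity at low EVs favours the high-variance gamble; concavity at high EVs
# reverses the preference, matching the qualitative pattern the paper reports.
print(low_ev_high_var > low_ev_low_var, high_ev_low_var > high_ev_high_var)
```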
Variance After-Effects Distort Risk Perception in Humans.
Payzan-LeNestour, Elise; Balleine, Bernard W; Berrada, Tony; Pearson, Joel
2016-06-01
In many contexts, decision-making requires an accurate representation of outcome variance, otherwise known as "risk" in economics. Conventional economic theory assumes this representation to be perfect, thereby focusing on risk preferences rather than risk perception per se [1-3] (but see [4]). However, humans often misrepresent their physical environment. Perhaps the most striking of such misrepresentations are the many well-known sensory after-effects, which most commonly involve visual properties, such as color, contrast, size, and motion. For example, viewing the downward motion of a waterfall induces the anomalous biased experience of upward motion during subsequent viewing of static rocks to the side [5]. Given that after-effects are pervasive, occurring across a wide range of time horizons [6] and stimulus dimensions (including properties such as face perception [7, 8], gender [9], and numerosity [10]), and that some evidence exists that neurons show adaptation to variance in the sole visual feature of motion [11], we were interested in assessing whether after-effects distort variance perception in humans. We found that perceived variance is decreased after prolonged exposure to high variance and increased after exposure to low variance within a number of different visual representations of variance. We demonstrate that these after-effects occur across very different visual representations of variance, suggesting that they are not sensory, but operate at a high (cognitive) level of information processing. These results suggest, therefore, that variance constitutes an independent cognitive property and that prolonged exposure to extreme variance distorts risk perception, a fundamental challenge for economic theory and practice. PMID:27161500
Uniqueness of the momentum map
NASA Astrophysics Data System (ADS)
Esposito, Chiara; Nest, Ryszard
2016-08-01
We give a detailed discussion of the existence and uniqueness of the momentum map associated to Poisson Lie actions, which was defined by Lu. We introduce a weaker notion of momentum map, called the infinitesimal momentum map, which is defined on one-forms, and we analyze its integrability to Lu's momentum map. Finally, the uniqueness of Lu's momentum map is studied by describing, explicitly, the tangent space to the space of momentum maps.
Code of Federal Regulations, 2011 CFR
2011-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260.33 Section 260.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES... from classification as a solid waste, for variances to be classified as a boiler, or for...
Code of Federal Regulations, 2010 CFR
2010-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260.33 Section 260.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES... from classification as a solid waste, for variances to be classified as a boiler, or for...
Code of Federal Regulations, 2014 CFR
2014-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...
Code of Federal Regulations, 2012 CFR
2012-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...
Code of Federal Regulations, 2013 CFR
2013-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...
The Placenta Harbors a Unique Microbiome
Aagaard, Kjersti; Ma, Jun; Antony, Kathleen M.; Ganu, Radhika; Petrosino, Joseph; Versalovic, James
2016-01-01
Humans and their microbiomes have coevolved as a physiologic community composed of distinct body site niches with metabolic and antigenic diversity. The placental microbiome has not been robustly interrogated, despite recent demonstrations of intracellular bacteria with diverse metabolic and immune regulatory functions. A population-based cohort of placental specimens collected under sterile conditions from 320 subjects with extensive clinical data was established for comparative 16S ribosomal DNA–based and whole-genome shotgun (WGS) metagenomic studies. Identified taxa and their gene carriage patterns were compared to other human body site niches, including the oral, skin, airway (nasal), vaginal, and gut microbiomes from nonpregnant controls. We characterized a unique placental microbiome niche, composed of nonpathogenic commensal microbiota from the Firmicutes, Tenericutes, Proteobacteria, Bacteroidetes, and Fusobacteria phyla. In aggregate, the placental microbiome profiles were most akin (Bray-Curtis dissimilarity <0.3) to the human oral microbiome. 16S-based operational taxonomic unit analyses revealed associations of the placental microbiome with a remote history of antenatal infection (permutational multivariate analysis of variance, P = 0.006), such as urinary tract infection in the first trimester, as well as with preterm birth <37 weeks (P = 0.001). PMID:24848255
NASA Astrophysics Data System (ADS)
García-Pareja, S.; Vilches, M.; Lallena, A. M.
2007-09-01
The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators used in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and, in addition, permits investigation of the "hot" regions of the accelerator, information that is essential for developing a source model for this therapy tool.
Wu, R L
1996-07-01
A quantitative genetic model that uses known family structure with clonal replicates to separate genetic variance into its additive, dominance, and epistatic components is available in the current literature. Making use of offspring testing, this model is based on the theory that components of variance from the linear model of an experimental design may be expressed in terms of expected covariances among relatives. However, if interactions between a pair of quantitative trait loci (QTLs) explain a large proportion of the total epistasis, it will seriously overestimate the additive and dominance variances but underestimate the epistatic variance. In the present paper, a new model is developed to address this problem by combining parental and offspring material in the same test. Under the condition described above, the new model can provide an accurate estimate of the additive × additive variance. Also, its accuracy in estimating the dominance and total epistatic variances is much greater than that of the previous model. However, if there is obvious evidence of a major contribution of high-order interactions, especially among ≥4 QTLs, to the total epistasis, the previous model is more appropriate for partitioning the genetic variance of a quantitative trait. The re-analysis of an example from a factorial mating design in poplar shows large differences between the new and previous models in estimated variance components when two different assumptions (low- vs. high-order epistatic interactions) are used. The new model will be an alternative for estimating the mode of quantitative inheritance in species that can be clonally replicated, especially long-lived, predominantly outcrossing forest trees.
Wavelet variance analysis for random fields on a regular lattice.
Mondal, Debashis; Percival, Donald B
2012-02-01
There has been considerable recent interest in using wavelets to analyze time series and images that can be regarded as realizations of certain 1-D and 2-D stochastic processes on a regular lattice. Wavelets give rise to the concept of the wavelet variance (or wavelet power spectrum), which decomposes the variance of a stochastic process on a scale-by-scale basis. The wavelet variance has been applied to a variety of time series, and a statistical theory for estimators of this variance has been developed. While there have been applications of the wavelet variance in the 2-D context (in particular, in works by Unser in 1995 on wavelet-based texture analysis for images and by Lark and Webster in 2004 on analysis of soil properties), a formal statistical theory for such analysis has been lacking. In this paper, we develop the statistical theory by generalizing and extending some of the approaches developed for time series, thus leading to a large-sample theory for estimators of 2-D wavelet variances. We apply our theory to simulated data from Gaussian random fields with exponential covariances and from fractional Brownian surfaces. We demonstrate that the wavelet variance is potentially useful for texture discrimination. We also use our methodology to analyze images of four types of clouds observed over the southeast Pacific Ocean.
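The scale-by-scale decomposition described here can be sketched with an orthonormal 2-D Haar transform, the simplest wavelet case (this is an illustrative stand-in, not the estimators developed in the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_wavelet_variances(X, levels=3):
    """Scale-by-scale Haar wavelet variances of a 2-D field: at each level,
    form the orthonormal 2x2 Haar transform of the current approximation;
    the mean square of the three detail bands (horizontal, vertical,
    diagonal) estimates the wavelet variance at that scale."""
    variances = []
    A = X.astype(float)
    for _ in range(levels):
        a = A[0::2, 0::2]; b = A[0::2, 1::2]
        c = A[1::2, 0::2]; d = A[1::2, 1::2]
        approx = (a + b + c + d) / 2            # orthonormal scaling band
        details = np.concatenate([((a - b + c - d) / 2).ravel(),
                                  ((a + b - c - d) / 2).ravel(),
                                  ((a - b - c + d) / 2).ravel()])
        variances.append(float(np.mean(details**2)))
        A = approx
    return variances

X = rng.standard_normal((256, 256))             # white noise on a regular lattice
v = haar_wavelet_variances(X)
print(v)   # for white noise, each scale's wavelet variance is close to Var(X) = 1
```

A field with long-range dependence (e.g. a fractional Brownian surface) would instead show wavelet variances growing systematically with scale, which is what makes the quantity useful for texture discrimination.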
Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation
NASA Technical Reports Server (NTRS)
Wu, Dong L.; Eckermann, Stephen D.
2008-01-01
The gravity wave (GW)-resolving capabilities of 118-GHz saturated thermal radiances acquired throughout the stratosphere by the Microwave Limb Sounder (MLS) on the Aura satellite are investigated and initial results presented. Because the saturated (optically thick) radiances resolve GW perturbations from a given altitude at different horizontal locations, variances are evaluated at 12 pressure altitudes between 21 and 51 km using the 40 saturated radiances found at the bottom of each limb scan. Forward modeling simulations show that these variances are controlled mostly by GWs with vertical wavelengths λz ≳ 5 km and horizontal along-track wavelengths λy ≈ 100-200 km. The tilted cigar-shaped three-dimensional weighting functions yield highly selective responses to GWs of high intrinsic frequency that propagate toward the instrument. The latter property is used to infer the net meridional component of GW propagation by differencing the variances acquired from ascending (A) and descending (D) orbits. Because of improved vertical resolution and sensitivity, Aura MLS GW variances are 5-8 times larger than those from the Upper Atmosphere Research Satellite (UARS) MLS. Like UARS MLS variances, monthly-mean Aura MLS variances in January and July 2005 are enhanced when local background wind speeds are large, due largely to GW visibility effects. Zonal asymmetries in variance maps reveal enhanced GW activity at high latitudes due to forcing by flow over major mountain ranges, and at tropical and subtropical latitudes due to enhanced deep convective generation as inferred from contemporaneous MLS cloud-ice data. At 21-28-km altitude (heights not measured by the UARS MLS), GW variance in the tropics is systematically enhanced and shows clear variations with the phase of the quasi-biennial oscillation, in general agreement with GW temperature variances derived from radiosonde, rocketsonde, and limb-scan vertical profiles.
Comparison of multiplicative heterogeneous variance adjustment models for genetic evaluations.
Márkus, Sz; Mäntysaari, E A; Strandén, I; Eriksson, J-Å; Lidauer, M H
2014-06-01
Two heterogeneous variance adjustment methods and two variance models were compared in a simulation study. The method used for heterogeneous variance adjustment in the Nordic test-day model, which is a multiplicative method based on Meuwissen (J. Dairy Sci., 79, 1996, 310), was compared with a restricted multiplicative method where the fixed effects were not scaled. Both methods were tested with two different variance models, one with a herd-year and the other with a herd-year-month random effect. The simulation study was built on two field data sets from Swedish Red dairy cattle herds. For both data sets, 200 herds with test-day observations over a 12-year period were sampled. For one data set, herds were sampled randomly, while for the other, each herd was required to have at least 10 first-calving cows per year. The simulations supported the applicability of both methods and models, but the multiplicative mixed model was more sensitive in the case of small strata sizes. Estimation of variance components for the variance models resulted in different parameter estimates, depending on the applied heterogeneous variance adjustment method and variance model combination. Our analyses showed that the assumption of a first-order autoregressive correlation structure between random-effect levels is reasonable when within-herd heterogeneity is modelled by year classes, but less appropriate for within-herd heterogeneity by month classes. Of the studied alternatives, the multiplicative method and a variance model with a random herd-year effect were found most suitable for the Nordic test-day model for dairy cattle evaluation.
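The basic multiplicative idea (rescale each stratum by its estimated dispersion) can be sketched on toy data. This is a deliberate simplification: the actual genetic evaluations adjust within a mixed model rather than rescaling raw records, and all numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data: observations from four herd-year strata with heterogeneous variances.
strata = np.repeat(np.arange(4), 250)
true_sd = np.array([0.5, 1.0, 2.0, 4.0])
y = rng.normal(0.0, true_sd[strata])

# Multiplicative adjustment: scale each stratum by its estimated standard
# deviation relative to the overall level, homogenizing the dispersion.
stratum_sd = np.array([y[strata == s].std() for s in range(4)])
overall_sd = y.std()
y_adj = y * (overall_sd / stratum_sd[strata])

adj_sds = [y_adj[strata == s].std() for s in range(4)]
print(adj_sds)   # all strata now share (essentially) the same spread
```

With small strata, `stratum_sd` is itself noisy, which is why the paper models the heterogeneity with random herd-year (or herd-year-month) effects instead of plugging in raw per-stratum estimates.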
Rundek, Tatjana; Blanton, Susan H.; Bartels, Susanne; Dong, Chuanhui; Raval, Ami; Demmer, Ryan T.; Cabral, Digna; Elkind, Mitchell S.V.; Sacco, Ralph L.; Desvarieux, Moise
2013-01-01
Background and Purpose Carotid intima-media thickness (cIMT) is a widely accepted ultrasound marker of subclinical atherosclerosis. Although traditional risk factors may explain approximately 50% of the variance in plaque burden, they may not explain such a high proportion of the variance in IMT, especially when measured in plaque-free locations. This study aimed to identify individuals with cIMT unexplained by traditional risk factors for future environmental and genetic research. Methods As part of the Northern Manhattan Study, 1,790 stroke-free individuals (mean age 69±9; 60% women; 61% Hispanic, 19% black, 18% white) were assessed for cIMT using B-mode carotid ultrasound. Multiple linear regression models were evaluated: (1) incorporating pre-specified traditional risk factors; and (2) including less traditional factors, such as inflammation biomarkers, adiponectin, homocysteine and kidney function. Standardized cIMT residual scores were constructed to select individuals with unexplained cIMT. Results Mean total cIMT was 0.92±0.09 mm. The traditional model explained 11% of the variance in cIMT. Age (7%), male sex (3%), glucose (<1%), pack years of smoking (<1%), and LDL-cholesterol (<1%) were significant contributing factors. The model including inflammatory biomarkers explained 16% of the variance in cIMT. Adiponectin was the only additional significant contributor to the variance in cIMT. We identified 358 (20%) individuals with cIMT unexplained by the investigated risk factors. Conclusions Vascular risk factors explain only a small proportion of variance in cIMT. Identification of novel genetic and environmental factors underlying unexplained subclinical atherosclerosis is of utmost importance for future effective prevention of vascular disease. PMID:23704105
Mesoscale Gravity Wave Variances from AMSU-A Radiances
NASA Technical Reports Server (NTRS)
Wu, Dong L.
2004-01-01
A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with the minimum detectable value as small as 0.1 K². Preliminary analyses with AMSU-A data show GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.
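The noise-removal step described above amounts to subtracting the independently known instrument noise variance from the observed radiance variance, leaving the GW contribution. A toy numpy sketch (the wave amplitude, NEDT-like noise level, and scan length are all hypothetical; the actual algorithm's noise characterization is considerably more careful):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic along-track radiance scan: a gravity-wave-like oscillation
# (0.5 K amplitude) plus instrument noise (NEDT-like, 0.2 K).
n = 30                                  # samples per scan segment
wave = 0.5 * np.sin(2 * np.pi * np.arange(n) / 10.0)
noise_sd = 0.2
radiance = wave + rng.normal(0.0, noise_sd, n)

# The detrended total variance mixes GW signal and measurement noise;
# subtracting the known noise variance isolates the GW part.
total_var = np.var(radiance - radiance.mean(), ddof=1)
gw_var = total_var - noise_sd**2

print(round(total_var, 3), round(gw_var, 3))
```

With these numbers the recovered GW variance sits near the true value of 0.125 K² (the variance of a 0.5 K sine wave), well above the 0.1 K² detection floor quoted in the abstract.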
Houde, Aimee Lee S; Pitcher, Trevor E
2016-03-01
Full factorial breeding designs are useful for quantifying the amount of additive genetic, nonadditive genetic, and maternal variance that explain phenotypic traits. Such variance estimates are important for examining evolutionary potential. Traditionally, full factorial mating designs have been analyzed using a two-way analysis of variance, which may produce negative variance values and is not suited for unbalanced designs. Mixed-effects models do not produce negative variance values and are suited for unbalanced designs. However, extracting the variance components, calculating significance values, and estimating confidence intervals and/or power values for the components are not straightforward using traditional analytic methods. We introduce fullfact - an R package that addresses these issues and facilitates the analysis of full factorial mating designs with mixed-effects models. Here, we summarize the functions of the fullfact package. The observed data functions extract the variance explained by random and fixed effects and provide their significance. We then calculate the additive genetic, nonadditive genetic, and maternal variance components explaining the phenotype. In particular, we integrate nonnormal error structures for estimating these components for nonnormal data types. The resampled data functions are used to produce bootstrap-t confidence intervals, which can then be plotted using a simple function. We explore the fullfact package through a worked example. This package will facilitate the analyses of full factorial mating designs in R, especially for the analysis of binary, proportion, and/or count data types and for the ability to incorporate additional random and fixed effects and power analyses.
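The variance decomposition behind such analyses can be illustrated without the package. A numpy sketch of a balanced full factorial (North Carolina II) design, using the classical ANOVA expected-mean-squares route rather than the mixed-effects models fullfact actually fits (which avoid negative estimates); all design sizes and true variance components are invented, and the translation V_A = 4·σ²_sire, V_NA = 4·σ²_interaction, maternal = σ²_dam − σ²_sire assumes no epistasis or paternal effects:

```python
import numpy as np

rng = np.random.default_rng(10)

# Hypothetical balanced design: 20 sires x 20 dams, 10 offspring per cross.
# True components (sire 0.25, dam 0.35, interaction 0.10, residual 1.0).
s, d, r = 20, 20, 10
sire = rng.normal(0, 0.25**0.5, s)[:, None, None]
dam = rng.normal(0, 0.35**0.5, d)[None, :, None]
inter = rng.normal(0, 0.10**0.5, (s, d))[:, :, None]
y = sire + dam + inter + rng.normal(0, 1.0, (s, d, r))

# ANOVA mean squares for the balanced two-way random design.
grand = y.mean()
sm, dm, cm = y.mean((1, 2)), y.mean((0, 2)), y.mean(2)
ms_s = d * r * ((sm - grand) ** 2).sum() / (s - 1)
ms_d = s * r * ((dm - grand) ** 2).sum() / (d - 1)
ms_i = r * ((cm - sm[:, None] - dm[None, :] + grand) ** 2).sum() / ((s - 1) * (d - 1))
ms_e = ((y - cm[..., None]) ** 2).sum() / (s * d * (r - 1))
s2_s = (ms_s - ms_i) / (d * r)          # sire variance component
s2_d = (ms_d - ms_i) / (s * r)          # dam variance component
s2_i = (ms_i - ms_e) / r                # sire-by-dam interaction component

# Quantitative-genetic translation of the components.
print(round(4 * s2_s, 2), round(4 * s2_i, 2), round(s2_d - s2_s, 2))
```

The method-of-moments subtraction is exactly what can go negative in unbalanced or small designs, which is the motivation the abstract gives for preferring mixed-effects models.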
Vance Tartar: a unique biologist.
Frankel, J; Whiteley, A H
1993-01-01
Vance Tartar (1911-1991) has made major discoveries concerning morphogenesis, patterning, and nucleocytoplasmic relations in the giant ciliate Stentor coeruleus, mostly by means of hand-grafting using glass microneedles. This article provides a chronological account of the major events of Vance Tartar's life, a brief description of some of his major scientific achievements, and a discussion of his distinctive personality and multifaceted interests. It concludes with a consideration of how his unique style of life and work contributed to his equally unique scientific contributions. PMID:8457795
The liberal illusion of uniqueness.
Stern, Chadly; West, Tessa V; Schmitt, Peter G
2014-01-01
In two studies, we demonstrated that liberals underestimate their similarity to other liberals (i.e., display truly false uniqueness), whereas moderates and conservatives overestimate their similarity to other moderates and conservatives (i.e., display truly false consensus; Studies 1 and 2). We further demonstrated that a fundamental difference between liberals and conservatives in the motivation to feel unique explains this ideological distinction in the accuracy of estimating similarity (Study 2). Implications of the accuracy of consensus estimates for mobilizing liberal and conservative political movements are discussed. PMID:24247730
Regional Variance in Novice Perceptions of Hurricanes
NASA Astrophysics Data System (ADS)
Arthurs, L.; Van Den Broeke, M.
2013-12-01
In order to assess novice understandings of hurricane formation prior to explicit instruction on the topic, a two-question open-ended survey was administered to 337 students enrolled in introductory college-level geoscience courses in Georgia (n=169) and Nebraska (n=168). Respondents explained in their own words how they think hurricanes form and sketched diagrams that complemented their textual descriptions. The authors developed and iteratively refined a coding rubric for the non-segmented data (whole response). Two raters independently applied this rubric to the entire data set with an initial inter-rater reliability of 71%, and of 100% after discussion of the initially mismatched codes. In addition, responses were segmented and analyzed for common content features. Textual and diagrammatic analyses of responses indicated a broad range of student ideas about hurricane formation, from more novice-like to more expert-like. These findings can assist the design of instructional materials, such as lecture tutorials, that address student misconceptions and facilitate conceptual learning.
Variance Function Partially Linear Single-Index Models
LIAN, HENG; LIANG, HUA; CARROLL, RAYMOND J.
2014-01-01
We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function. PMID:25642139
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified in... interest, and (b) Information is promptly made a matter of public record delineating the nature of...
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2011 CFR
2011-07-01
...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2010 CFR
2010-07-01
...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2013 CFR
2013-07-01
...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2012 CFR
2012-07-01
...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2014 CFR
2014-07-01
...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...
A multicomb variance reduction scheme for Monte Carlo semiconductor simulators
Gray, M.G.; Booth, T.E.; Kwan, T.J.T.; Snell, C.M.
1998-04-01
The authors adapt a multicomb variance reduction technique used in neutral particle transport to Monte Carlo microelectronic device modeling. They implement the method in a two-dimensional (2-D) MOSFET device simulator and demonstrate its effectiveness in the study of hot electron effects. The simulations show that the statistical variance of hot electrons is significantly reduced with minimal computational cost. The method is efficient, versatile, and easy to implement in existing device simulators.
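A single comb, the building block of the multicomb scheme, resamples a set of weighted particles into equal-weight copies without biasing the total weight. A numpy sketch of one comb pass (the particle weights are invented, and the full multicomb method averages several combs to reduce the variance of the splitting itself, which this sketch omits):

```python
import numpy as np

def comb_resample(weights, m, rng):
    """One 'comb' pass: resample weighted particles into m equal-weight copies.

    Teeth are placed at evenly spaced positions (with one shared random
    offset) along the cumulative-weight axis; particle i is copied once for
    every tooth falling in its weight interval. The expected number of
    copies of particle i is m*w_i/W, so the scheme is unbiased and the
    total weight W is preserved exactly.
    """
    w = np.asarray(weights, dtype=float)
    W = w.sum()
    teeth = (np.arange(m) + rng.random()) * W / m
    edges = np.concatenate(([0.0], np.cumsum(w)))
    copies = np.histogram(teeth, bins=edges)[0]
    return copies, W / m            # copies per particle, new uniform weight

rng = np.random.default_rng(1)
weights = rng.exponential(1.0, size=50)   # hypothetical particle weights
copies, w_new = comb_resample(weights, m=50, rng=rng)
print(copies.sum(), round(copies.sum() * w_new, 6), round(weights.sum(), 6))
```

Because every surviving copy carries the same weight, high-weight particles are split and low-weight ones are thinned, which is what cuts the statistical variance of rare-event tallies such as hot-electron counts.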
Milton: A New, Unique Pallasite
NASA Technical Reports Server (NTRS)
Jones, R. H.; Wasson, J. T.; Larson, T.; Sharp, Z. D.
2003-01-01
The Milton pallasite was found in Missouri, U.S.A. in October, 2000. It consists of a single stone that originally weighed approximately 2040 g. The chemistry of the olivine and metal phases, plus the oxygen isotope ratios of the olivines, differ significantly from other pallasites, making Milton unique. Unfortunately, the meteorite is heavily fractured and weathered.
Rufus Choate: A Unique Orator.
ERIC Educational Resources Information Center
Markham, Reed
Rufus Choate, a Massachusetts lawyer and orator, has been described as a "unique and romantic phenomenon" in America's history. Born in 1799 in Essex, Massachusetts, Choate graduated from Dartmouth College and attended Harvard Law School. Choate's goal was to be the top in his profession. Daniel Webster was Choate's hero. Choate became well…
Uniquely identifying wheat plant structures
Technology Transfer Automated Retrieval System (TEKTRAN)
Uniquely naming wheat (Triticum aestivum L. em Thell) plant parts is useful for communicating plant development research and the effects of environmental stresses on normal wheat development. Over the past 30+ years, several naming systems have been proposed for wheat shoot, leaf, spike, spikelet, ...
The evolution and consequences of sex-specific reproductive variance.
Mullon, Charles; Reuter, Max; Lehmann, Laurent
2014-01-01
Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. While previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction.
Variance estimation for systematic designs in spatial surveys.
Fewster, R M
2011-12-01
In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation. PMID:21534940
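The premise that systematic designs genuinely reduce variance under spatial trend, which makes over-reporting it so costly, is easy to demonstrate by simulation. A numpy sketch (the 1-D region, linear density trend, and Poisson counts are all invented; the striplet estimator itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 1-D survey region: 100 quadrat positions with a strong
# linear trend in object density, the situation where systematic designs
# help most.
density = np.linspace(2.0, 20.0, 100)
n_quad, reps = 10, 2000

sys_est, ran_est = [], []
for _ in range(reps):
    counts = rng.poisson(density)                 # one realisation
    start = rng.integers(0, 10)
    sys_idx = start + 10 * np.arange(n_quad)      # every 10th quadrat
    ran_idx = rng.choice(100, n_quad, replace=False)  # simple random sample
    sys_est.append(counts[sys_idx].mean() * 100)  # scale up to a total
    ran_est.append(counts[ran_idx].mean() * 100)

# Under trend, the systematic estimator has much lower true variance.
print(round(np.var(ran_est) / np.var(sys_est), 1))
```

An estimator that approximates the systematic design by a random one would report roughly the random-design variance, which is the over-reporting problem the striplet estimator is built to avoid.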
On variance estimate for covariate adjustment by propensity score analysis.
Zou, Baiming; Zou, Fei; Shuster, Jonathan J; Tighe, Patrick J; Koch, Gary G; Zhou, Haibo
2016-09-10
Propensity score (PS) methods have been used extensively to adjust for confounding factors in the statistical analysis of observational data in comparative effectiveness research. There are four major PS-based adjustment approaches: PS matching, PS stratification, covariate adjustment by PS, and PS-based inverse probability weighting. Though covariate adjustment by PS is one of the most frequently used PS-based methods in clinical research, the conventional variance estimation of the treatment effects estimate under covariate adjustment by PS is biased. As Stampf et al. have shown, this bias in variance estimation is likely to lead to invalid statistical inference and could result in erroneous public health conclusions (e.g., food and drug safety and adverse events surveillance). To address this issue, we propose a two-stage analytic procedure to develop a valid variance estimator for the covariate adjustment by PS analysis strategy. We also carry out a simple empirical bootstrap resampling scheme. Both proposed procedures are implemented in an R function for public use. Extensive simulation results demonstrate the bias in the conventional variance estimator and show that both proposed variance estimators offer valid estimates for the true variance, and they are robust to complex confounding structures. The proposed methods are illustrated for a post-surgery pain study. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26999553
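The bootstrap remedy described above works by resampling subjects and rerunning the entire two-stage procedure, so the reported variance reflects the uncertainty from estimating the PS as well as from the outcome model. A numpy-only sketch (the data-generating model, sample size, and effect size are all invented, and a linear outcome model stands in for the paper's more general setting; the authors' R function is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_logistic(X, t, iters=25):
    """Plain Newton-Raphson logistic regression (no regularisation)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))
        W = p * (1 - p)
        b += np.linalg.solve((X * W[:, None]).T @ X, X.T @ (t - p))
    return b

def ps_adjusted_effect(x, t, y):
    """Stage 1: estimate the propensity score; stage 2: regress the outcome
    on treatment and the estimated PS, returning the treatment coefficient."""
    Xd = np.column_stack([np.ones_like(x), x])
    ps = 1 / (1 + np.exp(-Xd @ fit_logistic(Xd, t)))
    Z = np.column_stack([np.ones_like(x), t, ps])
    return np.linalg.lstsq(Z, y, rcond=None)[0][1]

# Simulated observational data: x confounds both treatment and outcome;
# the true treatment effect is 1.0 (all numbers are illustrative).
n = 500
x = rng.normal(size=n)
t = (rng.random(n) < 1 / (1 + np.exp(-x))).astype(float)
y = 1.0 * t + x + rng.normal(size=n)

# Bootstrap the whole two-stage procedure, not just the final regression.
boots = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    boots.append(ps_adjusted_effect(x[idx], t[idx], y[idx]))
print(round(ps_adjusted_effect(x, t, y), 2), round(np.std(boots, ddof=1), 3))
```

The conventional (model-based) standard error from stage 2 alone treats the estimated PS as fixed; the bootstrap spread here does not, which is the source of the discrepancy the paper addresses.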
Analytic variance estimates of Swank and Fano factors
Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank
2014-07-15
Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
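The two metrics are simple moment ratios of the detector's pulse-height distribution: the Swank factor is I = M1²/(M0·M2) and the Fano factor is the variance-to-mean ratio. A numpy sketch (the Poisson output model and sample size are invented, and a bootstrap coefficient of variation stands in for the analytic moment-based error estimates derived in the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical detector outputs: number of quanta per absorbed x ray.
outputs = rng.poisson(100, size=20000).astype(float)

# Swank factor from the raw moments of the (normalised) output
# distribution, so M0 = 1; Fano factor is variance over mean. For a
# Poisson output the Fano factor is ~1 and the Swank factor just below 1.
m0, m1, m2 = 1.0, outputs.mean(), (outputs**2).mean()
swank = m1**2 / (m0 * m2)
fano = outputs.var(ddof=1) / outputs.mean()

# Bootstrap coefficient of variation of the Swank factor, usable as a
# stopping criterion for a Monte Carlo run.
boot = []
for _ in range(300):
    s = rng.choice(outputs, outputs.size)
    boot.append(s.mean()**2 / (s**2).mean())
cv_swank = np.std(boot, ddof=1) / swank
print(round(swank, 4), round(fano, 3))
```

For Poisson(100) outputs the Swank factor is analytically 100²/(100² + 100) ≈ 0.990, so a tight coefficient of variation on its estimate is exactly what a stopping criterion needs.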
Harrup, Mason K; Rollins, Harry W
2013-11-26
An additive comprising a phosphazene compound that has at least two reactive functional groups and at least one capping functional group bonded to phosphorus atoms of the phosphazene compound. One of the at least two reactive functional groups is configured to react with cellulose and the other of the at least two reactive functional groups is configured to react with a resin, such as an amine resin or a polycarboxylic acid resin. The at least one capping functional group is selected from the group consisting of a short chain ether group, an alkoxy group, or an aryloxy group. Also disclosed are an additive-resin admixture, a method of treating a wood product, and a wood product.
Rudolf Keller
2004-08-10
In this project, a concept to improve the performance of aluminum production cells by introducing potlining additives was examined and tested. Boron oxide was added to cathode blocks, and titanium was dissolved in the metal pool; this resulted in the formation of titanium diboride and caused the molten aluminum to wet the carbonaceous cathode surface. Such wetting reportedly leads to operational improvements and extended cell life. In addition, boron oxide suppresses cyanide formation. This final report presents and discusses the results of this project. Substantial economic benefits for the practical implementation of the technology are projected, especially for modern cells with graphitized blocks. For example, with an energy savings of about 5% and an increase in pot life from 1500 to 2500 days, a cost savings of $0.023 per pound of aluminum produced is projected for a 200 kA pot.
Wonnapinij, Passorn; Chinnery, Patrick F; Samuels, David C
2010-04-01
In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference.
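The error-bar idea can be sketched with the textbook result that, for approximately normal data, the standard error of the sample variance is SE(s²) = s²·sqrt(2/(n − 1)); the paper's own standard error accounts for the actual (non-normal) mutation-level distribution, so this normal-theory version is only an illustration of why small n is so damaging:

```python
import numpy as np

rng = np.random.default_rng(5)

def variance_with_error_bar(sample):
    """Sample variance with its normal-theory standard error:
    SE(s^2) = s^2 * sqrt(2 / (n - 1)). Small n gives very wide error bars."""
    s2 = np.var(sample, ddof=1)
    n = len(sample)
    return s2, s2 * np.sqrt(2.0 / (n - 1))

# The practical point: with n = 10 measurements the error bar is roughly
# half the variance itself; with n = 50 it is about a fifth.
small = rng.normal(0.0, 1.0, 10)
large = rng.normal(0.0, 1.0, 50)
for s in (small, large):
    s2, se = variance_with_error_bar(s)
    print(len(s), round(se / s2, 2))    # relative error bar = sqrt(2/(n-1))
```

The relative error bar depends only on n here (sqrt(2/9) ≈ 0.47 versus sqrt(2/49) ≈ 0.20), which matches the abstract's guidance that fewer than 20 measurements are unreliable and more than 50 are needed to resolve a 2-fold variance difference.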
Model-specific tests on variance heterogeneity for detection of potentially interacting genetic loci
2012-01-01
Background Trait variances among genotype groups at a locus are expected to differ in the presence of an interaction between this locus and another locus or environment. A simple maximum test on variance heterogeneity can thus be used to identify potentially interacting single nucleotide polymorphisms (SNPs). Results We propose a multiple contrast test for variance heterogeneity that compares the mean of Levene residuals for each genotype group with their average as an alternative to a global Levene test. We applied this test to a Bogalusa Heart Study dataset to screen for potentially interacting SNPs across the whole genome that influence a number of quantitative traits. A user-friendly implementation of this method is available in the R statistical software package multcomp. Conclusions We show that the proposed multiple contrast test of model-specific variance heterogeneity can be used to test for potential interactions between SNPs and unknown alleles, loci or covariates and provide valuable additional information compared with traditional tests. Although the test is statistically valid for severely unbalanced designs, care is needed in interpreting the results at loci with low allele frequencies. PMID:22808950
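The ingredients of the proposed test are easy to sketch: Levene residuals are absolute deviations from each genotype group's center (the median, in the robust Brown-Forsythe variant), and the multiple contrast compares each group's mean residual with the average over all groups. A numpy illustration (the three-genotype trait data are simulated, with inflated variance in one group; the actual test in the multcomp R package also supplies simultaneous p-values, omitted here):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical quantitative trait for three genotype groups. Group 'aa'
# has inflated variance, as expected under a hidden interaction.
groups = {
    "AA": rng.normal(10, 1.0, 200),
    "Aa": rng.normal(10, 1.0, 200),
    "aa": rng.normal(10, 2.0, 200),
}

# Levene residuals: absolute deviations from the group median.
resid = {g: np.abs(y - np.median(y)) for g, y in groups.items()}

# Multiple contrast: each group's mean residual minus the grand average.
means = {g: r.mean() for g, r in resid.items()}
grand = np.mean(list(means.values()))
contrasts = {g: m - grand for g, m in means.items()}
print({g: round(c, 2) for g, c in contrasts.items()})
```

A clearly positive contrast for one genotype group flags that group's variance as heterogeneous, pointing to the specific genotype driving a potential interaction rather than just a global heterogeneity signal.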
Variances in the etiology of drug use among ethnic groups of adolescents.
Beauvais, Frederick; Oetting, E. R.
2002-01-01
OBJECTIVE: This article reviews drug use trends among ethnic groups of adolescents. It identifies similarities and differences in general, and culturally specific variables in particular, that may account for the differences in drug use rates and the consequences of drug use. METHODS: The authors review trends in drug use among minority and nonminority adolescents over the past 25 years and propose an explanatory model for understanding the factors that affect adolescent drug use. Sources of variance examined include factors common to all adolescents, factors unique to certain ethnic groups, temporal influences, location and demographic variables, developmental and socialization factors, and individual characteristics. RESULTS: Most of the variance in adolescent drug use is due to factors that are common across ethnic groups. CONCLUSION: This finding should not overshadow the importance of addressing ethnocultural issues in designing prevention or treatment interventions, however. Although the major factors leading to drug use may be common across ethnic groups, unique elements within a culture can be used effectively in interventions. Interventions also need to address culturally specific issues in order to gain acceptance within a community. PMID:12435823
Theorems on Positive Data: On the Uniqueness of NMF
Laurberg, Hans; Christensen, Mads Græsbøll; Plumbley, Mark D.; Hansen, Lars Kai; Jensen, Søren Holdt
2008-01-01
We investigate the conditions for which nonnegative matrix factorization (NMF) is unique and introduce several theorems which can determine whether the decomposition is in fact unique or not. The theorems are illustrated by several examples showing the use of the theorems and their limitations. We have shown that corruption of a unique NMF matrix by additive noise leads to a noisy estimation of the noise-free unique solution. Finally, we use a stochastic view of NMF to analyze which characterization of the underlying model will result in an NMF with small estimation errors. PMID:18497868
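The robustness claim, that additive noise on a matrix with a unique NMF yields a noisy estimate of the noise-free solution, can be checked numerically. A numpy sketch using standard Lee-Seung multiplicative updates (the matrix sizes, rank, noise level, and iteration count are all invented; the theorems themselves are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(7)

# Build a matrix with an exact nonnegative factorization V = W H, then
# add small positive noise and check that a rank-3 NMF still recovers V
# closely, mirroring the paper's noisy-estimation result.
W_true = rng.random((20, 3))
H_true = rng.random((3, 15))
V = W_true @ H_true + 0.01 * rng.random((20, 15))

# Plain multiplicative updates for the Frobenius-norm objective.
W = rng.random((20, 3)) + 0.1
H = rng.random((3, 15)) + 0.1
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(round(rel_err, 3))
```

The reconstruction error settles near the injected noise level; note that closeness of W·H to V does not by itself certify uniqueness of W and H, which is exactly what the paper's theorems are for.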
Moreno-García, Miguel; Lanz-Mendoza, Humberto; Córdoba-Aguilar, Alex
2010-03-01
Immune response can be negatively affected by resource limitation, so it is expected that organisms evolve strategies to minimize the impact of this environmental outcome. Phenotypic plasticity in immune response could represent a genetic response to face such situations. We investigated the effects of high and low quality and quantity of food at the larval stage on two important immune components, phenoloxidase activity (PO) and nitric oxide production (NO) measured in adults of the Dengue vector, Aedes aegypti. We reared families to determine the magnitude and pattern of expression of genetic variance, environmental variance and genotype-by-environment interaction (GEI). In addition, we quantified whether there were differences in plastic immune responses in both sexes. Our results indicated additive variance for PO and NO, but rearing environment did not produce differences among individuals. For NO and PO in males, there were large differences among families in plasticity, as indicated by the different slopes produced by each reaction norm. Therefore, there is additive genetic variation in plasticity for NO production and PO activity. One possible interpretation of these results is that different genotypes may be favored to fight pathogens under the different food quality situations. Males and females showed similar overall GEI strategies but there were differences in PO and NO. Males showed a phenotypic correlation between PO and NO, but we did not find genetic correlations between immune parameters in both sexes.
Not Available
1985-06-01
Consafe is now using a computer-aided design and drafting system to adapt its multipurpose support vessels (MSVs) to specific user requirements. The vessels are based on the concept of standard container modules adapted into living quarters, workshops, service units, and offices, with each application for a specific project demanding a unique mix. There is also the need for a constant refurbishment program as service conditions take their toll on the modules. The computer-aided design system is described.
Losdat, Sylvain; Arcese, Peter; Reid, Jane M
2015-09-01
1. Extra-pair reproductive success (EPRS) is a key component of male fitness in socially monogamous systems and could cause selection on female extra-pair reproduction if extra-pair offspring (EPO) inherit high value for EPRS from their successful extra-pair fathers. However, EPRS is itself a composite trait that can be fully decomposed into subcomponents of variation, each of which can be further decomposed into genetic and environmental variances. Yet such decompositions have not been implemented in wild populations, impeding evolutionary inference. 2. We first show that EPRS can be decomposed into the product of three life-history subcomponents: the number of broods available to a focal male to sire EPO, the male's probability of siring an EPO in an available brood and the number of offspring in available broods. This decomposition of EPRS facilitates estimation from field data because all subcomponents can be quantified from paternity data without need to quantify extra-pair matings. Our decomposition also highlights that the number of available broods, and hence population structure and demography, might contribute substantially to variance in male EPRS and fitness. 3. We then used 20 years of complete genetic paternity and pedigree data from wild song sparrows (Melospiza melodia) to partition variance in each of the three subcomponents of EPRS, and thereby estimate their additive genetic variance and heritability conditioned on effects of male coefficient of inbreeding, age and social status. 4. All three subcomponents of EPRS showed some degree of within-male repeatability, reflecting combined permanent environmental and genetic effects. Number of available broods and offspring per brood showed low additive genetic variances. The estimated additive genetic variance in extra-pair siring probability was larger, although the 95% credible interval still converged towards zero. Siring probability also showed inbreeding depression and increased with male age.
Practice reduces task relevant variance modulation and forms nominal trajectory
Osu, Rieko; Morishige, Ken-ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo
2015-01-01
Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task relevant variance modulation as an indication of online feedback control strategies to cope with motor variability. Meanwhile, it has been discussed that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both spatial and temporal domains to elucidate the relative contribution of these control schemas. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories with the reduction of task relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise both on the nominal trajectory and motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of computation seems to be taken over by the feedforward controller around the nominal trajectory with feedback added only when it becomes necessary. PMID:26639942
Detecting pulsars with interstellar scintillation in variance images
NASA Astrophysics Data System (ADS)
Dai, S.; Johnston, S.; Bell, M. E.; Coles, W. A.; Hobbs, G.; Ekers, R. D.; Lenc, E.
2016-11-01
Pulsars are the only cosmic radio sources known to be sufficiently compact to show diffractive interstellar scintillations. Images of the variance of radio signals in both time and frequency can be used to detect pulsars in large-scale continuum surveys using the next generation of synthesis radio telescopes. This technique allows a search over the full field of view while avoiding the need for expensive pixel-by-pixel high time resolution searches. We investigate the sensitivity of detecting pulsars in variance images. We show that variance images are most sensitive to pulsars whose scintillation time-scales and bandwidths are close to the subintegration time and channel bandwidth. Therefore, in order to maximize the detection of pulsars for a given radio continuum survey, it is essential to retain a high time and frequency resolution, allowing us to make variance images sensitive to pulsars with different scintillation properties. We demonstrate the technique with Murchison Widefield Array data and show that variance images can indeed lead to the detection of pulsars by distinguishing them from other radio sources.
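A toy illustration of the variance-image idea (synthetic numbers, not the authors' pipeline): each sky pixel is assigned the variance of its signal across time/frequency subintegrations, so a scintillating pulsar stands out even when its mean flux matches a steady continuum source.

```python
import numpy as np

# Two pixels with the same mean flux: a steady continuum source and a
# "pulsar" whose intensity is modulated by a toy scintillation pattern.
# The variance over the dynamic spectrum separates them cleanly.
rng = np.random.default_rng(0)
t, f = 64, 64                                   # subintegrations x channels
noise = lambda: rng.normal(0.0, 1.0, (t, f))

steady = 5.0 + noise()                          # constant continuum source
modulation = rng.exponential(1.0, (t, f))       # toy scintillation pattern
pulsar = 5.0 * modulation + noise()             # same mean flux, modulated

var_steady = steady.var()
var_pulsar = pulsar.var()
print(var_steady, var_pulsar)   # pulsar pixel has far larger variance
```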
Analysis of Variance Components for Genetic Markers with Unphased Genotypes.
Wang, Tao
2016-01-01
An ANOVA type general multi-allele (GMA) model was proposed in Wang (2014) on analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In one locus and two loci cases, we first derive the least square estimates (LSE) of model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition on the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions at least for independent alleles. As a result, the GMA model could be more beneficial than GLM for detecting allelic interactions.
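The Fisher-style partition that the GMA model is said to preserve can be checked directly in the simplest case: one biallelic locus under Hardy-Weinberg equilibrium, where the genetic variance splits exactly into additive and dominance components (the genotypic values and allele frequencies below are arbitrary illustrative choices).

```python
import numpy as np

# Classical one-locus partition: with genotype values AA=+a, Aa=d, aa=-a and
# allele frequencies p, q under HWE, the total genetic variance equals
# sigma2_A + sigma2_D, with sigma2_A = 2pq*alpha^2, alpha = a + d(q - p),
# and sigma2_D = (2pq*d)^2.
p, q = 0.3, 0.7           # allele frequencies
a, d = 1.0, 0.5           # genotypic values

freqs = np.array([p * p, 2 * p * q, q * q])      # HWE genotype frequencies
vals = np.array([a, d, -a])

mean = freqs @ vals
var_total = freqs @ (vals - mean) ** 2

alpha = a + d * (q - p)                          # average effect of substitution
var_add = 2 * p * q * alpha ** 2                 # additive variance
var_dom = (2 * p * q * d) ** 2                   # dominance variance
print(var_total, var_add + var_dom)              # identical: 0.6489 = 0.6489
```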
Increased spatial variance accompanies reorganization of two continental shelf ecosystems.
Litzow, Michael A; Urban, J Daniel; Laurel, Benjamin J
2008-09-01
Phase transitions between alternate stable states in marine ecosystems lead to disruptive changes in ecosystem services, especially fisheries productivity. We used trawl survey data spanning phase transitions in the North Pacific (Gulf of Alaska) and the North Atlantic (Scotian Shelf) to test for increases in ecosystem variability that might provide early warning of such transitions. In both time series, elevated spatial variability in a measure of community composition (ratio of cod [Gadus sp.] abundance to prey abundance) accompanied transitions between ecosystem states, and variability was negatively correlated with distance from the ecosystem transition point. In the Gulf of Alaska, where the phase transition was apparently the result of a sudden perturbation (climate regime shift), variance increased one year before the transition in mean state occurred. On the Scotian Shelf, where ecosystem reorganization was the result of persistent overfishing, a significant increase in variance occurred three years before the transition in mean state was detected. However, we could not reject the alternate explanation that increased variance may also have simply been inherent to the final stable state in that ecosystem. Increased variance has been previously observed around transition points in models, but rarely in real ecosystems, and our results demonstrate the possible management value in tracking the variance of key parameters in exploited ecosystems.
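A minimal synthetic sketch of the early-warning signal described above (invented numbers, not the trawl-survey data): compute the across-station variance of a community index each survey year and look for a rise before the mean state shifts.

```python
import numpy as np

# Eleven mock survey years, 100 stations each.  The spatial spread of the
# community index rises in 2008, two years before the mean state drops in
# 2010, mimicking elevated variance ahead of a regime shift.
rng = np.random.default_rng(7)
years = np.arange(2000, 2011)
spread = np.where(years < 2008, 0.1, 0.5)    # spatial spread rises in 2008...
mean = np.where(years < 2010, 1.0, 0.2)      # ...before the mean drops in 2010
index = rng.normal(mean[:, None], spread[:, None], (len(years), 100))

spatial_var = index.var(axis=1)              # one variance per survey year
print(np.round(spatial_var, 3))              # elevated in 2008-2009
```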
The ALHAMBRA survey: Estimation of the clustering signal encoded in the cosmic variance
NASA Astrophysics Data System (ADS)
López-Sanjuan, C.; Cenarro, A. J.; Hernández-Monteagudo, C.; Arnalte-Mur, P.; Varela, J.; Viironen, K.; Fernández-Soto, A.; Martínez, V. J.; Alfaro, E.; Ascaso, B.; del Olmo, A.; Díaz-García, L. A.; Hurtado-Gil, Ll.; Moles, M.; Molino, A.; Perea, J.; Pović, M.; Aguerri, J. A. L.; Aparicio-Villegas, T.; Benítez, N.; Broadhurst, T.; Cabrera-Caño, J.; Castander, F. J.; Cepa, J.; Cerviño, M.; Cristóbal-Hornillos, D.; González Delgado, R. M.; Husillos, C.; Infante, L.; Márquez, I.; Masegosa, J.; Prada, F.; Quintana, J. M.
2015-10-01
Aims: The relative cosmic variance (σv) is a fundamental source of uncertainty in pencil-beam surveys and, as a particular case of count-in-cell statistics, can be used to estimate the bias between galaxies and their underlying dark-matter distribution. Our goal is to test the significance of the clustering information encoded in the σv measured in the ALHAMBRA survey. Methods: We measure the cosmic variance of several galaxy populations selected with B-band luminosity at 0.35 ≤ z< 1.05 as the intrinsic dispersion in the number density distribution derived from the 48 ALHAMBRA subfields. We compare the observational σv with the cosmic variance of the dark matter expected from the theory, σv,dm. This provides an estimation of the galaxy bias b. Results: The galaxy bias from the cosmic variance is in excellent agreement with the bias estimated by two-point correlation function analysis in ALHAMBRA. This holds for different redshift bins, for red and blue subsamples, and for several B-band luminosity selections. We find that b increases with the B-band luminosity and the redshift, as expected from previous work. Moreover, red galaxies have a larger bias than blue galaxies, with a relative bias of brel = 1.4 ± 0.2. Conclusions: Our results demonstrate that the cosmic variance measured in ALHAMBRA is due to the clustering of galaxies and can be used to characterise the σv affecting pencil-beam surveys. In addition, it can also be used to estimate the galaxy bias b from a method independent of correlation functions. Based on observations collected at the German-Spanish Astronomical Center, Calar Alto, jointly operated by the Max-Planck-Institut für Astronomie (MPIA) at Heidelberg and the Instituto de Astrofísica de Andalucía (CSIC).
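A hedged count-in-cells sketch of the bias estimate (mock subfields, generated with many more cells than ALHAMBRA's 48 so the test is stable): the relative cosmic variance is the excess dispersion of galaxy counts beyond Poisson shot noise, and its ratio to the dark-matter prediction estimates the bias b.

```python
import numpy as np

# Mock subfields: counts are Poisson around a mean modulated by a Gaussian
# overdensity field whose amplitude is b_true * sigma_dm.  Removing the
# shot-noise term from the count variance recovers sigma_v, and
# b = sigma_v / sigma_dm recovers the input bias.
rng = np.random.default_rng(3)
sigma_dm = 0.05                    # assumed dark-matter relative variance
b_true = 1.5                       # galaxy bias used to generate the mock
n_fields, mean_count = 48_000, 200

delta = rng.normal(0.0, b_true * sigma_dm, n_fields)   # field overdensities
counts = rng.poisson(mean_count * (1.0 + delta).clip(min=0.01))

m = counts.mean()
sigma_v = np.sqrt((counts.var() - m) / m**2)   # shot-noise-corrected dispersion
b_est = sigma_v / sigma_dm
print(b_est)   # close to the input bias of 1.5
```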
Ontogenetic changes in genetic variances of age-dependent plasticity along a latitudinal gradient.
Nilsson-Örtman, V; Rogell, B; Stoks, R; Johansson, F
2015-10-01
The expression of phenotypic plasticity may differ among life stages of the same organism. Age-dependent plasticity can be important for adaptation to heterogeneous environments, but this has only recently been recognized. Whether age-dependent plasticity is a common outcome of local adaptation and whether populations harbor genetic variation in this respect remains largely unknown. To answer these questions, we estimated levels of additive genetic variation in age-dependent plasticity in six species of damselflies sampled from 18 populations along a latitudinal gradient spanning 3600 km. We reared full sib larvae at three temperatures and estimated genetic variances in the height and slope of thermal reaction norms of body size at three points in time during ontogeny using random regression. Our data show that most populations harbor genetic variation in growth rate (reaction norm height) in all ontogenetic stages, but only some populations and ontogenetic stages were found to harbor genetic variation in thermal plasticity (reaction norm slope). Genetic variances in reaction norm height differed among species, while genetic variances in reaction norm slope differed among populations. The slope of the ontogenetic trend in genetic variances of both reaction norm height and slope increased with latitude. We propose that differences in genetic variances reflect temporal and spatial variation in the strength and direction of natural selection on growth trajectories and age-dependent plasticity. Selection on age-dependent plasticity may depend on the interaction between temperature seasonality and time constraints associated with variation in life history traits such as generation length. PMID:25649500
Cochlear Implantation in Unique Pediatric Populations
Hang, Anna X.; Kim, Grace G.; Zdanski, Carlton J.
2012-01-01
Purpose of review Over the last decade, the selection criteria for cochlear implantation have expanded to include children with special auditory, otologic, and medical problems. Included within this expanded group of candidates are those children with auditory neuropathy spectrum disorder, cochleovestibular malformations, cochlear nerve deficiency, associated syndromes, as well as multiple medical and developmental disorders. Definitive indications for cochlear implantation in these unique pediatric populations are in evolution. This review will provide an overview of managing and habilitating hearing loss within these populations with specific focus on cochlear implantation as a treatment option. Recent findings Cochlear implants have been successfully implanted in children within unique populations with variable results. Evaluation for cochlear implant candidacy includes the core components of a full medical, audiologic, and speech and language evaluations. When considering candidacy in these children, additional aspects to consider include disorder specific surgical considerations and child/care-giver counseling regarding reasonable post-implantation outcome expectations. Summary Cochlear implantations are accepted as the standard of care for improving hearing and speech development in children with severe to profound hearing loss. However, children with sensorineural hearing loss who meet established audiologic criteria for cochlear implantation may have unique audiologic, medical, and anatomic characteristics that necessitate special consideration regarding cochlear implantation candidacy and outcome. Individualized pre-operative candidacy and counseling, surgical evaluation, and reasonable post-operative outcome expectations should be taken into account in the management of these children. PMID:23128686
Matsuda, H; Iwaisaki, H
2002-01-01
In the prediction of genetic values and quantitative trait loci (QTLs) mapping via the mixed model method incorporating marker information in animal populations, it is important to model the genetic variance for individuals with an arbitrary pedigree structure. In this study, for a crossed population originated from different genetic groups such as breeds or outbred strains, the variance of additive genetic values for multiple linked QTLs that are contained in a chromosome segment, especially the segregation variance, is investigated assuming the use of marker data. The variance for a finite number of QTLs in one chromosomal segment is first examined for the crossed population with the general pedigree. Then, applying the concept of the expectation of identity-by-descent proportion, an approximation to the mean of the conditional probabilities for the linked QTLs over all loci is obtained, and using it an expression for the variance in the case of an infinite number of linked QTLs marked by flanking markers is derived. It appears that the presented approach can be useful both in segment mapping and in genetic evaluation for crosses with general pedigrees in the population of concern. The calculation of the segregation variance through the current approach is illustrated numerically, using a small data-set.
Impact of Damping Uncertainty on SEA Model Response Variance
NASA Technical Reports Server (NTRS)
Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand
2010-01-01
Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However, these techniques do not account for uncertainties in the system properties. In the present paper, uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.
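A minimal sketch of the propagation step (a single-subsystem power balance with assumed numbers, not the full SEA model of the test article): since subsystem energy scales as E = P_in / (ω η), spread in the damping loss factor η maps directly into spread in the predicted response.

```python
import numpy as np

# Monte Carlo propagation of damping uncertainty through a one-subsystem
# power balance.  The lognormal spread on eta is an assumed stand-in for
# measured damping variability.
rng = np.random.default_rng(11)
p_in = 1.0                  # input power, W (assumed)
omega = 2 * np.pi * 1000.0  # band centre frequency, rad/s (assumed)

eta = rng.lognormal(mean=np.log(0.02), sigma=0.3, size=100_000)
energy = p_in / (omega * eta)

print(energy.mean(), energy.std())   # damping spread inflates response spread
```

Note that because the response is a convex function of η, the mean predicted energy exceeds the energy at the median damping value, one reason point estimates of damping can bias SEA predictions.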
Monte Carlo variance reduction approaches for non-Boltzmann tallies
Booth, T.E.
1992-12-01
Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed.
The positioning algorithm based on feature variance of billet character
NASA Astrophysics Data System (ADS)
Yi, Jiansong; Hong, Hanyu; Shi, Yu; Chen, Hongyang
2015-12-01
In the process of steel billets recognition on the production line, the key problem is how to determine the position of the billet from complex scenes. To solve this problem, this paper presents a positioning algorithm based on the feature variance of billet characters. Using the largest intra-cluster variance recursive method based on multilevel filtering, the billet characters are segmented completely from the complex scenes. Since there are three rows of characters on each steel billet, we can determine whether the connected regions that satisfy the feature-variance condition lie on a straight line, and thereby accurately locate the steel billet. The experimental results demonstrate that the proposed method is competitive with other methods in positioning the characters and also reduces the running time. The algorithm can provide a better basis for the character recognition.
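The "largest intra-cluster variance" segmentation step resembles Otsu's classic histogram criterion, which picks the threshold maximizing between-class variance (equivalently minimizing within-class variance). A sketch under that assumption, on a synthetic bimodal image:

```python
import numpy as np

# Otsu-style threshold selection: scan all grey levels and keep the one
# that maximizes the between-class variance of the histogram.
def otsu_threshold(pixels: np.ndarray) -> int:
    hist = np.bincount(pixels.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
        if between > best_var:
            best_t, best_var = t, between
    return best_t

# Bimodal toy image: dark background plus bright "characters".
rng = np.random.default_rng(5)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(200, 10, 2000)])
img = np.clip(img, 0, 255).astype(np.uint8)
print(otsu_threshold(img))   # lands between the two intensity modes
```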
Saturation of number variance in embedded random-matrix ensembles.
Prakash, Ravi; Pandey, Akhilesh
2016-05-01
We study fluctuation properties of embedded random matrix ensembles of noninteracting particles. For an ensemble of systems of two noninteracting particles, we find that, unlike the spectra of classical random matrices, the correlation functions are nonstationary. In the locally stationary region of spectra, we study the number variance and the spacing distributions. The spacing distributions follow the Poisson statistics, which is a key behavior of uncorrelated spectra. The number variance varies linearly as in the Poisson case for short correlation lengths but a kind of regularization occurs for large correlation lengths, and the number variance approaches saturation values. These results are known in the study of integrable systems but are being demonstrated for the first time in random matrix theory. We conjecture that the interacting particle cases, which exhibit the characteristics of classical random matrices for short correlation lengths, will also show saturation effects for large correlation lengths. PMID:27300898
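The Poisson-like short-range behaviour is easy to reproduce numerically: for an uncorrelated spectrum with unit mean spacing, the number variance Σ²(L) grows linearly as L. (The saturation at large correlation lengths described above requires the correlated embedded-ensemble spectra, which this sketch does not model.)

```python
import numpy as np

# Uncorrelated "spectrum": cumulative sums of i.i.d. exponential spacings,
# i.e. a Poisson process with unit mean spacing.  The variance of the level
# count in a window of length L should equal L.
rng = np.random.default_rng(2)
levels = np.cumsum(rng.exponential(1.0, 2_000_000))

def number_variance(levels, L, n_windows=20_000):
    starts = rng.uniform(levels[0], levels[-1] - L, n_windows)
    counts = np.searchsorted(levels, starts + L) - np.searchsorted(levels, starts)
    return counts.var()

for L in (1.0, 2.0, 4.0):
    print(L, number_variance(levels, L))   # approximately Sigma^2(L) = L
```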
Enhancing area of review capabilities: Implementing a variance program
De Leon, F.
1995-12-01
The Railroad Commission of Texas (RRC) has regulated oil-field injection well operations since issuing its first injection permit in 1938. The Environmental Protection Agency (EPA) granted the RRC primary enforcement responsibility for the Class II Underground Injection Control (UIC) Program in April 1982. At that time, the added level of groundwater protection afforded by an Area of Review (AOR) on previously permitted Class II wells was not deemed necessary or cost effective. A proposed EPA rule change will require AORs to be performed on all pre-primacy Class II wells unless a variance can be justified. A variance methodology has been developed by researchers at the University of Missouri-Rolla in conjunction with the American Petroleum Institute (API). This paper will outline the RRC approach to implementing the AOR variance methodology. The RRC's UIC program tracks 49,256 pre-primacy wells. Approximately 25,598 of these wells have active permits and will be subject to the proposed AOR requirements. The potential workload of performing AORs or granting variances for this many wells makes the development of a Geographic Information System (GIS) imperative. The RRC has recently completed a digitized map of the entire state and has spotted 890,000 of an estimated 1.2 million wells. Integrating this digital state map into a GIS will allow the RRC to tie its many data systems together. Once in place, this integrated data system will be used to evaluate AOR variances for pre-primacy wells on a field-wide basis. It will also reduce the regulatory cost of permitting by allowing the RRC staff to perform AORs or grant variances for the approximately 3,000 new and amended permit applications requiring AORs each year.
The dynamic Allan Variance IV: characterization of atomic clock anomalies.
Galleani, Lorenzo; Tavella, Patrizia
2015-05-01
The number of applications where precise clocks play a key role is steadily increasing, satellite navigation being the main example. Precise clock anomalies are hence critical events, and their characterization is a fundamental problem. When an anomaly occurs, the clock stability changes with time, and this variation can be characterized with the dynamic Allan variance (DAVAR). We obtain the DAVAR for a series of common clock anomalies, namely, a sinusoidal term, a phase jump, a frequency jump, and a sudden change in the clock noise variance. These anomalies are particularly common in space clocks. Our analytic results clarify how the clock stability changes during these anomalies.
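A minimal sketch of the DAVAR idea on a toy white-frequency-noise clock (simulated data, not real clock measurements): compute the ordinary Allan variance inside a sliding window, so that a sudden change in the clock noise variance, one of the anomalies listed above, appears as a step in the tracked stability.

```python
import numpy as np

# Allan variance at unit lag from fractional-frequency samples y:
# AVAR = 0.5 * E[(y_{i+1} - y_i)^2], which equals the noise variance for
# white frequency noise.
def allan_variance(y, tau=1):
    d = y[tau:] - y[:-tau]
    return 0.5 * np.mean(d ** 2)

rng = np.random.default_rng(4)
freq = rng.normal(0.0, 1.0, 4000)   # toy fractional-frequency series
freq[2000:] *= 3.0                  # sudden noise-variance jump mid-run

window = 500
davar = [allan_variance(freq[i:i + window]) for i in range(0, 3500, window)]
print(np.round(davar, 2))   # roughly 1 before the anomaly, roughly 9 after
```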
Entropy, Fisher Information and Variance with Frost-Musulin Potential
NASA Astrophysics Data System (ADS)
Idiodi, J. O. A.; Onate, C. A.
2016-09-01
This study presents the Shannon and Renyi information entropy for both position and momentum space and the Fisher information for the position-dependent mass Schrödinger equation with the Frost-Musulin potential. The analysis of the quantum mechanical probability has been obtained via the Fisher information. The variance information of this potential is equally computed. These quantities control both the chemical and physical properties of some molecular systems. We have observed the behaviour of the Shannon entropy, Renyi entropy, Fisher information and variance with the quantum number n, respectively.
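As a sanity check of how these quantities relate (a Gaussian toy density, not the Frost-Musulin system), the Fisher information F and the variance V of a position density obey the Cramér-Rao bound F·V ≥ 1, with equality in the Gaussian case, while the Shannon entropy of a Gaussian is ½ ln(2πeV).

```python
import numpy as np

# Closed-form Gaussian values: entropy grows with the variance, Fisher
# information is its reciprocal, and their product saturates the
# Cramer-Rao bound.
sigma2 = 0.7                                        # assumed position variance
shannon = 0.5 * np.log(2 * np.pi * np.e * sigma2)   # Gaussian Shannon entropy
fisher = 1.0 / sigma2                               # Gaussian Fisher information
print(shannon, fisher * sigma2)   # F * V = 1 exactly for the Gaussian
```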
Age-specific patterns of genetic variance in Drosophila melanogaster. I. Mortality
Promislow, D.E.L.; Tatar, M.; Curtsinger, J.W.
1996-06-01
Peter Medawar proposed that senescence arises from an age-related decline in the force of selection, which allows late-acting deleterious mutations to accumulate. Subsequent workers have suggested that mutation accumulation could produce an age-related increase in additive genetic variance (V_A) for fitness traits, as recently found in Drosophila melanogaster. Here we report results from a genetic analysis of mortality in 65,134 D. melanogaster. Additive genetic variance for female mortality rates increases from 0.007 in the first week of life to 0.325 by the third week, and then declines to 0.002 by the seventh week. Males show a similar pattern, though total variance is lower than in females. In contrast to a predicted divergence in mortality curves, mortality curves of different genotypes are roughly parallel. Using a three-parameter model, we find significant V_A for the slope and constant term of the curve describing age-specific mortality rates, and also for the rate at which mortality decelerates late in life. These results fail to support a prediction derived from Medawar's "mutation accumulation" theory for the evolution of senescence. However, our results could be consistent with alternative interpretations of evolutionary models of aging. 65 refs., 2 figs., 2 tabs.
Analysis of T-RFLP data using analysis of variance and ordination methods: a comparative study.
Culman, S W; Gauch, H G; Blackwood, C B; Thies, J E
2008-09-01
The analysis of T-RFLP data has developed considerably over the last decade, but there remains a lack of consensus about which statistical analyses offer the best means for finding trends in these data. In this study, we empirically tested and theoretically compared ten diverse T-RFLP datasets derived from soil microbial communities using the more common ordination methods in the literature: principal component analysis (PCA), nonmetric multidimensional scaling (NMS) with Sørensen, Jaccard and Euclidean distance measures, correspondence analysis (CA), detrended correspondence analysis (DCA) and a technique new to T-RFLP data analysis, the Additive Main Effects and Multiplicative Interaction (AMMI) model. Our objectives were i) to determine the distribution of variation in T-RFLP datasets using analysis of variance (ANOVA), ii) to determine the more robust and informative multivariate ordination methods for analyzing T-RFLP data, and iii) to compare the methods based on theoretical considerations. For the 10 datasets examined in this study, ANOVA revealed that the variation from Environment main effects was always small, variation from T-RFs main effects was large, and variation from T-RFxEnvironment (TxE) interactions was intermediate. Larger variation due to TxE indicated larger differences in microbial communities between environments/treatments and thus demonstrated the utility of ANOVA to provide an objective assessment of community dissimilarity. The comparison of statistical methods typically yielded similar empirical results. AMMI, T-RF-centered PCA, and DCA were the most robust methods in terms of producing ordinations that consistently reached a consensus with other methods. In datasets with high sample heterogeneity, NMS analyses with Sørensen and Jaccard distance were the most sensitive for recovery of complex gradients. The theoretical comparison showed that some methods hold distinct advantages for T-RFLP analysis, such as estimations of variation
Further results related to variance past lifetime class & associated orderings and their properties
NASA Astrophysics Data System (ADS)
Mahdy, Mervat
2016-11-01
If the random variable T denotes the lifetime of a unit, then the random variable T(t) = [ t - T ∣ T ≤ t ], for a fixed t > 0, is known as the past lifetime. In this study, we present some new properties of the mean and variance for past lifetime classes (orderings). In addition, we consider an (n - r + 1)-out-of-n system with identical components where it is assumed that the lifetimes of the components are i.i.d. We assume that the system fails before time x, x > 0. Under these conditions, we are interested in studying the variance of the time elapsed since the failure of the components. Several properties of this function are studied and an example is provided. Finally, some applications in economic theory are described with real data.
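A quick Monte Carlo check of the past-lifetime construction (uniform lifetimes are chosen here because the conditional law is exactly computable): if T ~ Uniform(0, θ) and t ≤ θ, then T | T ≤ t is Uniform(0, t), so the past lifetime t - T | T ≤ t has variance t²/12.

```python
import numpy as np

# Condition simulated lifetimes on failure before time t and compare the
# empirical past-lifetime variance with the exact value t^2 / 12.
rng = np.random.default_rng(9)
theta, t = 10.0, 4.0
T = rng.uniform(0.0, theta, 1_000_000)
past = t - T[T <= t]                 # past lifetime, given failure by time t
print(past.var(), t ** 2 / 12)       # both approximately 1.333
```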
The application of analysis of variance (ANOVA) to different experimental designs in optometry.
Armstrong, R A; Eperjesi, F; Gilmartin, B
2002-05-01
Analysis of variance (ANOVA) is the most efficient method available for the analysis of experimental data. Analysis of variance is a method of considerable complexity and subtlety, with many different variations, each of which applies in a particular experimental context. Hence, it is possible to apply the wrong type of ANOVA to data and, therefore, to draw an erroneous conclusion from an experiment. This article reviews the types of ANOVA most likely to arise in clinical experiments in optometry including the one-way ANOVA ('fixed' and 'random effect' models), two-way ANOVA in randomised blocks, three-way ANOVA, and factorial experimental designs (including the varieties known as 'split-plot' and 'repeated measures'). For each ANOVA, the appropriate experimental design is described, a statistical model is formulated, and the advantages and limitations of each type of design discussed. In addition, the problems of non-conformity to the statistical model and determination of the number of replications are considered.
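The simplest design reviewed here, a one-way fixed-effects ANOVA, can be computed by hand from the sum-of-squares partition (synthetic data standing in for, say, three clinical treatment groups):

```python
import numpy as np

# Partition the total sum of squares into between-group and within-group
# parts and form the F statistic F = MS_between / MS_within.
rng = np.random.default_rng(6)
groups = [rng.normal(10.0, 1.0, 30),
          rng.normal(10.0, 1.0, 30),
          rng.normal(13.0, 1.0, 30)]   # one group with a real effect

grand = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_b = len(groups) - 1
df_w = sum(len(g) for g in groups) - len(groups)

F = (ss_between / df_b) / (ss_within / df_w)
print(F)   # far above the ~3.1 critical value of F(2, 87) at alpha = 0.05
```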
Quantitative Genetic Analysis of Temperature Regulation in MUS MUSCULUS. I. Partitioning of Variance
Lacy, Robert C.; Lynch, Carol Becker
1979-01-01
Heritabilities (from parent-offspring regression) and intraclass correlations of full sibs for a variety of traits were estimated from 225 litters of a heterogeneous stock (HS/Ibg) of laboratory mice. Initial variance partitioning suggested different adaptive functions for physiological, morphological and behavioral adjustments with respect to their thermoregulatory significance. Metabolic heat-production mechanisms appear to have reached their genetic limits, with little additive genetic variance remaining. This study provided no genetic evidence that body size has a close directional association with fitness in cold environments, since heritability estimates for weight gain and adult weight were similar and high, whether or not the animals were exposed to cold. Behavioral heat conservation mechanisms also displayed considerable amounts of genetic variability. However, due to strong evidence from numerous other studies that behavior serves an important adaptive role for temperature regulation in small mammals, we suggest that fluctuating selection pressures may have acted to maintain heritable variation in these traits. PMID:17248909
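A sketch of the parent-offspring regression behind such heritability estimates (simulated additive-model families, not the HS/Ibg mouse data): under a purely additive model, the slope of offspring phenotype on midparent phenotype estimates h² directly.

```python
import numpy as np

# Simulate families where phenotype = breeding value + environment, with
# total phenotypic variance 1, and recover h2 as the midparent-offspring
# regression slope.
rng = np.random.default_rng(8)
n, h2 = 50_000, 0.6                      # families; simulated heritability
va, ve = h2, 1.0 - h2                    # additive and environmental variance

sire_a = rng.normal(0, np.sqrt(va), n)   # parental breeding values
dam_a = rng.normal(0, np.sqrt(va), n)
sire = sire_a + rng.normal(0, np.sqrt(ve), n)          # parental phenotypes
dam = dam_a + rng.normal(0, np.sqrt(ve), n)
off = 0.5 * (sire_a + dam_a) + rng.normal(0, np.sqrt(va / 2 + ve), n)

midparent = 0.5 * (sire + dam)
slope = np.cov(midparent, off)[0, 1] / midparent.var()
print(slope)   # recovers roughly the simulated h2 of 0.6
```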
Morrill, Mandy; Bachman, Curt
2013-05-01
Research on family violence has overwhelmingly focused on a patriarchal model, which inaccurately depicts men as exclusively perpetrators and women as exclusively victims of abusive family acts. In addition, empirical research on sibling abuse in families has been significantly absent from the professional literature. This exploratory study used a survey instrument based on an altered version of the Conflict Tactics Scale (CTS) to investigate the question of whether significant gender differences exist in the experience of sibling abuse as a child, either as perpetrator or victim. MANOVAs (multivariate analyses of variance) indicate that there are no gender differences related to surviving sibling abuse or perpetrating emotional and physical abuse, whereas it was found that women had a significantly higher rate of perpetration related to sibling sexual abuse. Specific results related to the gender variance of perpetrators are explored. Limitations as well as implications of these findings on treatment, counselor education, and future research are discussed.
dos Reis, Matheus Costa; Pádua, José Maria Villela; Abreu, Guilherme Barbosa; Guedes, Fernando Lisboa; Balbi, Rodrigo Vieira; de Souza, João Cândido
2014-01-01
This study was carried out to obtain the estimates of genetic variance and covariance components related to intra- and interpopulation in the original populations (C0) and in the third cycle (C3) of reciprocal recurrent selection (RRS) which allows breeders to define the best breeding strategy. For that purpose, the half-sib progenies of intrapopulation (P11 and P22) and interpopulation (P12 and P21) from populations 1 and 2 derived from single-cross hybrids in the 0 and 3 cycles of the reciprocal recurrent selection program were used. The intra- and interpopulation progenies were evaluated in a 10 × 10 triple lattice design in two separate locations. The data for unhusked ear weight (ear weight without husk) and plant height were collected. All genetic variance and covariance components were estimated from the expected mean squares. The breakdown of additive variance into intrapopulation and interpopulation additive deviations (σ²τ) and the covariance between these and their intrapopulation additive effects (CovAτ) revealed a predominance of the dominance effect for unhusked ear weight. Plant height for these components shows that the intrapopulation additive effect explains most of the variation. Estimates for intrapopulation and interpopulation additive genetic variances confirm that populations derived from single-cross hybrids have potential for recurrent selection programs. PMID:25009831
ERIC Educational Resources Information Center
Starns, Jeffrey J.; Rotello, Caren M.; Hautus, Michael J.
2014-01-01
We tested the dual process and unequal variance signal detection models by jointly modeling recognition and source confidence ratings. The 2 approaches make unique predictions for the slope of the recognition memory zROC function for items with correct versus incorrect source decisions. The standard bivariate Gaussian version of the unequal…
The unique biochemistry of methanogenesis.
Deppenmeier, Uwe
2002-01-01
Methanogenic archaea have an unusual type of metabolism because they use H2 + CO2, formate, methylated C1 compounds, or acetate as energy and carbon sources for growth. The methanogens produce methane as the major end product of their metabolism in a unique energy-generating process. The organisms have received much attention because they catalyze the terminal step in the anaerobic breakdown of organic matter under sulfate-limiting conditions and are essential for both the recycling of carbon compounds and the maintenance of the global carbon flux on Earth. Furthermore, methane is an important greenhouse gas that directly contributes to climate change and global warming. Hence, the understanding of the biochemical processes leading to methane formation is of major interest. This review focuses on the metabolic pathways of methanogenesis that are rather unique and involve a number of unusual enzymes and coenzymes. It will be shown how the previously mentioned substrates are converted to CH4 via the CO2-reducing, methylotrophic, or aceticlastic pathway. All catabolic processes finally lead to the formation of a mixed disulfide from coenzyme M and coenzyme B that functions as an electron acceptor of certain anaerobic respiratory chains. Molecular hydrogen, reduced coenzyme F420, or reduced ferredoxin are used as electron donors. The redox reactions as catalyzed by the membrane-bound electron transport chains are coupled to proton translocation across the cytoplasmic membrane. The resulting electrochemical proton gradient is the driving force for ATP synthesis as catalyzed by an A1A0-type ATP synthase. Other energy-transducing enzymes involved in methanogenesis are the membrane-integral methyltransferase and the formylmethanofuran dehydrogenase complex. The former enzyme is a unique, reversible sodium ion pump that couples methyl-group transfer with the transport of Na+ across the membrane. The formylmethanofuran dehydrogenase is a reversible ion pump that catalyzes
40 CFR 124.64 - Appeals of variances.
Code of Federal Regulations, 2014 CFR
2014-07-01
... anticipated to pose an unacceptable risk to human health or the environment because of bioaccumulation... 40 Protection of Environment 22 2014-07-01 2013-07-01 true Appeals of variances. 124.64 Section 124.64 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS...
40 CFR 124.64 - Appeals of variances.
Code of Federal Regulations, 2013 CFR
2013-07-01
... anticipated to pose an unacceptable risk to human health or the environment because of bioaccumulation... 40 Protection of Environment 23 2013-07-01 2013-07-01 false Appeals of variances. 124.64 Section 124.64 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS...
40 CFR 124.64 - Appeals of variances.
Code of Federal Regulations, 2012 CFR
2012-07-01
... anticipated to pose an unacceptable risk to human health or the environment because of bioaccumulation... 40 Protection of Environment 23 2012-07-01 2012-07-01 false Appeals of variances. 124.64 Section 124.64 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS...
The Variance of Intraclass Correlations in Three and Four Level
ERIC Educational Resources Information Center
Hedges, Larry V.; Hedberg, Eric C.; Kuyper, Arend M.
2012-01-01
Intraclass correlations are used to summarize the variance decomposition in populations with multilevel hierarchical structure. There has recently been considerable interest in estimating intraclass correlations from surveys or designed experiments to provide design parameters for planning future large-scale randomized experiments. The large…
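The variance decomposition behind an intraclass correlation can be illustrated for a balanced two-level design. The `icc_oneway` function below is the standard one-way random-effects ANOVA estimator, not necessarily the estimator analyzed in the report, and the simulated data are invented:

```python
import numpy as np

def icc_oneway(data):
    """Estimate the intraclass correlation from a balanced two-level design.

    data: 2-D array, rows = clusters, columns = members within each cluster.
    Uses the one-way random-effects ANOVA estimator:
        ICC = (MSB - MSW) / (MSB + (n - 1) * MSW)
    where n is the cluster size.
    """
    m, n = data.shape
    cluster_means = data.mean(axis=1)
    grand_mean = data.mean()
    msb = n * ((cluster_means - grand_mean) ** 2).sum() / (m - 1)   # between-cluster mean square
    msw = ((data - cluster_means[:, None]) ** 2).sum() / (m * (n - 1))  # within-cluster mean square
    return (msb - msw) / (msb + (n - 1) * msw)

rng = np.random.default_rng(1)
# Simulate 200 clusters of size 10 with a true ICC of 0.2
# (between-cluster variance 0.2, within-cluster variance 0.8).
clusters = rng.normal(0, np.sqrt(0.2), (200, 1)) + rng.normal(0, np.sqrt(0.8), (200, 10))
icc = icc_oneway(clusters)
```

With this sample size the estimate lands close to the true value of 0.2.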
Infinite variance in fermion quantum Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Shi, Hao; Zhang, Shiwei
2016-03-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution, which adds a "bridge link" to the imaginary-time path integral, does not require major modifications to standard algorithms. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
40 CFR 124.62 - Decision on variances.
Code of Federal Regulations, 2010 CFR
2010-07-01
... section 301(i) based on delay in completion of a publicly owned treatment works; (2) After consultation... technology; or (3) Variances under CWA section 316(a) for thermal pollution. (b) The State Director may deny... 124.62 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS...
44 CFR 60.6 - Variances and exceptions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... repair or rehabilitation of historic structures upon a determination that the proposed repair or rehabilitation will not preclude the structure's continued designation as a historic structure and the variance is the minimum necessary to preserve the historic character and design of the structure....
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions. (a... or pathogenic contamination, a treatment lapse or deficiency, or a problem in the operation...
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions. (a... or pathogenic contamination, a treatment lapse or deficiency, or a problem in the operation...
Variance-based uncertainty relations for incompatible observables
NASA Astrophysics Data System (ADS)
Chen, Bin; Cao, Ning-Ping; Fei, Shao-Ming; Long, Gui-Lu
2016-09-01
We formulate uncertainty relations for an arbitrary finite number of incompatible observables. Based on the sum of the variances of the observables, both Heisenberg-type and Schrödinger-type uncertainty relations are provided. These new lower bounds are stronger in most cases than those derived from some existing inequalities. Detailed examples are presented.
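The variance-sum idea can be illustrated on a single qubit. The check below is a well-known identity (the variances of the three Pauli observables in any pure qubit state sum to exactly 2), not the specific bounds derived in the paper:

```python
import numpy as np

# Pauli observables
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def variance(op, psi):
    """Variance <A^2> - <A>^2 of observable `op` in pure state `psi`."""
    exp_a = np.vdot(psi, op @ psi).real
    exp_a2 = np.vdot(psi, op @ op @ psi).real
    return exp_a2 - exp_a ** 2

rng = np.random.default_rng(2)
v = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = v / np.linalg.norm(v)           # random pure qubit state

# Since each Pauli squares to the identity, Var(s_i) = 1 - <s_i>^2, and
# the Bloch vector of a pure state has unit length, so the sum is 3 - 1 = 2.
total = variance(sx, psi) + variance(sy, psi) + variance(sz, psi)
```

Because the sum is bounded below by 2, no qubit state can make all three variances vanish simultaneously, which is the essence of a variance-based uncertainty relation.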
Explaining Common Variance Shared by Early Numeracy and Literacy
ERIC Educational Resources Information Center
Davidse, N. J.; De Jong, M. T.; Bus, A. G.
2014-01-01
How can it be explained that early literacy and numeracy share variance? We specifically tested whether the correlation between four early literacy skills (rhyming, letter knowledge, emergent writing, and orthographic knowledge) and simple sums (non-symbolic and story condition) reduced after taking into account preschool attention control,…
Dominance, Information, and Hierarchical Scaling of Variance Space.
ERIC Educational Resources Information Center
Ceurvorst, Robert W.; Krus, David J.
1979-01-01
A method for computation of dominance relations and for construction of their corresponding hierarchical structures is presented. The link between dominance and variance allows integration of the mathematical theory of information with least squares statistical procedures without recourse to logarithmic transformations of the data. (Author/CTM)
Variances of Plane Parameters Fitted to Range Data.
Franaszek, Marek
2010-01-01
Formulas for variances of plane parameters fitted with Nonlinear Least Squares to point clouds acquired by 3D imaging systems (e.g., LADAR) are derived. Two different error objective functions used in minimization are discussed: the orthogonal and the directional functions. Comparisons of corresponding formulas suggest the two functions can yield different results when applied to the same dataset.
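The directional (vertical-residual) case can be sketched with ordinary linear least squares, where the plane-parameter variances come from the diagonal of σ̂²(XᵀX)⁻¹. This is a generic textbook illustration with invented data, not the paper's derivation for the orthogonal objective:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = rng.uniform(-1, 1, n)
y = rng.uniform(-1, 1, n)
a_true, b_true, c_true, sigma = 0.5, -0.3, 2.0, 0.05
# Simulated range data on the plane z = a*x + b*y + c with sensor noise
z = a_true * x + b_true * y + c_true + rng.normal(0, sigma, n)

# Directional least squares: minimize sum of (z - a*x - b*y - c)^2
X = np.column_stack([x, y, np.ones(n)])
params, residuals, *_ = np.linalg.lstsq(X, z, rcond=None)

# Estimated parameter covariance: sigma_hat^2 * (X^T X)^{-1}
dof = n - 3
sigma2_hat = residuals[0] / dof
cov = sigma2_hat * np.linalg.inv(X.T @ X)
param_vars = np.diag(cov)             # variances of (a, b, c)
```

The orthogonal objective discussed in the paper minimizes point-to-plane distances instead of z-residuals, which is why the two formulations can give different variance estimates on the same cloud.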
Comparison of Turbulent Thermal Diffusivity and Scalar Variance Models
NASA Technical Reports Server (NTRS)
Yoder, Dennis A.
2016-01-01
In this study, several variable turbulent Prandtl number formulations are examined for boundary layers, pipe flow, and axisymmetric jets. The model formulations include simple algebraic relations between the thermal diffusivity and turbulent viscosity as well as more complex models that solve transport equations for the thermal variance and its dissipation rate. Results are compared with available data for wall heat transfer and profile measurements of mean temperature, the root-mean-square (RMS) fluctuating temperature, turbulent heat flux and turbulent Prandtl number. For wall-bounded problems, the algebraic models are found to best predict the rise in turbulent Prandtl number near the wall as well as the log-layer temperature profile, while the thermal variance models provide a good representation of the RMS temperature fluctuations. In jet flows, the algebraic models provide no benefit over a constant turbulent Prandtl number approach. Application of the thermal variance models finds that some significantly overpredict the temperature variance in the plume and most underpredict the thermal growth rate of the jet. The models yield very similar fluctuating temperature intensities in jets from straight pipes and smooth contraction nozzles, in contrast to data that indicate the latter should have noticeably higher values. For the particular low subsonic heated jet cases examined, changes in the turbulent Prandtl number had no effect on the centerline velocity decay.
21 CFR 821.2 - Exemptions and variances.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Exemptions and variances. 821.2 Section 821.2 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL... unnecessary; (3) A complete description of alternative steps that are available, or that the petitioner...
21 CFR 898.14 - Exemptions and variances.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Exemptions and variances. 898.14 Section 898.14 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... performance standard is unnecessary or unfeasible; (3) A complete description of alternative steps that...
Intuitive Analysis of Variance-- A Formative Assessment Approach
ERIC Educational Resources Information Center
Trumpower, David
2013-01-01
This article describes an assessment activity that can show students how much they intuitively understand about statistics, but also alert them to common misunderstandings. How the activity can be used formatively to help improve students' conceptual understanding of analysis of variance is discussed. (Contains 1 figure and 1 table.)
GalaxyCount: Galaxy counts and variance calculator
NASA Astrophysics Data System (ADS)
Bland-Hawthorn, Joss; Ellis, Simon
2013-12-01
GalaxyCount calculates the number and standard deviation of galaxies in a magnitude limited observation of a given area. The methods to calculate both the number and standard deviation may be selected from different options. Variances may be computed for circular, elliptical and rectangular window functions.
Analysis of Variance: What Is Your Statistical Software Actually Doing?
ERIC Educational Resources Information Center
Li, Jian; Lomax, Richard G.
2011-01-01
Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs, mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…
A Visual Model for the Variance and Standard Deviation
ERIC Educational Resources Information Center
Orris, J. B.
2011-01-01
This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
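The square metaphor maps directly onto the computation: each squared deviation is the area of a square with side |x − mean|, the variance is the average of those areas, and the standard deviation is the side length of the average square. A minimal sketch with an invented data set:

```python
import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # mean is 5

# Each squared deviation is the area of a square with side |x - mean|.
deviations = data - data.mean()
square_areas = deviations ** 2

variance = square_areas.mean()        # population variance = average square area
std_dev = np.sqrt(variance)           # side length of the "average square"
```

For this data set the average square has area 4, so the standard deviation is its side length, 2.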
Variance in Math Achievement Attributable to Visual Cognitive Constructs
ERIC Educational Resources Information Center
Oehlert, Jeremy J.
2012-01-01
Previous research has reported positive correlations between math achievement and the cognitive constructs of spatial visualization, working memory, and general intelligence; however, no single study has assessed variance in math achievement attributable to all three constructs, examined in combination. The current study fills this gap in the…
Caution on the Use of Variance Ratios: A Comment.
ERIC Educational Resources Information Center
Shaffer, Juliet Popper
1992-01-01
Several metanalytic studies of group variability use variance ratios as measures of effect size. Problems with this approach are discussed, including limitations of using means and medians of ratios. Mean logarithms and the geometric mean are not adversely affected by the arbitrary choice of numerator. (SLD)
Some Computer Programs for Selected Problems in Analysis of Variance.
ERIC Educational Resources Information Center
Edwards, Lynne K.; Bland, Patricia C.
Selected examples using the statistical packages Statistical Package for the Social Sciences (SPSS), the Statistical Analysis System (SAS), and BMDP are presented to facilitate their use and encourage appropriate uses in: (1) a hierarchical design; (2) a confounded factorial design; and (3) variance component estimation procedures. To illustrate…
Module organization and variance in protein-protein interaction networks
NASA Astrophysics Data System (ADS)
Lin, Chun-Yu; Lee, Tsai-Ling; Chiu, Yi-Yuan; Lin, Yi-Wei; Lo, Yu-Shu; Lin, Chih-Ta; Yang, Jinn-Moon
2015-03-01
A module is a group of closely related proteins that act in concert to perform specific biological functions through protein-protein interactions (PPIs) that occur in time and space. However, the underlying module organization and variance remain unclear. In this study, we collected module templates to infer respective module families, including 58,041 homologous modules in 1,678 species, and PPI families using searches of a complete genomic database. We then derived PPI evolution scores and interface evolution scores to describe the module elements, including core and ring components. Functions of core components were highly correlated with those of essential genes. In comparison with ring components, core proteins/PPIs were conserved across multiple species. Subsequently, protein/module variance of PPI networks confirmed that core components form dynamic network hubs and play key roles in various biological functions. Based on the analyses of gene essentiality, module variance, and gene co-expression, we summarize the observations of module organization and variance as follows: 1) a module consists of core and ring components; 2) core components perform major biological functions and collaborate with ring components to execute certain functions in some cases; 3) core components are more conserved and essential during organizational changes in different biological states or conditions.
40 CFR 52.1390 - Missoula variance provision.
Code of Federal Regulations, 2012 CFR
2012-07-01
... by the Montana Board of Health and Environmental Sciences on June 28, 1991 and submitted by the... 40 Protection of Environment 4 2012-07-01 2012-07-01 false Missoula variance provision. 52.1390 Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR...
40 CFR 52.1390 - Missoula variance provision.
Code of Federal Regulations, 2013 CFR
2013-07-01
... by the Montana Board of Health and Environmental Sciences on June 28, 1991 and submitted by the... 40 Protection of Environment 4 2013-07-01 2013-07-01 false Missoula variance provision. 52.1390 Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR...
40 CFR 52.1390 - Missoula variance provision.
Code of Federal Regulations, 2010 CFR
2010-07-01
... by the Montana Board of Health and Environmental Sciences on June 28, 1991 and submitted by the... 40 Protection of Environment 4 2010-07-01 2010-07-01 false Missoula variance provision. 52.1390 Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR...
40 CFR 52.1390 - Missoula variance provision.
Code of Federal Regulations, 2014 CFR
2014-07-01
... by the Montana Board of Health and Environmental Sciences on June 28, 1991 and submitted by the... 40 Protection of Environment 4 2014-07-01 2014-07-01 false Missoula variance provision. 52.1390 Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR...
40 CFR 52.1390 - Missoula variance provision.
Code of Federal Regulations, 2011 CFR
2011-07-01
... by the Montana Board of Health and Environmental Sciences on June 28, 1991 and submitted by the... 40 Protection of Environment 4 2011-07-01 2011-07-01 false Missoula variance provision. 52.1390 Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR...
40 CFR 142.43 - Disposition of a variance request.
Code of Federal Regulations, 2010 CFR
2010-07-01
... specified in § 142.40(a) such notice shall provide that the variance will be terminated when the system... Administrator that the system has failed to comply with any requirements of a final schedule issued pursuant to... health of persons or upon a finding that the public water system has failed to comply with monitoring...
40 CFR 142.43 - Disposition of a variance request.
Code of Federal Regulations, 2011 CFR
2011-07-01
... specified in § 142.40(a) such notice shall provide that the variance will be terminated when the system... Administrator that the system has failed to comply with any requirements of a final schedule issued pursuant to... health of persons or upon a finding that the public water system has failed to comply with monitoring...
Unbiased Estimates of Variance Components with Bootstrap Procedures
ERIC Educational Resources Information Center
Brennan, Robert L.
2007-01-01
This article provides general procedures for obtaining unbiased estimates of variance components for any random-model balanced design under any bootstrap sampling plan, with the focus on designs of the type typically used in generalizability theory. The results reported here are particularly helpful when the bootstrap is used to estimate standard…
Genetic Variance in the SES-IQ Correlation.
ERIC Educational Resources Information Center
Eckland, Bruce K.
1979-01-01
Discusses questions dealing with genetic aspects of the correlation between IQ and socioeconomic status (SES). Questions include: How does assortative mating affect the genetic variance of IQ? Is the relationship between an individual's IQ and adult SES a causal one? And how can IQ research improve schools and schooling? (Author/DB)
Variance Estimation of Imputed Survey Data. Working Paper Series.
ERIC Educational Resources Information Center
Zhang, Fan; Brick, Mike; Kaufman, Steven; Walter, Elizabeth
Missing data is a common problem in virtually all surveys. This study focuses on variance estimation and its consequences for analysis of survey data from the National Center for Education Statistics (NCES). Methods suggested by C. Sarndal (1992), S. Kaufman (1996), and S. Shao and R. Sitter (1996) are reviewed in detail. In section 3, the…
Exploratory Multivariate Analysis of Variance: Contrasts and Variables.
ERIC Educational Resources Information Center
Barcikowski, Robert S.; Elliott, Ronald S.
The contribution of individual variables to overall multivariate significance in a multivariate analysis of variance (MANOVA) is investigated using a combination of canonical discriminant analysis and Roy-Bose simultaneous confidence intervals. Difficulties with this procedure are discussed, and its advantages are illustrated using examples based…
[ECoG classification based on wavelet variance].
Yan, Shiyu; Liu, Chong; Wang, Hong; Zhao, Haibin
2013-06-01
For a typical electrocorticogram (ECoG)-based brain-computer interface (BCI) system in which the subject's task is to imagine movements of either the left small finger or the tongue, we proposed a feature extraction algorithm using wavelet variance. First, the definition and significance of wavelet variance were introduced and adopted as a feature, based on a discussion of the wavelet transform. Six channels with the most distinctive features were selected from 64 channels for analysis. The EEG data were then decomposed using the db4 wavelet. The wavelet coefficient variances containing the Mu rhythm and Beta rhythm were taken as features based on the ERD/ERS phenomenon. The features were classified linearly with a cross-validation algorithm. The results of off-line analysis showed that high classification accuracies of 90.24% and 93.77% were achieved for the training and test data sets; wavelet variance has the advantages of simplicity and effectiveness and is suitable for feature extraction in BCI research. PMID:23865300
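The feature-extraction idea can be sketched without a wavelet library. The abstract describes a db4 decomposition; the sketch below substitutes a hand-rolled single-level Haar transform (a deliberate simplification) applied recursively, and a synthetic signal standing in for one ECoG channel:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar wavelet transform.

    Returns (approximation, detail) coefficient arrays; signal length
    must be even.
    """
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def wavelet_variance_features(signal, levels=3):
    """Variance of the detail coefficients at each decomposition level,
    collected as a feature vector for a linear classifier."""
    features = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        features.append(detail.var())
    return np.array(features)

rng = np.random.default_rng(4)
t = np.arange(512) / 512
# Toy "channel": a 20 Hz-ish rhythm plus noise, loosely standing in for
# the Mu/Beta band activity the study exploits.
ecog = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.normal(size=512)
features = wavelet_variance_features(ecog, levels=3)
```

Each entry of `features` summarizes signal energy in one frequency band; ERD/ERS shifts that energy between bands, which is what makes the variances discriminative.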
Unique features of space reactors
Buden, D.
1990-01-01
Space reactors are designed to meet a unique set of requirements; they must be sufficiently compact to be launched in a rocket to their operational location, operate for many years without maintenance and servicing, operate in extreme environments, and reject heat by radiation to space. To meet these restrictions, operating temperatures are much greater than in terrestrial power plants, and the reactors tend to have a fast neutron spectrum. Currently, a new generation of space reactor power plants is being developed. The major effort is in the SP-100 program, where the power plant is being designed for seven years of full-power operation without maintenance, at a reactor outlet temperature of 1350 K. 8 refs., 3 figs., 1 tab.
The Probabilities of Unique Events
Khemlani, Sangeet S.; Lotstein, Max; Johnson-Laird, Phil
2012-01-01
Many theorists argue that the probabilities of unique events, even real possibilities such as President Obama's re-election, are meaningless. As a consequence, psychologists have seldom investigated them. We propose a new theory (implemented in a computer program) in which such estimates depend on an intuitive non-numerical system capable only of simple procedures, and a deliberative system that maps intuitions into numbers. The theory predicts that estimates of the probabilities of conjunctions should often tend to split the difference between the probabilities of the two conjuncts. We report two experiments showing that individuals commit such violations of the probability calculus, and corroborating other predictions of the theory, e.g., individuals err in the same way even when they make non-numerical verbal estimates, such as that an event is highly improbable. PMID:23056224
Split liver transplantation: What's unique?
Dalal, Aparna R
2015-09-24
The intraoperative management of split liver transplantation (SLT) has some unique features as compared to routine whole liver transplantation. The liver is unique in its ability to regenerate, which allows one donor organ to be split and thereby confer survival and quality-of-life benefits on two recipients instead of one. Primary graft dysfunction may result from small-for-size syndrome. The graft weight to recipient body weight ratio is significant for both trisegmental and hemiliver grafts. Intraoperative surgical techniques aim to reduce portal hyperperfusion and decrease venous portal pressure. Ischemic preconditioning can be instituted to protect against ischemic reperfusion injury, which impacts graft regeneration. Advancement of the technique of SLT is essential, as the use of split cadaveric grafts expands the donor pool and potentially has an excellent future. PMID:26421261
Variance in the reproductive success of dominant male mountain gorillas.
Robbins, Andrew M; Gray, Maryke; Uwingeli, Prosper; Mburanumwe, Innocent; Kagoda, Edwin; Robbins, Martha M
2014-10-01
Using 30 years of demographic data from 15 groups, this study estimates how harem size, female fertility, and offspring survival may contribute to variance in the siring rates of dominant male mountain gorillas throughout the Virunga Volcano Region. As predicted for polygynous species, differences in harem size were the greatest source of variance in the siring rate, whereas differences in female fertility and offspring survival were relatively minor. Harem size was positively correlated with offspring survival, even after removing all known and suspected cases of infanticide, so the correlation does not seem to reflect differences in the ability of males to protect their offspring. Harem size was not significantly correlated with female fertility, which is consistent with the hypothesis that mountain gorillas have minimal feeding competition. Harem size, offspring survival, and siring rates were not significantly correlated with the proportion of dominant tenures that occurred in multimale groups versus one-male groups; even though infanticide is less likely when those tenures end in multimale groups than one-male groups. In contrast with the relatively small contribution of offspring survival to variance in the siring rates of this study, offspring survival is a major source of variance in the male reproductive success of western gorillas, which have greater predation risks and significantly higher rates of infanticide. If differences in offspring protection are less important among male mountain gorillas than western gorillas, then the relative importance of other factors may be greater for mountain gorillas. Thus, our study illustrates how variance in male reproductive success and its components can differ between closely related species.
Gravity Wave Variances and Propagation Derived from AIRS Radiances
NASA Technical Reports Server (NTRS)
Gong, Jie; Wu, Dong L.; Eckermann, S. D.
2012-01-01
As the first gravity wave (GW) climatology study using nadir-viewing infrared sounders, 50 Atmospheric Infrared Sounder (AIRS) radiance channels are selected to estimate GW variances at pressure levels between 2-100 hPa. The GW variance for each scan in the cross-track direction is derived from radiance perturbations in the scan, independently of adjacent scans along the orbit. Since the scanning swaths are perpendicular to the satellite orbits, which are inclined meridionally at most latitudes, the zonal component of GW propagation can be inferred by differencing the variances derived between the westmost and the eastmost viewing angles. Consistent with previous GW studies using various satellite instruments, monthly mean AIRS variance shows large enhancements over meridionally oriented mountain ranges as well as some islands at winter hemisphere high latitudes. Enhanced wave activities are also found above tropical deep convective regions. GWs prefer to propagate westward above mountain ranges, and eastward above deep convection. AIRS 90 field-of-views (FOVs), ranging from +48 deg. to -48 deg. off nadir, can detect large-amplitude GWs with a phase velocity propagating preferentially at steep angles (e.g., those from orographic and convective sources). The annual cycle dominates the GW variances and the preferred propagation directions for all latitudes. Indication of a weak two-year variation in the tropics is found, which is presumably related to the Quasi-biennial oscillation (QBO). AIRS geometry makes its out-tracks capable of detecting GWs with vertical wavelengths substantially shorter than the thickness of instrument weighting functions. The novel discovery of AIRS capability of observing shallow inertia GWs will expand the potential of satellite GW remote sensing and provide further constraints on the GW drag parameterization schemes in the general circulation models (GCMs).
Individualized additional instruction for calculus
NASA Astrophysics Data System (ADS)
Takata, Ken
2010-10-01
College students enrolling in the calculus sequence have a wide variance in their preparation and abilities, yet they are usually taught from the same lecture. We describe another pedagogical model of Individualized Additional Instruction (IAI) that assesses each student frequently and prescribes further instruction and homework based on the student's performance. Our study compares two calculus classes, one taught with mandatory remedial IAI and the other without. The class with mandatory remedial IAI did significantly better on comprehensive multiple-choice exams, participated more frequently in classroom discussion and showed greater interest in theorem-proving and other advanced topics.
Shenk, T.M.; White, Gary C.; Burnham, K.P.
1998-01-01
Monte Carlo simulations were conducted to evaluate robustness of four tests to detect density dependence, from series of population abundances, to the addition of sampling variance. Population abundances were generated from random walk, stochastic exponential growth, and density-dependent population models. Population abundance estimates were generated with sampling variances distributed as lognormal and constant coefficients of variation (cv) from 0.00 to 1.00. In general, when data were generated under a random walk, Type I error rates increased rapidly for Bulmer's R, Pollard et al.'s, and Dennis and Taper's tests with increasing magnitude of sampling variance for n > 5 yr and all values of process variation. Bulmer's R* test maintained a constant 5% Type I error rate for n > 5 yr and all magnitudes of sampling variance in the population abundance estimates. When abundances were generated from two stochastic exponential growth models (R = 0.05 and R = 0.10), Type I errors again increased with increasing sampling variance; the magnitude of Type I error rates was higher for the slower growing population. Therefore, sampling error inflated Type I error rates, invalidating the tests, for all except Bulmer's R* test. Comparable simulations for abundance estimates generated from a density-dependent growth rate model were conducted to estimate power of the tests. Type II error rates were influenced by the relationship of initial population size to carrying capacity (K), length of time series, as well as sampling error. Given the inflated Type I error rates for all but Bulmer's R*, power was overestimated for the remaining tests, resulting in density dependence being detected more often than it existed. Population abundances of natural populations are almost exclusively estimated rather than censused, assuring sampling error. Therefore, because these tests have been shown to be either invalid when only sampling variance occurs in the population abundances (Bulmer's R
Cernicchiaro, N; Renter, D G; Xiang, S; White, B J; Bello, N M
2013-06-01
Variability in ADG of feedlot cattle can affect profits, thus making overall returns more unstable. Hence, knowledge of the factors that contribute to heterogeneity of variances in animal performance can help feedlot managers evaluate risks and minimize profit volatility when making managerial and economic decisions in commercial feedlots. The objectives of the present study were to evaluate heteroskedasticity, defined as heterogeneity of variances, in ADG of cohorts of commercial feedlot cattle, and to identify cattle demographic factors at feedlot arrival as potential sources of variance heterogeneity, accounting for cohort- and feedlot-level information in the data structure. An operational dataset compiled from 24,050 cohorts from 25 U. S. commercial feedlots in 2005 and 2006 was used for this study. Inference was based on a hierarchical Bayesian model implemented with Markov chain Monte Carlo, whereby cohorts were modeled at the residual level and feedlot-year clusters were modeled as random effects. Forward model selection based on deviance information criteria was used to screen potentially important explanatory variables for heteroskedasticity at cohort- and feedlot-year levels. The Bayesian modeling framework was preferred as it naturally accommodates the inherently hierarchical structure of feedlot data whereby cohorts are nested within feedlot-year clusters. Evidence for heterogeneity of variance components of ADG was substantial and primarily concentrated at the cohort level. Feedlot-year specific effects were, by far, the greatest contributors to ADG heteroskedasticity among cohorts, with an estimated ∼12-fold change in dispersion between most and least extreme feedlot-year clusters. In addition, identifiable demographic factors associated with greater heterogeneity of cohort-level variance included smaller cohort sizes, fewer days on feed, and greater arrival BW, as well as feedlot arrival during summer months. These results support that
Efficiency control in large-scale genotyping using analysis of variance.
Spijker, Geert T; Bruinenberg, Marcel; te Meerman, Gerard J
2005-01-01
The efficiency of the genotyping process is determined by many simultaneous factors. In actual genotyping, a production run is often preceded by small-scale experiments to find optimal conditions. We propose to use statistical analysis of production run data as well, to gain insight into factors important for the outcome of genotyping. As an example, we show that analysis of variance (ANOVA) applied to the first-pass results of a genetic study reveals important determinants of genotyping success. The largest factor limiting genotyping appeared to be interindividual variation among DNA samples, explaining 20% of the variance, and a smaller reaction volume, sizing failure, and differences among markers all explained approximately 10%. Other potentially important factors, such as sample position within the plate and reusing electrophoresis matrix, appeared to be of minor influence. About 55% of the total variance could be explained by systematic factors. These results show that ANOVA can provide valuable feedback to improve genotyping efficiency. We propose to adjust genotype production runs using principles of experimental design in order to maximize genotyping efficiency at little additional cost.
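The percentages quoted in this abstract (20% for interindividual DNA variation, roughly 10% each for reaction volume, sizing failure, and markers) are between-factor shares of the total sum of squares, the quantity a one-way ANOVA decomposition produces. A minimal sketch of that decomposition (eta-squared) follows; the data and group labels are illustrative, not the study's genotyping results:

```python
import numpy as np

def eta_squared(values, groups):
    """Between-group share of the total sum of squares (eta-squared):
    the 'percent of variance explained' by a single factor."""
    values = np.asarray(values, dtype=float)
    groups = np.asarray(groups)
    grand = values.mean()
    ss_total = ((values - grand) ** 2).sum()
    ss_between = sum(
        values[groups == k].size * (values[groups == k].mean() - grand) ** 2
        for k in np.unique(groups)
    )
    return ss_between / ss_total
```

For a multi-factor production run as in the study, the same sums-of-squares logic is applied per factor in a full ANOVA model; this sketch shows only the single-factor case.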
The Efficiency of Split Panel Designs in an Analysis of Variance Model.
Liu, Xin; Wang, Wei-Guo; Liu, Hai-Jun
2016-01-01
We consider the efficiency of split panel designs in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in all samples, so as to minimize the variances of best linear unbiased estimators of linear combinations of the parameters. An orthogonal matrix is constructed to obtain a manageable expression for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of the interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from the split panel design can be quite substantial. We further consider the efficiency of a split panel design, given a budget, and transform the problem into a constrained nonlinear integer program. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer program. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985 in the Netherlands, and efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447
A Simple, General Result for the Variance of Substitution Number in Molecular Evolution
Houchmandzadeh, Bahram; Vallade, Marcel
2016-01-01
The number of substitutions (of nucleotides, amino acids, etc.) that take place during the evolution of a sequence is a stochastic variable of fundamental importance in the field of molecular evolution. Although the mean number of substitutions during molecular evolution of a sequence can be estimated for a given substitution model, no simple solution exists for the variance of this random variable. We show in this article that the computation of the variance is as simple as that of the mean number of substitutions for both short and long times. Apart from its fundamental importance, this result can be used to investigate the dispersion index R, that is, the ratio of the variance to the mean substitution number, which is of prime importance in the neutral theory of molecular evolution. By investigating large classes of substitution models, we demonstrate that although R≥1, to obtain R significantly larger than unity necessitates in general additional hypotheses on the structure of the substitution model. PMID:27189545
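The dispersion index R discussed in this abstract is simply the ratio of the variance to the mean of the substitution count. A minimal sketch follows, using a simulated homogeneous Poisson substitution process (for which R = 1) rather than any particular substitution model from the paper:

```python
import numpy as np

def dispersion_index(counts):
    """Dispersion index R: variance-to-mean ratio of substitution numbers."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# Under a homogeneous Poisson substitution process R = 1; as the abstract
# notes, obtaining R significantly larger than 1 requires additional
# structure in the substitution model.
rng = np.random.default_rng(0)
R = dispersion_index(rng.poisson(lam=5.0, size=200_000))
```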
Non-negative least-squares variance component estimation with application to GPS time series
NASA Astrophysics Data System (ADS)
Amiri-Simkooei, A. R.
2016-05-01
The problem of negative variance components can occur in many geodetic applications. It can be avoided if non-negativity constraints on the variance components (VCs) are introduced into the stochastic model. Based on the standard non-negative least-squares (NNLS) theory, this contribution presents the method of non-negative least-squares variance component estimation (NNLS-VCE). The method is easy to understand, simple to implement, and efficient in practice. NNLS-VCE is then applied to the coordinate time series of permanent GPS stations to simultaneously estimate the amplitudes of different noise components such as white noise, flicker noise, and random walk noise. If a noise model is unlikely to be present, its amplitude is automatically estimated to be zero. The results obtained from 350 permanent GPS stations indicate that the noise characteristics of the GPS time series are well described by a combination of white noise and flicker noise: all time series contain positive noise amplitudes for white and flicker noise. In addition, around two-thirds of the series contain random walk noise, with small average amplitudes of 0.16, 0.13, and 0.45 mm/yr^{1/2} for the north, east, and up components, respectively. Also, about half of the positive estimated amplitudes of random walk noise are statistically significant, indicating that one-third of the total time series have significant random walk noise.
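A toy sketch of the non-negativity idea behind NNLS-VCE: write the observation covariance as a non-negative combination of known cofactor matrices (white noise, random walk, ...) and fit the coefficients with SciPy's NNLS solver. This illustrates only the constraint, not the full LS-VCE machinery of the paper; the cofactor matrices and the empirical covariance below are constructed for the example:

```python
import numpy as np
from scipy.optimize import nnls

def nnls_vce(Q_emp, cofactor_mats):
    """Fit Q_emp ~ sum_k sigma_k^2 * Q_k with sigma_k^2 >= 0, by
    non-negative least squares on the vectorized matrices."""
    A = np.column_stack([Q.ravel() for Q in cofactor_mats])
    sigmas, _ = nnls(A, Q_emp.ravel())
    return sigmas

# Illustrative cofactor matrices: identity for white noise, min(i, j)
# for a random walk process.
n = 20
Q_white = np.eye(n)
Q_rw = np.minimum.outer(np.arange(1, n + 1), np.arange(1, n + 1)).astype(float)
```

When a noise component is absent from the data, its coefficient is clamped to exactly zero by the solver, matching the behavior described in the abstract.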
Short communication: variance estimates among herds stratified by individual herd heritability.
Dechow, C D; Norman, H D; Pelensky, C A
2008-04-01
The objectives of this study were to compare (co)variance parameter estimates among subsets of data that were pooled from herds with high, medium, or low individual herd heritability estimates and to compare individual herd heritability estimates to REML heritability estimates for pooled data sets. A regression model was applied to milk yield, fat yield, protein yield, and somatic cell score (SCS) records from 20,902 herds to generate individual-herd heritability estimates. Herds representing the 5th percentile or less (P5), 47th through the 53rd percentile (P50), and the 95th percentile or higher (P95) for herd heritability were randomly selected. Yield or SCS from the selected herds were pooled for each percentile group and treated as separate traits. Records from P5, P50, and P95 were then analyzed with a 3-trait animal model. Heritability estimates were 23, 31, 26, and 8% higher in P95 than in P5 for milk yield, fat yield, protein yield, and SCS, respectively. The regression techniques successfully stratified individual herds by heritability, and additive genetic variance increased progressively, whereas permanent environmental variance decreased progressively as herd heritability increased.
On the measurement of frequency and of its sample variance with high-resolution counters
Rubiola, Enrico
2005-05-15
A frequency counter measures the input frequency ν averaged over a suitable time τ, versus the reference clock. High resolution is achieved by interpolating the clock signal. Further increased resolution is obtained by averaging multiple highly overlapped frequency measurements. In the presence of additive white noise or white phase noise, the squared uncertainty improves from σ_ν² ∝ 1/τ² to σ_ν² ∝ 1/τ³. Surprisingly, when a file of contiguous data is fed into the formula of the two-sample (Allan) variance σ_y²(τ) = E{(1/2)(y_{k+1} − y_k)²} of the fractional frequency fluctuation y, the result is the modified Allan variance mod σ_y²(τ). But if a sufficient number of contiguous measures are averaged in order to get a longer τ and the data are fed into the same formula, the result is the (nonmodified) Allan variance. Of course, interpretation mistakes are around the corner if the counter's internal process is not well understood. The typical domain of interest is the short-term stability measurement of oscillators.
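The two-sample formula above, including the averaging of contiguous measures into a longer τ that the abstract discusses, can be sketched directly; the data and the averaging factor m here are illustrative:

```python
import numpy as np

def allan_variance(y, m=1):
    """Two-sample (Allan) variance of fractional frequency data y.
    Contiguous measures are first averaged in groups of m, emulating a
    longer measurement time tau = m * tau0, then fed into
    sigma_y^2(tau) = E{(1/2)(y_{k+1} - y_k)^2}."""
    y = np.asarray(y, dtype=float)
    n = y.size // m
    ym = y[: n * m].reshape(n, m).mean(axis=1)  # average m contiguous measures
    return 0.5 * np.mean(np.diff(ym) ** 2)
```

Note that this simple sketch assumes the averaged groups are statistically independent; the abstract's point is precisely that a counter's internal overlapped averaging changes which variance (modified or nonmodified) this formula actually yields.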
Estimating synaptic parameters from mean, variance, and covariance in trains of synaptic responses.
Scheuss, V; Neher, E
2001-01-01
Fluctuation analysis of synaptic transmission using the variance-mean approach has been restricted in the past to steady-state responses. Here we extend this method to short repetitive trains of synaptic responses, during which the response amplitudes are not stationary. We consider intervals between trains, long enough so that the system is in the same average state at the beginning of each train. This allows analysis of ensemble means and variances for each response in a train separately. Thus, modifications in synaptic efficacy during short-term plasticity can be attributed to changes in synaptic parameters. In addition, we provide practical guidelines for the analysis of the covariance between successive responses in trains. Explicit algorithms to estimate synaptic parameters are derived and tested by Monte Carlo simulations on the basis of a binomial model of synaptic transmission, allowing for quantal variability, heterogeneity in the release probability, and postsynaptic receptor saturation and desensitization. We find that the combined analysis of variance and covariance is advantageous in yielding an estimate for the number of release sites, which is independent of heterogeneity in the release probability under certain conditions. Furthermore, it allows one to calculate the apparent quantal size for each response in a sequence of stimuli. PMID:11566771
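The binomial model underlying the variance-mean approach gives mean = Npq and variance = Np(1−p)q², so variance is a parabola in the mean: Var = q·Mean − Mean²/N. A minimal fit of that parabola, ignoring quantal variability, release-probability heterogeneity, and receptor saturation (all of which the paper treats), recovers the quantal size q and the number of release sites N; the data below are synthetic:

```python
import numpy as np

def fit_variance_mean(means, variances):
    """Fit Var = q*Mean - Mean^2/N to (mean, variance) pairs recorded
    at different release probabilities; returns quantal size q and
    number of release sites N."""
    means = np.asarray(means, dtype=float)
    A = np.column_stack([means, -means ** 2])
    (q, inv_N), *_ = np.linalg.lstsq(A, np.asarray(variances, dtype=float),
                                     rcond=None)
    return q, 1.0 / inv_N
```

In the paper's non-stationary setting, the ensemble mean and variance are computed separately for each response position in the train, and the covariance between successive responses adds the information needed to separate N from release-probability heterogeneity.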
ADAPTIVE INCREASE IN FORCE VARIANCE DURING FATIGUE IN TASKS WITH LOW REDUNDANCY
Singh, Tarkeshwar; Varadhan, SKM; Zatsiorsky, Vladimir M.; Latash, Mark L.
2010-01-01
We tested a hypothesis that fatigue of an element (a finger) leads to an adaptive neural strategy that involves an increase in force variability in the other finger(s) and an increase in co-variation of commands to fingers to keep total force variability relatively unchanged. We tested this hypothesis using a system with small redundancy (two fingers) and a marginally redundant system (with an additional constraint related to the total moment of force produced by the fingers). The subjects performed isometric accurate rhythmic force production tasks by the index (I) finger and two fingers (I and middle, M) pressing together before and after a fatiguing exercise by the I finger. Fatigue led to a large increase in force variance in the I-finger task and a smaller increase in the IM-task. We quantified two components of variance in the space of hypothetical commands to fingers, finger modes. Under both stable and unstable conditions, there was a large increase in the variance component that did not affect total force and a much smaller increase in the component that did. This resulted in an increase in an index of the force-stabilizing synergy. These results indicate that marginal redundancy is sufficient to allow the central nervous system to use adaptive increase in variability to shield important variables from effects of fatigue. We offer an interpretation of these results based on a recent development of the equilibrium-point hypothesis known as the referent configuration hypothesis. PMID:20849913
NASA Astrophysics Data System (ADS)
Bielewicz, P.; Wandelt, B. D.; Banday, A. J.
2013-02-01
We present a method for the computation of the variance of cosmic microwave background (CMB) temperature maps on azimuthally symmetric patches using a fast convolution approach. As an example of the application of the method, we show results for the search for concentric rings with unusual variance in the 7-year Wilkinson Microwave Anisotropy Probe (WMAP) data. We re-analyse claims concerning the unusual variance profile of rings centred at two locations on the sky that have recently drawn special attention in the context of the conformal cyclic cosmology scenario proposed by Penrose. We extend this analysis to rings with larger radii and centred on other points of the sky. Using the fast convolution technique enables us to perform this search with higher resolution and a wider range of radii than in previous studies. We show that for one of the two special points rings with radii larger than 10° have systematically lower variance in comparison to the concordance Λ cold dark matter model predictions. However, we show that this deviation is caused by the multipoles up to order ℓ = 7. Therefore, the deficit of power for concentric rings with larger radii is yet another manifestation of the well-known anomalous CMB distribution on large angular scales. Furthermore, low-variance rings can be easily found centred on other points in the sky. In addition, we also show the results of a search for extremely high-variance rings. As for the low-variance rings, some anomalies seem to be related to the anomalous distribution of the low-order multipoles of the WMAP CMB maps. As such our results are not consistent with the conformal cyclic cosmology scenario.
Astronomy Outreach for Large and Unique Audiences
NASA Astrophysics Data System (ADS)
Lubowich, D.; Sparks, R. T.; Pompea, S. M.; Kendall, J. S.; Dugan, C.
2013-04-01
In this session, we discuss different approaches to reaching large audiences. In addition to star parties and astronomy events, the audiences for some of the events include music concerts or festivals, sick children and their families, minority communities, American Indian reservations, and tourist sites such as the National Mall. The goal is to bring science directly to the public—to people who attend astronomy events and to people who do not come to star parties, science museums, or science festivals. These programs allow the entire community to participate in astronomy activities to enhance the public appreciation of science. These programs attract large enthusiastic crowds often with young children participating in these family learning experiences. The public will become more informed, educated, and inspired about astronomy and will also be provided with information that will allow them to continue to learn after this outreach activity. Large and unique audiences often have common problems, and their solutions and the lessons learned will be presented. Interaction with the participants in this session will provide important community feedback used to improve astronomy outreach for large and unique audiences. New ways to expand astronomy outreach to new large audiences will be discussed.
The Production of Temperature and Salinity Variance and Covariance: Implications for Mixing
NASA Astrophysics Data System (ADS)
Schanze, Julian J.
Large-scale thermal forcing and freshwater fluxes play an essential role in setting temperature and salinity in the ocean. A number of recent estimates of the global oceanic freshwater balance as well as the global oceanic surface net heat flux are used to investigate the effects of heat and freshwater forcing at the ocean surface. Such forcing induces changes in both density and density-compensated temperature and salinity changes ('spice'). The ratio of the relative contributions of haline and thermal forcing in the mixed layer is maintained by large-scale surface fluxes, leading to important consequences for mixing in the ocean interior. In a stratified ocean, mixing processes can be either along lines of constant density (isopycnal) or across those lines (diapycnal). The contribution of these processes to the total mixing rate in the ocean can be estimated from the large-scale forcing by evaluating the production of thermal variance, salinity variance and temperature-salinity covariance. Here, I use new estimates of surface fluxes to evaluate these terms and combine them to generate estimates of the production of density and spice variance under the assumption of a linear equation of state. As a consequence, it is possible to estimate the relative importance of isopycnal and diapycnal mixing in the ocean. While isopycnal and diapycnal processes occur on very different length scales, I find that the surface-driven production of density and spice variance requires an approximate equipartition between isopycnal and diapycnal mixing in the ocean interior. In addition, consideration of the full nonlinear equation of state reveals that surface fluxes require an apparent buoyancy gain (expansion) of the ocean, which allows an estimate of the amount of contraction on mixing due to cabbeling in the ocean interior (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs mit.edu)
Shared genetic variance between obesity and white matter integrity in Mexican Americans
Spieker, Elena A.; Kochunov, Peter; Rowland, Laura M.; Sprooten, Emma; Winkler, Anderson M.; Olvera, Rene L.; Almasy, Laura; Duggirala, Ravi; Fox, Peter T.; Blangero, John; Glahn, David C.; Curran, Joanne E.
2015-01-01
Obesity is a chronic metabolic disorder that may also lead to reduced white matter integrity, potentially due to shared genetic risk factors. Genetic correlation analyses were conducted in a large cohort of Mexican American families in San Antonio (N = 761, 58% females, ages 18–81 years; 41.3 ± 14.5) from the Genetics of Brain Structure and Function Study. Shared genetic variance was calculated between measures of adiposity [(body mass index (BMI; kg/m2) and waist circumference (WC; in)] and whole-brain and regional measurements of cerebral white matter integrity (fractional anisotropy). Whole-brain average and regional fractional anisotropy values for 10 major white matter tracts were calculated from high angular resolution diffusion tensor imaging data (DTI; 1.7 × 1.7 × 3 mm; 55 directions). Additive genetic factors explained intersubject variance in BMI (heritability, h2 = 0.58), WC (h2 = 0.57), and FA (h2 = 0.49). FA shared significant portions of genetic variance with BMI in the genu (ρG = −0.25), body (ρG = −0.30), and splenium (ρG = −0.26) of the corpus callosum, internal capsule (ρG = −0.29), and thalamic radiation (ρG = −0.31) (all p's = 0.043). The strongest evidence of shared variance was between BMI/WC and FA in the superior fronto-occipital fasciculus (ρG = −0.39, p = 0.020; ρG = −0.39, p = 0.030), which highlights region-specific variation in neural correlates of obesity. This may suggest that increase in obesity and reduced white matter integrity share common genetic risk factors. PMID:25763009
A unique element resembling a processed pseudogene.
Robins, A J; Wang, S W; Smith, T F; Wells, J R
1986-01-01
We describe a unique DNA element with structural features of a processed pseudogene but with important differences. It is located within an 8.4-kilobase pair region of chicken DNA containing five histone genes, but it is not related to these genes. The presence of terminal repeats, an open reading frame (and stop codon), polyadenylation/processing signal, and a poly(A) rich region about 20 bases 3' to this, together with a lack of 5' promoter motifs all suggest a processed pseudogene. However, no parent gene can be detected in the genome by Southern blotting experiments and, in addition, codon boundary values and mid-base correlations are not consistent with a protein coding region of a eukaryotic gene. The element was detected in DNA from different chickens and in peafowl, but not in quail, pheasant, or turkey.
Variable variance Preisach model for multilayers with perpendicular magnetic anisotropy
NASA Astrophysics Data System (ADS)
Franco, A. F.; Gonzalez-Fuentes, C.; Morales, R.; Ross, C. A.; Dumas, R.; Åkerman, J.; Garcia, C.
2016-08-01
We present a variable variance Preisach model that fully accounts for the different magnetization processes of a multilayer structure with perpendicular magnetic anisotropy by adjusting the evolution of the interaction variance as the magnetization changes. We successfully compare in a quantitative manner the results obtained with this model to experimental hysteresis loops of several [CoFeB/Pd]_n multilayers. The effect of the number of repetitions and the thicknesses of the CoFeB and Pd layers on the magnetization reversal of the multilayer structure is studied, and it is found that many of the observed phenomena can be attributed to an increase of the magnetostatic interactions and subsequent decrease of the size of the magnetic domains. Increasing the CoFeB thickness leads to the disappearance of the perpendicular anisotropy, and a minimum thickness of the Pd layer is necessary to achieve an out-of-plane magnetization.
Fidelity between Gaussian mixed states with quantum state quadrature variances
NASA Astrophysics Data System (ADS)
Hai-Long, Zhang; Chun, Zhou; Jian-Hong, Shi; Wan-Su, Bao
2016-04-01
In this paper, starting from the original definition of fidelity for pure states, we first give a well-defined expansion of the fidelity between two Gaussian mixed states. It is related to the variances of the output and input states in quantum information processing. It is convenient for quantifying quantum teleportation (quantum cloning) experiments, since the variances of the input (output) state are measurable. Furthermore, we show that the fidelity of a pure input state is smaller than the fidelity of a mixed input state in the same quantum information process. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002) and the Foundation of Science and Technology on Information Assurance Laboratory (Grant No. KJ-14-001).
Variance reduction methods applied to deep-penetration problems
Cramer, S.N.
1984-01-01
All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course.
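As a concrete illustration of why variance reduction is mandatory for deep penetration, consider a toy one-dimensional problem: estimating the probability exp(−d) that an exponential free path exceeds an optical depth d. Analog Monte Carlo almost never scores for large d; importance sampling from a stretched exponential with likelihood-ratio weights recovers the answer with modest sample sizes. The toy model and parameter names are illustrative and not from the course material itself:

```python
import numpy as np

def penetration_prob(d, r=0.2, n=100_000, seed=1):
    """Importance-sampling estimate of P(path > d) for a unit-rate
    exponential free path (exact answer exp(-d)). Paths are drawn from
    a biased exponential with rate r < 1, and each score carries the
    likelihood-ratio weight f(x)/g(x)."""
    rng = np.random.default_rng(seed)
    x = rng.exponential(1.0 / r, n)        # biased paths, rate r
    w = np.exp(-x) / (r * np.exp(-r * x))  # weight = true pdf / biased pdf
    return np.mean(w * (x > d))
```

With d = 15, the analog estimator would need on the order of 10^7 histories per score, while the biased estimator scores on a few percent of histories; production codes build the same idea into exponential transform and splitting/roulette schemes.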
Climate variance influence on the non-stationary plankton dynamics.
Molinero, Juan Carlos; Reygondeau, Gabriel; Bonnet, Delphine
2013-08-01
We examined plankton responses to climate variance by using high temporal resolution data from 1988 to 2007 in the Western English Channel. Climate variability modified both the magnitude and length of the seasonal signal of sea surface temperature, as well as the timing and depth of the thermocline. These changes permeated the pelagic system yielding conspicuous modifications in the phenology of autotroph communities and zooplankton. The climate variance envelope, thus far little considered in climate-plankton studies, is closely coupled with the non-stationary dynamics of plankton, and sheds light on impending ecological shifts and plankton structural changes. Our study calls for the integration of the non-stationary relationship between climate and plankton in prognostic models on the productivity of marine ecosystems.
Fidelity between Gaussian mixed states with quantum state quadrature variances
NASA Astrophysics Data System (ADS)
Hai-Long, Zhang; Chun, Zhou; Jian-Hong, Shi; Wan-Su, Bao
2016-04-01
In this paper, from the original definition of fidelity in a pure state, we first give a well-defined expansion fidelity between two Gaussian mixed states. It is related to the variances of output and input states in quantum information processing. It is convenient to quantify the quantum teleportation (quantum clone) experiment since the variances of the input (output) state are measurable. Furthermore, we also give a conclusion that the fidelity of a pure input state is smaller than the fidelity of a mixed input state in the same quantum information processing. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002) and the Foundation of Science and Technology on Information Assurance Laboratory (Grant No. KJ-14-001).
Analysis of variance in spectroscopic imaging data from human tissues.
Kwak, Jin Tae; Reddy, Rohith; Sinha, Saurabh; Bhargava, Rohit
2012-01-17
The analysis of cell types and disease using Fourier transform infrared (FT-IR) spectroscopic imaging is promising. The approach lacks an appreciation of the limits of performance for the technology, however, which constrains both researcher efforts in improving the approach and acceptance by practitioners. One factor limiting performance is the variance in data arising from biological diversity, measurement noise, or other sources. Here we identify the sources of variation by first employing a high-throughput sampling platform of tissue microarrays (TMAs) to record a sufficiently large and diverse set of data. Next, a comprehensive set of analysis of variance (ANOVA) models is employed to analyze the data. Estimating the portions of explained variation, we quantify the primary sources of variation, find the most discriminating spectral metrics, and recognize the aspects of the technology to improve. The study provides a framework for the development of protocols for clinical translation and provides guidelines to design statistically valid studies in the spectroscopic analysis of tissue.
Female copying increases the variance in male mating success.
Wade, M J; Pruett-Jones, S G
1990-08-01
Theoretical models of sexual selection assume that females choose males independently of the actions and choice of other individual females. Variance in male mating success in promiscuous species is thus interpreted as a result of phenotypic differences among males which females perceive and to which they respond. Here we show that, if some females copy the behavior of other females in choosing mates, the variance in male mating success and therefore the opportunity for sexual selection is greatly increased. Copying behavior is most likely in non-resource-based harem and lek mating systems but may occur in polygynous, territorial systems as well. It can be shown that copying behavior by females is an adaptive alternative to random choice whenever there is a cost to mate choice. We develop a statistical means of estimating the degree of female copying in natural populations where it occurs. PMID:2377613
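The core claim, that copying inflates the variance in male mating success, can be reproduced with a minimal simulation (our illustration, not the authors' statistical estimator; all parameters are arbitrary). Each female either copies, choosing a male with probability proportional to his current matings, or chooses uniformly at random:

```python
import random
import statistics

def mating_counts(n_males, n_females, copy_prob, rng):
    # Each arriving female either copies -- picking a male with
    # probability proportional to his current matings -- or chooses
    # independently and uniformly at random.
    counts = [0] * n_males
    for _ in range(n_females):
        if rng.random() < copy_prob and sum(counts) > 0:
            male = rng.choices(range(n_males), weights=counts)[0]
        else:
            male = rng.randrange(n_males)
        counts[male] += 1
    return counts

def mean_success_variance(copy_prob, reps, seed=0):
    # Variance in male mating success, averaged over replicate leks.
    rng = random.Random(seed)
    return statistics.mean(
        statistics.pvariance(mating_counts(10, 50, copy_prob, rng))
        for _ in range(reps))
```

With copying the process behaves like a Pólya urn: early random asymmetries are amplified, so `mean_success_variance(0.8, 200)` comes out far above the no-copying baseline `mean_success_variance(0.0, 200)`, which mirrors the paper's point about the opportunity for sexual selection.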
Compounding approach for univariate time series with nonstationary variances
NASA Astrophysics Data System (ADS)
Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich
2015-12-01
A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, average over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
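The windowing step can be sketched in a few lines; a toy signal with a wandering local scale stands in for the measured series (data and names below are ours, purely for illustration):

```python
import random
import statistics

def local_variances(series, window):
    # Decompose the signal into non-overlapping windows and estimate
    # the variance locally, as the compounding ansatz requires.
    return [statistics.pvariance(series[i:i + window])
            for i in range(0, len(series) - window + 1, window)]

def excess_kurtosis(series):
    mu = statistics.mean(series)
    m2 = statistics.mean((x - mu) ** 2 for x in series)
    m4 = statistics.mean((x - mu) ** 4 for x in series)
    return m4 / m2 ** 2 - 3.0

rng = random.Random(42)
series = []
for _ in range(200):                # 200 quasi-stationary windows
    sigma = rng.uniform(0.5, 3.0)   # local scale redrawn per window
    series.extend(rng.gauss(0.0, sigma) for _ in range(100))

vars_ = local_variances(series, 100)
print(excess_kurtosis(series))
```

Each window is locally Gaussian, yet the long-horizon sample mixes all the local variances, so the aggregate distribution is heavy-tailed (positive excess kurtosis); compounding the local Gaussian with the empirical distribution of `vars_` is exactly what reproduces those tails.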
A Variance Based Active Learning Approach for Named Entity Recognition
NASA Astrophysics Data System (ADS)
Hassanzadeh, Hamed; Keyvanpour, Mohammadreza
The cost of manually annotating corpora is one of the significant issues in many text-based tasks such as text mining, semantic annotation and, more generally, information extraction. Active Learning is an approach that deals with reduction of labeling costs. In this paper we propose an effective active learning approach based on minimal variance that reduces manual annotation cost by using a small number of manually labeled examples. In our approach we use a confidence measure based on the model's variance that reaches a considerable accuracy for annotating entities. Conditional Random Field (CRF) is chosen as the underlying learning model due to its promising performance in many sequence labeling tasks. The experiments show that the proposed method needs considerably fewer manually labeled samples to produce a desirable result.
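The selection step of variance-based active learning can be sketched generically. The CRF confidence measure itself is beyond a short example, so a query-by-committee variance score stands in for it here; the committee, pool, and thresholds are hypothetical:

```python
import statistics

def select_by_variance(pool, committee, k):
    # Score each unlabeled example by the variance of the committee's
    # predictions and return the k most uncertain ones for annotation.
    scored = sorted(pool,
                    key=lambda x: statistics.pvariance(m(x) for m in committee),
                    reverse=True)
    return scored[:k]

# Hypothetical committee: three threshold classifiers that disagree
# only near the decision boundary.
committee = [lambda x, t=t: 1 if x > t else 0 for t in (0.4, 0.5, 0.6)]
pool = [0.05, 0.45, 0.55, 0.95]
print(select_by_variance(pool, committee, 2))  # -> [0.45, 0.55]
```

Examples far from the boundary get zero prediction variance and are skipped, which is precisely how such a criterion concentrates the annotation budget on informative samples.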
No evidence for anomalously low variance circles on the sky
Moss, Adam; Scott, Douglas; Zibin, James P. E-mail: dscott@phas.ubc.ca
2011-04-01
In a recent paper, Gurzadyan and Penrose claim to have found directions on the sky centred on which are circles of anomalously low variance in the cosmic microwave background (CMB). These features are presented as evidence for a particular picture of the very early Universe. We attempted to repeat the analysis of these authors, and we can indeed confirm that such variations do exist in the temperature variance for annuli around points in the data. However, we find that this variation is entirely expected in a sky which contains the usual CMB anisotropies. In other words, properly simulated Gaussian CMB data contain just the sorts of variations claimed. Gurzadyan and Penrose have not found evidence for pre-Big Bang phenomena, but have simply re-discovered that the CMB contains structure.
A surface layer variance heat budget for ENSO
NASA Astrophysics Data System (ADS)
Boucharel, Julien; Timmermann, Axel; Santoso, Agus; England, Matthew H.; Jin, Fei-Fei; Balmaseda, Magdalena A.
2015-05-01
Characteristics of the El Niño-Southern Oscillation (ENSO), such as frequency, propagation, spatial extent, and amplitude, strongly depend on the climatological background state of the tropical Pacific. Multidecadal changes in the ocean mean state are hence likely to modulate ENSO properties. To better link background state variations with low-frequency amplitude changes of ENSO, we develop a diagnostic framework that determines locally the contributions of different physical feedback terms on the ocean surface temperature variance. Our analysis shows that multidecadal changes of ENSO variance originate from the delicate balance between the background-state-dependent positive thermocline feedback and the atmospheric damping of sea surface temperature anomalies. The role of higher-order processes and atmospheric and oceanic nonlinearities is also discussed. The diagnostic tool developed here can be easily applied to other tropical ocean areas and climate phenomena.
Response variance in functional maps: neural darwinism revisited.
Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei
2013-01-01
The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.
Symbols are not uniquely human.
Ribeiro, Sidarta; Loula, Angelo; de Araújo, Ivan; Gudwin, Ricardo; Queiroz, João
2007-01-01
Modern semiotics is a branch of logic that formally defines symbol-based communication. In recent years, the semiotic classification of signs has been invoked to support the notion that symbols are uniquely human. Here we show that alarm-calls such as those used by African vervet monkeys (Cercopithecus aethiops) logically satisfy the semiotic definition of symbol. We also show that the acquisition of vocal symbols in vervet monkeys can be successfully simulated by a computer program based on minimal semiotic and neurobiological constraints. The simulations indicate that learning depends on the tutor-predator ratio, and that apprentice-generated auditory mistakes in vocal symbol interpretation have little effect on the learning rates of apprentices (up to 80% of mistakes are tolerated). In contrast, just 10% of apprentice-generated visual mistakes in predator identification will prevent any vocal symbol from being correctly associated with a predator call in a stable manner. Tutor unreliability was also deleterious to vocal symbol learning: a mere 5% of "lying" tutors were able to completely disrupt symbol learning, invariably leading to the acquisition of incorrect associations by apprentices. Our investigation corroborates the existence of vocal symbols in a non-human species, and indicates that symbolic competence emerges spontaneously from classical associative learning mechanisms when the conditioned stimuli are self-generated, arbitrary and socially efficacious. We propose that more exclusive properties of human language, such as syntax, may derive from the evolution of higher-order domains for neural association, more removed from both the sensory input and the motor output, able to support the gradual complexification of grammatical categories into syntax.
Constraining the local variance of H0 from directional analyses
NASA Astrophysics Data System (ADS)
Bengaly, C. A. P., Jr.
2016-04-01
We evaluate the local variance of the Hubble Constant H0 with low-z Type Ia Supernovae (SNe). Our analyses are performed using a hemispherical comparison method in order to test whether taking the bulk flow motion into account can reconcile the measurement of the Hubble Constant H0 from standard candles (H0 = 73.8±2.4 km s-1 Mpc-1) with that of Planck's Cosmic Microwave Background data (H0 = 67.8±0.9 km s-1 Mpc-1). We find that H0 ranges from 68.9±0.5 km s-1 Mpc-1 to 71.2±0.7 km s-1 Mpc-1 across the celestial sphere (1σ uncertainty), implying a Hubble Constant maximal variance of δH0 = (2.30±0.86) km s-1 Mpc-1 towards the (l,b) = (315°,27°) direction. Interestingly, this result agrees with the bulk flow direction estimates found in the literature, as well as previous evaluations of the H0 variance due to the presence of nearby inhomogeneities. We assess the statistical significance of this result with different prescriptions of Monte Carlo simulations, obtaining moderate statistical significance, i.e., 68.7% confidence level (CL) for such variance. Furthermore, we test the hypothesis of a higher H0 value in the presence of a bulk flow velocity dipole, finding some evidence for this result which, however, cannot be claimed to be significant due to the current large uncertainty in the SNe distance modulus. We therefore conclude that the tension between different H0 determinations can plausibly be caused by the bulk flow motion of the local Universe, even though the current incompleteness of the SNe data set, both in terms of celestial coverage and distance uncertainties, does not allow a high statistical significance for these results or a definitive conclusion about this issue.
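The hemispherical comparison reduces to a simple scan over trial axes. The sketch below is schematic (the real analysis fits H0 per hemisphere from SNe distance moduli; the toy catalogue and names here are ours):

```python
def hemispherical_spread(sne, directions):
    # sne: list of (unit_vector, local_H0) pairs; directions: trial axes.
    # For each axis, average H0 over the two hemispheres and keep the
    # largest north-south difference found.
    best = 0.0
    for d in directions:
        north = [h for p, h in sne if sum(a * b for a, b in zip(p, d)) >= 0]
        south = [h for p, h in sne if sum(a * b for a, b in zip(p, d)) < 0]
        if north and south:
            best = max(best, abs(sum(north) / len(north) - sum(south) / len(south)))
    return best

# Toy catalogue: a 4 km/s/Mpc dipole along the x-axis.
sne = [((1, 0, 0), 72.0), ((-1, 0, 0), 68.0)]
print(hemispherical_spread(sne, [(1, 0, 0), (0, 0, 1)]))  # -> 4.0
```

Scanning many directions and comparing the recovered maximal spread against isotropic Monte Carlo realizations is what yields the quoted confidence level.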
End-state comfort and joint configuration variance during reaching.
Solnik, Stanislaw; Pazin, Nemanja; Coelho, Chase J; Rosenbaum, David A; Scholz, John P; Zatsiorsky, Vladimir M; Latash, Mark L
2013-03-01
This study joined two approaches to motor control. The first approach comes from cognitive psychology and is based on the idea that goal postures and movements are chosen to satisfy task-specific constraints. The second approach comes from the principle of motor abundance and is based on the idea that control of apparently redundant systems is associated with the creation of multi-element synergies stabilizing important performance variables. The first approach has been tested by relying on psychophysical ratings of comfort. The second approach has been tested by estimating variance along different directions in the space of elemental variables such as joint postures. The two approaches were joined here. Standing subjects performed series of movements in which they brought a hand-held pointer to each of four targets oriented within a frontal plane, close to or far from the body. The subjects were asked to rate the comfort of the final postures, and the variance of their joint configurations during the steady state following pointing was quantified with respect to pointer endpoint position and pointer orientation. The subjects showed consistent patterns of comfort ratings among the targets, and all movements were characterized by multi-joint synergies stabilizing both pointer endpoint position and orientation. Contrary to what was expected, less comfortable postures had higher joint configuration variance than did more comfortable postures without major changes in the synergy indices. Multi-joint synergies stabilized the pointer position and orientation similarly across a range of comfortable/uncomfortable postures. The results are interpreted in terms conducive to the two theoretical frameworks underlying this work, one focusing on comfort ratings reflecting mean postures adopted for different targets and the other focusing on indices of joint configuration variance. PMID:23288326
Analysis of Variance in the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
Deloach, Richard
2010-01-01
This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.
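The one-way fixed-effects case that the tutorial illustrates reduces to a short computation; the sketch below (ours, not from the paper) forms the F statistic from between- and within-group sums of squares:

```python
import statistics

def one_way_anova(groups):
    # One-way fixed-effects ANOVA: partition the total variation into
    # between-group and within-group sums of squares and form the
    # F statistic from their mean squares.
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [statistics.mean(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

print(one_way_anova([[1, 2, 3], [2, 3, 4], [3, 4, 5]]))  # -> 3.0
```

Comparing the resulting F against the F(k-1, n-k) reference distribution gives the significance test; two-way and random-effects layouts extend the same sum-of-squares bookkeeping.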
Unique contributions of metacognition and cognition to depressive symptoms.
Yilmaz, Adviye Esin; Gençöz, Tülin; Wells, Adrian
2015-01-01
This study attempts to examine the unique contributions of "cognitions" or "metacognitions" to depressive symptoms while controlling for their intercorrelations and comorbid anxiety. Two-hundred-and-fifty-one university students participated in the study. Two complementary hierarchical multiple regression analyses were performed, in which symptoms of depression were regressed on the dysfunctional attitudes (DAS-24 subscales) and metacognition scales (Negative Beliefs about Rumination Scale [NBRS] and Positive Beliefs about Rumination Scale [PBRS]). Results showed that both NBRS and PBRS individually explained a significant amount of variance in depressive symptoms above and beyond dysfunctional schemata while controlling for anxiety. Although dysfunctional attitudes as a set significantly predicted depressive symptoms after anxiety and metacognitions were controlled for, they were weaker than metacognitive variables and none of the DAS-24 subscales contributed individually. Metacognitive beliefs about ruminations appeared to contribute more to depressive symptoms than dysfunctional beliefs in the "cognitive" domain.
White matter morphometric changes uniquely predict children's reading acquisition.
Myers, Chelsea A; Vandermosten, Maaike; Farris, Emily A; Hancock, Roeland; Gimenez, Paul; Black, Jessica M; Casto, Brandi; Drahos, Miroslav; Tumber, Mandeep; Hendren, Robert L; Hulme, Charles; Hoeft, Fumiko
2014-10-01
This study examined whether variations in brain development between kindergarten and Grade 3 predicted individual differences in reading ability at Grade 3. Structural MRI measurements indicated that increases in the volume of two left temporo-parietal white matter clusters are unique predictors of reading outcomes above and beyond family history, socioeconomic status, and cognitive and preliteracy measures at baseline. Using diffusion MRI, we identified the left arcuate fasciculus and superior corona radiata as key fibers within the two clusters. Bias-free regression analyses using regions of interest from prior literature revealed that volume changes in temporo-parietal white matter, together with preliteracy measures, predicted 56% of the variance in reading outcomes. Our findings demonstrate the important contribution of developmental differences in areas of left dorsal white matter, often implicated in phonological processing, as a sensitive early biomarker for later reading abilities, and by extension, reading difficulties. PMID:25212581
Reduced Variance for Material Sources in Implicit Monte Carlo
Urbatsch, Todd J.
2012-06-25
Implicit Monte Carlo (IMC), a time-implicit method due to Fleck and Cummings, is used for simulating supernovae and inertial confinement fusion (ICF) systems where x-rays tightly and nonlinearly interact with hot material. The IMC algorithm represents absorption and emission within a timestep as an effective scatter. Similarly, the IMC time-implicitness splits off a portion of a material source directly into the radiation field. We have found that some of our variance reduction and particle management schemes will allow large variances in the presence of small, but important, material sources, as in the case of ICF hot electron preheat sources. We propose a modification of our implementation of the IMC method in the Jayenne IMC Project. Instead of battling the sampling issues associated with a small source, we bypass the IMC implicitness altogether and simply deterministically update the material state with the material source if the temperature of the spatial cell is below a user-specified cutoff. We describe the modified method and present results on a test problem that show the elimination of variance for small sources.
Estimating Predictive Variance for Statistical Gas Distribution Modelling
Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo
2009-05-23
Recent publications in statistical gas distribution modelling have proposed algorithms that model mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but rather a significant step in advancing the field. This is, first, because the models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows one to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
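The data-likelihood evaluation advocated here can be written down directly for a Gaussian predictive model (a generic sketch under that assumption; the cited gas-distribution models are more elaborate, and the data below are invented):

```python
import math

def predictive_loglik(observations, predictions):
    # predictions: one (mean, variance) pair per held-out point.
    # A model whose predictive variance matches the real fluctuations
    # scores a higher likelihood than one that only gets the mean right.
    ll = 0.0
    for y, (mu, var) in zip(observations, predictions):
        ll += -0.5 * (math.log(2.0 * math.pi * var) + (y - mu) ** 2 / var)
    return ll

obs = [0.1, 1.9, 0.2]
calibrated = [(0.0, 1.0), (2.0, 1.0), (0.0, 1.0)]
overspread = [(0.0, 50.0), (2.0, 50.0), (0.0, 50.0)]
print(predictive_loglik(obs, calibrated) > predictive_loglik(obs, overspread))  # -> True
```

Both models predict identical means, yet the over-dispersed one pays a log-variance penalty on every point; this is the mechanism that lets held-out likelihood rank models, tune meta parameters, and flag when a model needs re-initialising.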
A candidate mechanism underlying the variance of interictal spike propagation
Sabolek, Helen R; Swiercz, Waldemar B.; Lillis, Kyle; Cash, Sydney S.; Huberfeld, Gilles; Zhao, Grace; Marie, Linda Ste.; Clemenceau, Stéphane; Barsh, Greg; Miles, Richard; Staley, Kevin J.
2012-01-01
Synchronous activation of neural networks is an important physiological mechanism, and dysregulation of synchrony forms the basis of epilepsy. We analyzed the propagation of synchronous activity through chronically epileptic neural networks. Electrocorticographic recordings from epileptic patients demonstrate remarkable variance in the pathways of propagation between sequential interictal spikes (IIS). Calcium imaging in chronically epileptic slice cultures demonstrates that pathway variance depends on the presence of GABAergic inhibition and that spike propagation becomes stereotyped following GABA-R blockade. Computer modeling suggests that GABAergic quenching of local network activations leaves behind regions of refractory neurons, whose late recruitment forms the anatomical basis of variability during subsequent network activation. Targeted path scanning of slice cultures confirmed local activations, while ex vivo recordings of human epileptic tissue confirmed the dependence of interspike variance on GABA-mediated inhibition. These data support the hypothesis that the paths by which synchronous activity spread through an epileptic network change with each activation, based on the recent history of localized activity that has been successfully inhibited. PMID:22378874
Genetic variance of tolerance and the toxicant threshold model.
Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki
2012-04-01
A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of cladoceran species. To analyze the genetic variance of tolerance in the case when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model in quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance characterized by the individual's threshold value is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated by using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined had the potential to acquire tolerance to this substance by evolutionary change.
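The ANOVA route to broad-sense heritability can be sketched for the simplest balanced one-way design (a simplification of the authors' nested analysis; the data and names below are ours):

```python
import statistics

def broad_sense_heritability(lines):
    # Balanced one-way ANOVA variance components for isofemale-line
    # data (k lines, n individuals per line):
    #   Var(line)  ~  (MS_between - MS_within) / n
    #   H^2 = Var(line) / (Var(line) + Var(within))
    k, n = len(lines), len(lines[0])
    grand = statistics.mean(x for g in lines for x in g)
    means = [statistics.mean(g) for g in lines]
    ms_b = sum(n * (m - grand) ** 2 for m in means) / (k - 1)
    ms_w = sum((x - m) ** 2 for g, m in zip(lines, means) for x in g) / (k * (n - 1))
    var_line = max(0.0, (ms_b - ms_w) / n)  # negative estimates truncated
    return var_line / (var_line + ms_w)

# All variation between lines -> H^2 = 1; none between lines -> H^2 = 0.
print(broad_sense_heritability([[10, 10, 10], [20, 20, 20]]))  # -> 1.0
print(broad_sense_heritability([[1, 2, 3], [1, 2, 3]]))        # -> 0.0
```

For quantal endpoints such as mortality at a given concentration, the threshold (liability) model maps the observed proportions back onto an underlying continuous tolerance scale before this variance partition is applied.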
VAPOR: variance-aware per-pixel optimal resource allocation.
Eisenberg, Yiftach; Zhai, Fan; Pappas, Thrasyvoulos N; Berry, Randall; Katsaggelos, Aggelos K
2006-02-01
Characterizing the video quality seen by an end-user is a critical component of any video transmission system. In packet-based communication systems, such as wireless channels or the Internet, packet delivery is not guaranteed. Therefore, from the point of view of the transmitter, the distortion at the receiver is a random variable. Traditional approaches have primarily focused on minimizing the expected value of the end-to-end distortion. This paper explores the benefits of accounting for not only the mean, but also the variance of the end-to-end distortion when allocating limited source and channel resources. By accounting for the variance of the distortion, the proposed approach increases the reliability of the system by making it more likely that what the end-user sees closely resembles the mean end-to-end distortion calculated at the transmitter. Experimental results demonstrate that variance-aware resource allocation can help limit error propagation and is more robust to channel mismatch than approaches whose goal is to strictly minimize the expected distortion. PMID:16479799
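The mean-plus-variance criterion behind such schemes can be stated in a few lines (a schematic of the idea only, not the paper's per-pixel optimization; option names and numbers are invented):

```python
def choose_allocation(options, risk_weight):
    # Classical allocation minimizes expected distortion alone
    # (risk_weight = 0); a variance-aware criterion also penalizes the
    # spread of the end-to-end distortion seen by the receiver.
    return min(options, key=lambda o: o["mean"] + risk_weight * o["var"])

options = [
    {"name": "aggressive", "mean": 10.0, "var": 100.0},  # low mean, risky
    {"name": "robust",     "mean": 12.0, "var": 4.0},    # slightly worse mean
]
print(choose_allocation(options, 0.0)["name"])   # -> aggressive
print(choose_allocation(options, 0.1)["name"])   # -> robust
```

Raising the risk weight trades a small loss in expected quality for a much tighter distribution of what the end-user actually sees, which is the robustness effect the abstract reports.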
Putka, Dan J; Hoffman, Brian J
2013-01-01
Though considerable research has evaluated the functioning of assessment center (AC) ratings, surprisingly little research has articulated and uniquely estimated the components of reliable and unreliable variance that underlie such ratings. The current study highlights limitations of existing research for estimating components of reliable and unreliable variance in AC ratings. It provides a comprehensive empirical decomposition of variance in AC ratings that: (a) explicitly accounts for assessee-, dimension-, exercise-, and assessor-related effects, (b) does so with 3 large sets of operational data from a multiyear AC program, and (c) avoids many analytic limitations and confounds that have plagued the AC literature to date. In doing so, results show that (a) the extant AC literature has masked the contribution of sizable, substantively meaningful sources of variance in AC ratings, (b) various forms of assessor bias largely appear trivial, and (c) there is far more systematic, nuanced variance present in AC ratings than previous research indicates. Furthermore, this study also illustrates how the composition of reliable and unreliable variance heavily depends on the level to which assessor ratings are aggregated (e.g., overall AC-level, dimension-level, exercise-level) and the generalizations one desires to make based on those ratings. The implications of this study for future AC research and practice are discussed. PMID:23244226
Ruiz, A.; Barbadilla, A.
1995-01-01
Using Cockerham's approach of orthogonal scales, we develop genetic models for the effect of an arbitrary number of multiallelic quantitative trait loci (QTLs) or neutral marker loci (NMLs) upon any number of quantitative traits. These models allow the unbiased estimation of the contributions of a set of marker loci to the additive and dominance variances and covariances among traits in a random mating population. The method has been applied to an analysis of allozyme and quantitative data from the European oyster. The contribution of a set of marker loci may either be real, when the markers are actually QTLs, or apparent, when they are NMLs that are in linkage disequilibrium with hidden QTLs. Our results show that the additive and dominance variances contributed by a set of NMLs are always minimum estimates of the corresponding variances contributed by the associated QTLs. In contrast, the apparent contribution of the NMLs to the additive and dominance covariances between two traits may be larger than, equal to or lower than the actual contributions of the QTLs. We also derive an expression for the expected variance explained by the correlation between a quantitative trait and multilocus heterozygosity. This correlation explains only a part of the genetic variance contributed by the markers, i.e., in general, a combination of additive and dominance variances and, thus, provides only very limited information relative to the method supplied here. 94 refs., 2 figs., 5 tabs.
Manufacturing unique glasses in space
NASA Technical Reports Server (NTRS)
Happe, R. P.
1976-01-01
An air suspension melting technique is described for making glasses from substances which to date have been observed only in the crystalline condition. A laminar flow vertical wind tunnel was constructed for suspending oxide melts that were melted using the energy from a carbon dioxide laser beam. By this method it is possible to melt many high-melting-point materials without interaction between the melt and crucible material. In addition, space melting permits cooling to suppress crystal growth. If a sufficient amount of undercooling is accompanied by a sufficient increase in viscosity, crystallization will be avoided entirely and glass will result.
Cavalié, Olivier; Vernotte, François
2016-04-01
The Allan variance was introduced 50 years ago for analyzing the stability of frequency standards. In addition to its metrological interest, it may also be considered as an estimator of the large trends of the power spectral density (PSD) of frequency deviation. For instance, the Allan variance is able to discriminate different types of noise characterized by different power laws in the PSD. The Allan variance has also been used in fields other than time and frequency metrology: for more than 20 years, it has been used in accelerometry, geophysics, geodesy, astrophysics, and even finance. However, it seems that up to now, it has been exclusively applied to time series analysis. We propose here to use the Allan variance on spatial data. Interferometric synthetic aperture radar (InSAR) is used in geophysics to image ground displacements in space [over the synthetic aperture radar (SAR) image spatial coverage] and in time, thanks to the regular SAR image acquisitions by dedicated satellites. The main limitation of the technique is the atmospheric disturbances that affect the radar signal while traveling from the sensor to the ground and back. In this paper, we propose to use the Allan variance for analyzing spatial data from InSAR measurements. The Allan variance was computed in XY mode as well as in radial mode for detecting different types of behavior at different spatial scales, in the same way as the different types of noise versus the integration time in the classical time and frequency application. We found that the radial Allan variance is the more appropriate way to obtain an estimator insensitive to the spatial axis, and we applied it to SAR data acquired over eastern Turkey for the period 2003-2011. The spatial Allan variance allowed us to characterize well the noise features classically found in InSAR, such as phase decorrelation producing white noise or atmospheric delays behaving like a random-walk signal. We finally applied the spatial Allan variance to an InSAR time
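As a concrete illustration of the statistic discussed above, a minimal sketch of the non-overlapping Allan variance for a 1-D series of samples (synthetic white noise here, not the authors' InSAR code) could look like:

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of a 1-D series y at averaging factor m
    (tau = m * sampling interval)."""
    n = len(y) // m
    ybar = y[:n * m].reshape(n, m).mean(axis=1)   # block averages of length m
    return 0.5 * np.mean(np.diff(ybar) ** 2)      # half the mean squared successive difference

rng = np.random.default_rng(0)
white = rng.normal(size=100_000)   # white (frequency) noise, variance 1
avar1 = allan_variance(white, 1)
avar100 = allan_variance(white, 100)
print(avar100 / avar1)   # ~ 0.01: for white noise, AVAR scales as 1/tau
```

The ratio near 0.01 reproduces the 1/tau signature of white noise; other power-law noises would give different slopes, which is what makes the statistic a useful noise discriminator.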
NASA Technical Reports Server (NTRS)
1999-01-01
Mainstream Engineering Corporation was awarded Phase I and Phase II contracts from Goddard Space Flight Center's Small Business Innovation Research (SBIR) program in early 1990. With support from the SBIR program, Mainstream Engineering Corporation has developed a unique low cost additive, QwikBoost (TM), that increases the performance of air conditioners, heat pumps, refrigerators, and freezers. Because of the energy and environmental benefits of QwikBoost, Mainstream received the Tibbetts Award at a White House Ceremony on October 16, 1997. QwikBoost was introduced at the 1998 International Air Conditioning, Heating, and Refrigeration Exposition. QwikBoost is packaged in a handy 3-ounce can (pressurized with R-134a) and will be available for automotive air conditioning systems in summer 1998.
Unique interactive projection display screen
Veligdan, J.T.
1997-11-01
Projection systems continue to be the best method to produce large (1 meter and larger) displays. However, in order to produce a large display, considerable volume is typically required. The Polyplanar Optic Display (POD) is a novel type of projection display screen, which, for the first time, makes it possible to produce a large projection system that is self-contained and only inches thick. In addition, this display screen is matte black in appearance, allowing it to be used in high ambient light conditions. This screen is also interactive and can be remotely controlled via an infrared optical pointer, resulting in mouse-like control of the display. Furthermore, this display need not be flat, since it can be made curved to wrap around a viewer as well as being flexible.
Vrshek-Schallhorn, Suzanne; Stroud, Catherine B; Mineka, Susan; Hammen, Constance; Zinbarg, Richard E; Wolitzky-Taylor, Kate; Craske, Michelle G
2015-11-01
Few studies comprehensively evaluate which types of life stress are most strongly associated with depressive episode onsets, over and above other forms of stress, and comparisons between acute and chronic stress are particularly lacking. Past research implicates major (moderate to severe) stressful life events (SLEs), and to a lesser extent, interpersonal forms of stress; research conflicts on whether dependent or independent SLEs are more potent, but theory favors dependent SLEs. The present study used 5 years of annual diagnostic and life stress interviews of chronic stress and SLEs from 2 separate samples (Sample 1 N = 432; Sample 2 N = 146) transitioning into emerging adulthood; 1 sample also collected early adversity interviews. Multivariate analyses simultaneously examined multiple forms of life stress to test hypotheses that all major SLEs, then particularly interpersonal forms of stress, and then dependent SLEs would contribute unique variance to major depressive episode (MDE) onsets. Person-month survival analysis consistently implicated chronic interpersonal stress and major interpersonal SLEs as statistically unique predictors of risk for MDE onset. In addition, follow-up analyses demonstrated temporal precedence for chronic stress; tested differences by gender; showed that recent chronic stress mediates the relationship between adolescent adversity and later MDE onsets; and revealed interactions of several forms of stress with socioeconomic status (SES). Specifically, as SES declined, there was an increasing role for noninterpersonal chronic stress and noninterpersonal major SLEs, coupled with a decreasing role for interpersonal chronic stress. Implications for future etiological research were discussed.
ERIC Educational Resources Information Center
Liu, Duo; Chen, Xi; Chung, Kevin K. H.
2015-01-01
This study examined the relation between the performance in a visual search task and reading ability in 92 third-grade Hong Kong Chinese children. The visual search task, which is considered a measure of visual-spatial attention, accounted for unique variance in Chinese character reading after controlling for age, nonverbal intelligence,…
Monte Carlo calculation of specific absorbed fractions: variance reduction techniques
NASA Astrophysics Data System (ADS)
Díaz-Londoño, G.; García-Pareja, S.; Salvat, F.; Lallena, A. M.
2015-04-01
The purpose of the present work is to calculate specific absorbed fractions using variance reduction techniques and assess the effectiveness of these techniques in improving the efficiency (i.e. reducing the statistical uncertainties) of simulation results in cases where the distance between the source and the target organs is large and/or the target organ is small. The variance reduction techniques of interaction forcing and an ant colony algorithm, which drives the application of splitting and Russian roulette, were applied in Monte Carlo calculations performed with the code PENELOPE for photons with energies from 30 keV to 2 MeV. In the simulations we used a mathematical phantom derived from the well-known MIRD-type adult phantom. The thyroid gland was assumed to be the source organ, and urinary bladder, testicles, uterus and ovaries were considered as target organs. Simulations were performed, for each target organ and for photons with different energies, using these variance reduction techniques, all run on the same processor and during a CPU time of 1.5 · 10^5 s. For energies above 100 keV, both interaction forcing and the ant colony method allowed reaching relative uncertainties of the average absorbed dose in the target organs below 4% in all studied cases. When these two techniques were used together, the uncertainty was further reduced, to half its previous value or less. For photons with energies below 100 keV, an adapted initialization of the ant colony algorithm was required. By using interaction forcing and the ant colony algorithm, realistic values of the specific absorbed fractions can be obtained with relative uncertainties small enough to permit discriminating among simulations performed with different Monte Carlo codes and phantoms. The methodology described in the present work can be employed to calculate specific absorbed fractions for arbitrary arrangements, i.e. energy spectrum of primary radiation, phantom model and source and target organs.
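The Russian roulette step named above can be illustrated with a minimal, self-contained sketch of the roulette rule alone (the threshold and survival probability are hypothetical illustration values, not PENELOPE parameters):

```python
import random

def russian_roulette(weight, w_threshold=0.1, survival_p=0.5):
    """Terminate low-weight particles at random; survivors are re-weighted by
    1/survival_p so that the expected carried weight is exactly preserved."""
    if weight >= w_threshold:
        return weight                  # heavy enough: leave untouched
    if random.random() < survival_p:
        return weight / survival_p     # survivor carries the killed particles' share
    return 0.0                         # particle terminated

random.seed(1)
w0 = 0.05                              # below the (hypothetical) threshold
total = sum(russian_roulette(w0) for _ in range(200_000))
print(total / 200_000)                 # ~ 0.05: expectation preserved
```

Fewer particles survive, so tracking time drops, yet the tally expectation is untouched; splitting is the mirror-image operation applied to high-weight particles.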
Regression between earthquake magnitudes having errors with known variances
NASA Astrophysics Data System (ADS)
Pujol, Jose
2016-07-01
Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = a x + b. This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for the x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a discussed in the literature but not proved, or proved for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65 % of them. For the remaining 35 %, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
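The closed-form errors-in-variables fit with a known error-variance ratio discussed above is commonly written as Deming regression; a sketch under that reading, with synthetic magnitudes and hypothetical error levels, might read:

```python
import numpy as np

def deming(X, Y, delta=1.0):
    """Errors-in-variables fit y = a*x + b when both X and Y carry error and
    delta = var(Y errors) / var(X errors) is known (delta = 1: orthogonal fit)."""
    xbar, ybar = X.mean(), Y.mean()
    sxx = np.mean((X - xbar) ** 2)
    syy = np.mean((Y - ybar) ** 2)
    sxy = np.mean((X - xbar) * (Y - ybar))
    a = (syy - delta * sxx
         + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    return a, ybar - a * xbar

rng = np.random.default_rng(2)
x_true = rng.uniform(4.0, 8.0, 500)            # "true" magnitudes
X = x_true + rng.normal(0.0, 0.1, 500)         # both scales observed with error
Y = 1.1 * x_true - 0.5 + rng.normal(0.0, 0.1, 500)
a, b = deming(X, Y, delta=1.0)
print(a, b)   # slope near 1.1, intercept near -0.5
```

Ordinary least squares on the same data would bias the slope toward zero because it ignores the error in X; the Deming form removes that attenuation when delta is known.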
Fringe biasing: A variance reduction technique for optically thick meshes
Smedley-Stevenson, R. P.
2013-07-01
Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)
Multi-observable Uncertainty Relations in Product Form of Variances
NASA Astrophysics Data System (ADS)
Qin, Hui-Hui; Fei, Shao-Ming; Li-Jost, Xianqing
2016-08-01
We investigate the product form uncertainty relations of variances for n (n ≥ 3) quantum observables. In particular, tight uncertainty relations satisfied by three observables have been derived, which are shown to be better than the ones derived from the strengthened Heisenberg and the generalized Schrödinger uncertainty relations, and than some existing uncertainty relations for three spin-half operators. An uncertainty relation for an arbitrary number of observables is also derived. As an example, the uncertainty relation satisfied by the eight Gell-Mann matrices is presented.
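As an illustrative numeric check of the weaker pairwise Robertson bound (not the paper's tighter multi-observable relation), one can verify the variance product for two spin-half operators in a random state:

```python
import numpy as np

def variance(A, psi):
    """Variance <A^2> - <A>^2 of observable A in normalized state psi."""
    exp_A = np.vdot(psi, A @ psi).real
    exp_A2 = np.vdot(psi, A @ (A @ psi)).real
    return exp_A2 - exp_A ** 2

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(3)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                     # random normalized qubit state

lhs = variance(sx, psi) * variance(sy, psi)
rhs = np.vdot(psi, sz @ psi).real ** 2         # |<[sx,sy]>|^2 / 4 = <sz>^2
print(lhs >= rhs - 1e-12)   # True: the product bound holds
```

Since [sx, sy] = 2i·sz, the Robertson right-hand side reduces to ⟨sz⟩²; the paper's contribution is relations that remain informative where such pairwise bounds become trivial.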
AVATAR -- Automatic variance reduction in Monte Carlo calculations
Van Riper, K.A.; Urbatsch, T.J.; Soran, P.D.
1997-05-01
AVATAR{trademark} (Automatic Variance And Time of Analysis Reduction), accessed through the graphical user interface application, Justine{trademark}, is a superset of MCNP{trademark} that automatically invokes THREEDANT{trademark} for a three-dimensional deterministic adjoint calculation on a mesh independent of the Monte Carlo geometry, calculates weight windows, and runs MCNP. Computational efficiency increases by a factor of 2 to 5 for a three-detector oil well logging tool model. Human efficiency increases dramatically, since AVATAR eliminates the need for deep intuition and hours of tedious handwork.
Critical points of multidimensional random Fourier series: Variance estimates
NASA Astrophysics Data System (ADS)
Nicolaescu, Liviu I.
2016-08-01
We investigate the number of critical points of a Gaussian random smooth function u_ε on the m-torus T^m := ℝ^m/ℤ^m approximating the Gaussian white noise as ε → 0. Let N(u_ε) denote the number of critical points of u_ε. We prove the existence of constants C, C' such that, as ε goes to zero, the expectation of the random variable ε^m N(u_ε) converges to C, while its variance is extremely small and behaves like C' ε^m.
Simulation Study Using a New Type of Sample Variance
NASA Technical Reports Server (NTRS)
Howe, D. A.; Lainson, K. J.
1996-01-01
We evaluate with simulated data a new type of sample variance for the characterization of frequency stability. The new statistic (referred to as TOTALVAR and its square root TOTALDEV) is a better predictor of long-term frequency variations than the present sample Allan deviation. The statistical model uses the assumption that a time series of phase or frequency differences is wrapped (periodic) with overall frequency difference removed. We find that the variability at long averaging times is reduced considerably for the five models of power-law noise commonly encountered with frequency standards and oscillators.
Analysis of variance of thematic mapping experiment data.
Rosenfield, G.H.
1981-01-01
As an example of the methodology, data from an experiment using three scales of land-use and land-cover mapping have been analyzed. The binomial proportions of correct interpretations have been analyzed untransformed and transformed by both the arcsine and the logit transformations. A weighted analysis of variance adjustment has been used. There is evidence of a significant difference among the three scales of mapping (1:24 000, 1:100 000 and 1:250 000) using the transformed data. Multiple range tests showed that all three scales are different for the arcsine transformed data. - from Author
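The two variance-stabilizing transformations named in the abstract are one-liners; a small sketch with hypothetical proportions of correct interpretations:

```python
import math

def arcsine_transform(p):
    """Variance-stabilizing transform of a binomial proportion p: after the
    transform, the sampling variance is ~1/(4n), roughly independent of p."""
    return math.asin(math.sqrt(p))

def logit_transform(p):
    """The logit transform, the other option analyzed in the abstract."""
    return math.log(p / (1.0 - p))

# hypothetical proportions correct at three map scales (illustration only)
props = [0.62, 0.78, 0.91]
print([round(arcsine_transform(p), 3) for p in props])
```

Stabilizing the variance is what makes the subsequent weighted analysis of variance and multiple range tests legitimate for binomial proportions.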
Two-dimensional finite-element temperature variance analysis
NASA Technical Reports Server (NTRS)
Heuser, J. S.
1972-01-01
The finite element method is extended to thermal analysis by forming a variance analysis of temperature results so that the sensitivity of predicted temperatures to uncertainties in input variables is determined. The temperature fields within a finite number of elements are described in terms of the temperatures of vertices, and the variational principle is used to minimize the integral equation describing thermal potential energy. A computer calculation yields the desired solution matrix of predicted temperatures and provides information about initial thermal parameters and their associated errors. Sample calculations show that all predicted temperatures are most affected by temperature values along fixed boundaries; more accurate specifications of these temperatures reduce errors in thermal calculations.
Variance reduction in Monte Carlo analysis of rarefied gas diffusion.
NASA Technical Reports Server (NTRS)
Perlmutter, M.
1972-01-01
The problem of rarefied diffusion between parallel walls is solved using the Monte Carlo method. The diffusing molecules are evaporated or emitted from one of the two parallel walls and diffuse through another molecular species. The Monte Carlo analysis treats the diffusing molecule as undergoing a Markov random walk, and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs, the expected Markov walk payoff is retained but its variance is reduced so that the Monte Carlo result has a much smaller error.
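Biasing transition probabilities while re-weighting payoffs so that the expectation is preserved is the essence of importance sampling; a generic one-dimensional sketch of that idea (not Perlmutter's rarefied-gas setup):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

# Quantity to estimate: E[f(X)] under X ~ N(0, 1), with f peaked far in the tail
def f(x):
    return np.exp(-(x - 4.0) ** 2)

# Plain Monte Carlo: most samples land where f is ~0, so the estimate is noisy
x_plain = rng.normal(size=n)
est_plain = f(x_plain).mean()
var_plain = f(x_plain).var()

# Biased sampling: draw from N(4, 1) instead, and re-weight each sample by the
# likelihood ratio p(x)/q(x) so that the expectation is unchanged
x_bias = rng.normal(4.0, 1.0, size=n)
weights = np.exp(-0.5 * x_bias ** 2) / np.exp(-0.5 * (x_bias - 4.0) ** 2)
est_bias = (f(x_bias) * weights).mean()
var_bias = (f(x_bias) * weights).var()

print(est_plain, est_bias)   # same expectation, far lower variance for est_bias
```

The two estimators agree in expectation, but the biased one concentrates samples where the payoff is large, which is exactly the variance-reduction mechanism the abstract describes for the Markov walk payoff.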
Chloroembryos: a unique photosynthesis system.
Puthur, Jos T; Shackira, A M; Saradhi, P Pardha; Bartels, Dorothea
2013-09-01
The embryos of some angiosperm taxa contain chlorophyll, and this chlorophyllous stage persists until the embryo matures (such embryos are hereafter referred to as chloroembryos). Besides being chlorophyllous, these embryos appear to have the ability to photosynthesize, suggesting that the chlorophyllous state of the embryo has an important role in seed development. The photosynthesis of chloroembryos is highly shade-adapted in nature, as the embryo is embedded within the supporting tissues (several layers of pod wall, seed coat and endosperm). Moreover, these chloroembryos develop in a highly osmotic environment and contain various components of the photosynthetic machinery. Detailed studies were performed on these chloroembryos in order to elucidate the structure of the chloroplasts, pigment composition, the photochemical activities, the rate of carbon assimilation and the shade-adaptive features. It has been shown that the respired CO2 within these chloroembryos is recycled by their efficient photosynthetic components and thus potentially influences the seed's carbon economy. Thus, the major role of embryonic photosynthesis is to produce both energy-rich molecules and oxygen, of which the former can be directly used for biosynthesis. Oxygen production is especially important during embryogenesis, when oxygen is limited within the enclosed seed. As these chloroembryos grow in a sugar-rich endosperm, they require adaptive mechanisms for this high-osmotic environment. The additional polypeptides found in the thylakoids of chloroembryo chloroplasts, compared with the thylakoids of leaf chloroplasts, have been suggested to protect the photosynthetic components of the chloroembryos in an environment of high osmotic strength. An attempt to understand the osmotic stress tolerance existing in these chloroembryos may lead to a better understanding of the tolerance of photosynthesis to osmotic stress.
Kushwaha, B P; Mandal, A; Arora, A L; Kumar, R; Kumar, S; Notter, D R
2009-08-01
Estimates of (co)variance components were obtained for weights at birth, weaning and 6, 9 and 12 months of age in Chokla sheep maintained at the Central Sheep and Wool Research Institute, Avikanagar, Rajasthan, India, over a period of 21 years (1980-2000). Records of 2030 lambs descended from 150 rams and 616 ewes were used in the study. Analyses were carried out by restricted maximum likelihood (REML) fitting an animal model and ignoring or including maternal genetic or permanent environmental effects. Six different animal models were fitted for all traits. The best model was chosen after testing the improvement of the log-likelihood values. Direct heritability estimates were inflated substantially for all traits when maternal effects were ignored. Heritability estimates for weight at birth, weaning and 6, 9 and 12 months of age were 0.20, 0.18, 0.16, 0.22 and 0.23, respectively in the best models. Additive maternal and maternal permanent environmental effects were both significant at birth, accounting for 9% and 12% of phenotypic variance, respectively, but the source of maternal effects (additive versus permanent environmental) at later ages could not be clearly identified. The estimated repeatabilities across years of ewe effects on lamb body weights were 0.26, 0.14, 0.12, 0.13, and 0.15 at birth, weaning, 6, 9 and 12 months of age, respectively. These results indicate that modest rates of genetic progress are possible for all weights. PMID:19630878
Gap-filling methods to impute eddy covariance flux data by preserving variance.
NASA Astrophysics Data System (ADS)
Kunwor, S.; Staudhammer, C. L.; Starr, G.; Loescher, H. W.
2015-12-01
To represent carbon dynamics, in terms of the exchange of CO2 between the terrestrial ecosystem and the atmosphere, eddy covariance (EC) data have been collected using eddy flux towers at various sites across the globe for more than two decades. However, measurements from EC data are missing for various reasons: precipitation, routine maintenance, or lack of vertical turbulence. In order to obtain estimates of net ecosystem exchange of carbon dioxide (NEE) with high precision and accuracy, robust gap-filling methods to impute missing data are required. While the methods used so far have provided robust estimates of the mean value of NEE, little attention has been paid to preserving the variance structures embodied by the flux data. Preserving the variance of these data will provide unbiased and precise estimates of NEE over time, which mimic natural fluctuations. We used a non-linear regression approach with moving windows of different lengths (15, 30, and 60 days) to estimate non-linear regression parameters for one year of flux data from a longleaf pine site at the Joseph Jones Ecological Research Center. We used as our base the Michaelis-Menten and Van't Hoff functions. We assessed the potential physiological drivers of these parameters with linear models using micrometeorological predictors. We then used a parameter prediction approach to refine the non-linear gap-filling equations based on micrometeorological conditions. This provides an opportunity to incorporate additional variables, such as vapor pressure deficit (VPD) and volumetric water content (VWC), into the equations. Our preliminary results indicate that improvements in gap-filling can be gained with a 30-day moving window and additional micrometeorological predictors (as indicated by a lower root mean square error (RMSE) of the predicted values of NEE). Our next steps are to use these parameter predictions from moving windows to gap-fill the data with and without incorporation of potential driver variables
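The Michaelis-Menten light-response fit named above is widely used for daytime NEE gap-filling; a hedged sketch with synthetic data (the functional form follows the standard rectangular hyperbola, but the parameter values, noise level, and variable names are hypothetical, not the authors'):

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(par, alpha, p_max, r_eco):
    """Rectangular-hyperbola light response: NEE as a function of PAR, with
    initial slope alpha, maximum uptake p_max, and ecosystem respiration r_eco."""
    return -(alpha * par * p_max) / (alpha * par + p_max) + r_eco

rng = np.random.default_rng(4)
par = rng.uniform(0.0, 2000.0, 300)                 # PAR samples in one window
nee = michaelis_menten(par, 0.05, 20.0, 5.0) + rng.normal(0.0, 0.5, 300)

popt, _ = curve_fit(michaelis_menten, par, nee, p0=[0.01, 10.0, 1.0])
alpha, p_max, r_eco = popt
gap_filled = michaelis_menten(1500.0, *popt)        # impute one missing NEE value
print(alpha, p_max, r_eco)
```

Refitting the parameters within each moving window, and then predicting them from micrometeorological drivers such as VPD and VWC, is the "parameter prediction" refinement the abstract describes.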
Stochastic Mixing Model with Power Law Decay of Variance
NASA Technical Reports Server (NTRS)
Fedotov, S.; Ihme, M.; Pitsch, H.
2003-01-01
Here we present a simple stochastic mixing model based on the law of large numbers (LLN). The reason why the LLN is involved in our formulation of the mixing problem is that the random conserved scalar c = c(t,x(t)) appears to behave as a sample mean. It converges to the mean value mu, while the variance sigma_c^2(t) decays approximately as t^(-1). Since the variance of the scalar decays faster than that of a sample mean (the decay exponent is typically greater than unity), we introduce some non-linear modifications into the corresponding pdf equation. The main idea is to develop a robust model which is independent of restrictive assumptions about the shape of the pdf. The remainder of this paper is organized as follows. In Section 2 we derive the integral equation from a stochastic difference equation describing the evolution of the pdf of a passive scalar in time. The stochastic difference equation introduces an exchange rate gamma_n, which we model in a first step as a deterministic function. In a second step, we generalize gamma_n as a stochastic variable, taking fluctuations in the inhomogeneous environment into account. In Section 3 we solve the non-linear integral equation numerically and analyze the influence of the different parameters on the decay rate. The paper finishes with a conclusion.
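The t^(-1) variance decay of a sample mean invoked above is easy to demonstrate numerically:

```python
import numpy as np

rng = np.random.default_rng(7)
# 10,000 independent "histories", each a series of 1,000 unit-variance samples
samples = rng.normal(size=(10_000, 1_000))

# variance (across histories) of the sample mean after n = 10 and n = 1000 steps
var_10 = samples[:, :10].mean(axis=1).var()
var_1000 = samples.mean(axis=1).var()
print(var_1000 / var_10)   # ~ 0.01, i.e. the variance decays as 1/n
```

A scalar that mixes faster than this 1/n baseline is exactly the situation that motivates the non-linear modifications to the pdf equation.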
Minimum variance brain source localization for short data sequences.
Ravan, Maryam; Reilly, James P; Hasey, Gary
2014-02-01
In the electroencephalogram (EEG) or magnetoencephalogram (MEG) context, brain source localization methods that rely on estimating second-order statistics often fail when the number of samples of the recorded data sequences is small in comparison to the number of electrodes. This condition is particularly relevant when measuring evoked potentials. Due to the correlated background EEG/MEG signal, an adaptive approach to localization is desirable. Previous work has addressed these issues by reducing the adaptive degrees of freedom (DoFs). This reduction results in decreased resolution and accuracy of the estimated source configuration. This paper develops and tests a new multistage adaptive processing technique based on the minimum variance beamformer for brain source localization that has been previously used in the radar statistical signal processing context. This processing, referred to as the fast fully adaptive (FFA) approach, can significantly reduce the required sample support, while still preserving all available DoFs. To demonstrate the performance of the FFA approach in the limited data scenario, simulation and experimental results are compared with two previous beamforming approaches; i.e., the fully adaptive minimum variance beamforming method and the beamspace beamforming method. Both simulation and experimental results demonstrate that the FFA method can localize all types of brain activity more accurately than the other approaches with limited data.
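The minimum variance (MVDR) beamformer underlying the approaches compared above has a standard closed form, w = R^-1 a / (a^H R^-1 a); a sketch with a synthetic 16-sensor array (the leadfield and noise model are hypothetical, and this is the plain fully adaptive filter, not the paper's multistage FFA scheme):

```python
import numpy as np

def mvdr_weights(R, a):
    """Minimum-variance (MVDR) spatial filter: unit gain in the direction of the
    steering/leadfield vector a, minimum output variance from everything else."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

rng = np.random.default_rng(5)
n_sensors = 16
a = rng.normal(size=n_sensors)                 # hypothetical leadfield vector
source = rng.normal(size=2000)                 # source time course
X = np.outer(a, source) + 0.5 * rng.normal(size=(n_sensors, 2000))
R = X @ X.T / X.shape[1]                       # sample sensor covariance
w = mvdr_weights(R, a)
print(w @ a)   # unit-gain constraint: 1.0 (up to rounding)
```

The sample covariance R is exactly where the short-data problem bites: with fewer time samples than sensors, R becomes ill-conditioned, which is what motivates reduced-DoF and multistage variants.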
Concentration variance decay during magma mixing: a volcanic chronometer.
Perugini, Diego; De Campos, Cristina P; Petrelli, Maurizio; Dingwell, Donald B
2015-01-01
The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing - a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical "mixing to eruption" time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest. PMID:26387555
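The exponential variance decay can be turned into a clock as sketched below; the decay rate and variance values are invented for illustration and are not the calibrated CVD-R of the paper:

```python
import numpy as np

# Synthetic calibration series: concentration variance sampled over mixing time.
t = np.array([0.0, 5.0, 10.0, 20.0, 40.0])   # minutes (hypothetical)
R_true = 0.05                                # decay rate per minute (assumed)
var = np.exp(-R_true * t)                    # normalized variance, var(0) = 1

# Exponential decay is linear in log space: log(var) = log(var0) - R * t.
slope, intercept = np.polyfit(t, np.log(var), 1)
R_fit = -slope

# Invert the calibrated decay for the time elapsed since mixing began.
var_obs = 0.2                                # variance measured in the deposit
t_elapsed = (intercept - np.log(var_obs)) / R_fit
print(round(R_fit, 3), round(t_elapsed, 1))  # → 0.05 32.2
```

The inversion step is the geochronometer: a measured residual variance in an eruptive product, combined with the experimentally calibrated rate, yields the mixing-to-eruption time.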
Variance in saccadic eye movements reflects stable traits.
Meyhöfer, Inga; Bertsch, Katja; Esser, Moritz; Ettinger, Ulrich
2016-04-01
Saccadic tasks are widely used to study cognitive processes, effects of pharmacological treatments, and mechanisms underlying psychiatric disorders. In genetic studies, it is assumed that saccadic endophenotypes are traits. While internal consistency and temporal stability of saccadic performance are high for most of the measures, the magnitude of underlying trait components has not been estimated, and influences of situational aspects and person-by-situation interactions have not been investigated. To do so, 68 healthy participants performed prosaccades, antisaccades, and memory-guided saccades on three occasions at weekly intervals at the same time of day. Latent state-trait modeling was applied to estimate the proportions of variance reflecting stable trait components, situational influences, and Person × Situation interaction effects. Mean variables for all saccadic tasks showed high to excellent reliabilities. Intraindividual standard deviations were found to be slightly less reliable. Importantly, an average of 60% of the variance of a single measurement was explained by trans-situationally stable person effects, while situational aspects and interactions between person and situation were found to play a negligible role. We conclude that saccadic variables, in standard laboratory settings, represent highly reliable measures that are largely unaffected by situational influences. Extending previous reliability studies, these findings clearly demonstrate the trait-like nature of these measures and support their role as endophenotypes.
Cosmic Variance in the Nanohertz Gravitational Wave Background
NASA Astrophysics Data System (ADS)
Roebber, Elinore; Holder, Gilbert; Holz, Daniel E.; Warren, Michael
2016-03-01
We use large N-body simulations and empirical scaling relations between dark matter halos, galaxies, and supermassive black holes (SMBHs) to estimate the formation rates of SMBH binaries and the resulting low-frequency stochastic gravitational wave background (GWB). We find this GWB to be relatively insensitive (≲ 10%) to cosmological parameters, with only slight variation between WMAP5 and Planck cosmologies. We find that uncertainty in the astrophysical scaling relations changes the amplitude of the GWB by a factor of ∼2. Current observational limits are already constraining this predicted range of models. We investigate the Poisson variance in the amplitude of the GWB for randomly generated populations of SMBH binaries, finding a scatter of order unity per frequency bin below 10 nHz, and increasing to a factor of ∼10 near 100 nHz. This variance is a result of the rarity of the most massive binaries, which dominate the signal, and acts as a fundamental uncertainty on the amplitude of the underlying power law spectrum. This Poisson uncertainty dominates at ≳ 20 nHz, while at lower frequencies the dominant uncertainty is related to our poor understanding of the astrophysical scaling relations, although very low frequencies may be dominated by uncertainties related to the final parsec problem and the processes which drive binaries to the gravitational wave dominated regime. Cosmological effects are negligible at all frequencies.
Hydraulic geometry of river cross sections; theory of minimum variance
Williams, Garnett P.
1978-01-01
This study deals with the rates at which mean velocity, mean depth, and water-surface width increase with water discharge at a cross section on an alluvial stream. Such relations often follow power laws, the exponents in which are called hydraulic exponents. The Langbein (1964) minimum-variance theory is examined in regard to its validity and its ability to predict observed hydraulic exponents. The variables used with the theory were velocity, depth, width, bed shear stress, friction factor, slope (energy gradient), and stream power. Slope is often constant, in which case only velocity, depth, width, shear and friction factor need be considered. The theory was tested against a wide range of field data from various geographic areas of the United States. The original theory was intended to produce only the average hydraulic exponents for a group of cross sections in a similar type of geologic or hydraulic environment. The theory does predict these average exponents with a reasonable degree of accuracy. An attempt to forecast the exponents at any selected cross section was moderately successful. Empirical equations are more accurate than the minimum variance, Gauckler-Manning, or Chezy methods. Predictions of the exponent of width are most reliable, the exponent of depth fair, and the exponent of mean velocity poor. (Woodard-USGS)
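Hydraulic exponents of the kind tested here are slopes of log-log fits; a minimal sketch (with assumed exponent values, not Williams' field data) shows how continuity forces them to sum to one:

```python
import numpy as np

# At-a-station power laws: w = a*Q^b, d = c*Q^f, v = k*Q^m, with b + f + m = 1
# because Q = w * d * v. Exponent values below are assumed for illustration.
Q = np.array([1.0, 2.0, 5.0, 10.0, 20.0])   # discharge
b, f = 0.26, 0.40
w = 4.0 * Q**b                               # width
d = 0.5 * Q**f                               # depth
v = Q / (w * d)                              # velocity from continuity

def hydraulic_exponent(Q, y):
    """Power-law exponent: slope of log y against log Q."""
    return np.polyfit(np.log(Q), np.log(y), 1)[0]

m = hydraulic_exponent(Q, v)
print(round(m, 2), round(b + f + m, 2))   # → 0.34 1.0
```

Fitting these slopes to gauged discharge measurements is how the observed exponents that the minimum-variance theory tries to predict are obtained in the first place.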
Variance of the Quantum Dwell Time for a Nonrelativistic Particle
NASA Technical Reports Server (NTRS)
Hahne, Gerhard
2012-01-01
Munoz, Seidel, and Muga [Phys. Rev. A 79, 012108 (2009)], following an earlier proposal by Pollak and Miller [Phys. Rev. Lett. 53, 115 (1984)] in the context of a theory of a collinear chemical reaction, showed that suitable moments of a two-flux correlation function could be manipulated to yield expressions for the mean quantum dwell time and mean square quantum dwell time for a structureless particle scattering from a time-independent potential energy field between two parallel lines in a two-dimensional spacetime. The present work proposes a generalization to a charged, nonrelativistic particle scattering from a transient, spatially confined electromagnetic vector potential in four-dimensional spacetime. The geometry of the spacetime domain is that of the slab between a pair of parallel planes, in particular those defined by constant values of the third (z) spatial coordinate. The mean Nth power, N = 1, 2, 3, . . ., of the quantum dwell time in the slab is given by an expression involving an N-flux-correlation function. All these means are shown to be nonnegative. The N = 1 formula reduces to an S-matrix result published previously [G. E. Hahne, J. Phys. A 36, 7149 (2003)]; explicit formulas for N = 2 and for the variance of the dwell time in terms of the S-matrix are worked out. A formula representing an incommensurability principle between variances of the output-minus-input flux of a pair of dynamical variables (such as the particle's time flux and others) is derived.
Discordance of DNA methylation variance between two accessible human tissues.
Jiang, Ruiwei; Jones, Meaghan J; Chen, Edith; Neumann, Sarah M; Fraser, Hunter B; Miller, Gregory E; Kobor, Michael S
2015-01-01
Population epigenetic studies have been seeking to identify differences in DNA methylation between specific exposures, demographic factors, or diseases in accessible tissues, but relatively little is known about how inter-individual variability differs between these tissues. This study presents an analysis of DNA methylation differences between matched peripheral blood mononuclear cells (PBMCs) and buccal epithelial cells (BECs), the two most accessible tissues for population studies, in 998 promoter-located CpG sites. Specifically we compared probe-wise DNA methylation variance, and how this variance related to demographic factors across the two tissues. PBMCs had overall higher DNA methylation than BECs, and the two tissues tended to differ most at genomic regions of low CpG density. Furthermore, although both tissues showed appreciable probe-wise variability, the specific regions and magnitude of variability differed strongly between tissues. Lastly, through exploratory association analysis, we found indications of differential association of BEC and PBMC methylation with demographic variables. The work presented here offers insight into variability of DNA methylation between individuals and across tissues and helps guide decisions on the suitability of buccal epithelial or peripheral blood mononuclear cells for the biological questions explored by epigenetic studies in human populations.
Implications and applications of the variance-based uncertainty equalities
NASA Astrophysics Data System (ADS)
Yao, Yao; Xiao, Xing; Wang, Xiaoguang; Sun, C. P.
2015-06-01
In quantum mechanics, the variance-based Heisenberg-type uncertainty relations are a series of mathematical inequalities posing the fundamental limits on the achievable accuracy of the state preparations. In contrast, we construct and formulate two quantum uncertainty equalities, which hold for all pairs of incompatible observables and indicate the new uncertainty relations recently introduced by L. Maccone and A. K. Pati [Phys. Rev. Lett. 113, 260401 (2014), 10.1103/PhysRevLett.113.260401]. In fact, we obtain a series of inequalities with hierarchical structure, including the Maccone-Pati's inequalities as a special (weakest) case. Furthermore, we present an explicit interpretation lying behind the derivations and relate these relations to the so-called intelligent states. As an illustration, we investigate the properties of these uncertainty inequalities in the qubit system and a state-independent bound is obtained for the sum of variances. Finally, we apply these inequalities to the spin squeezing scenario and its implication in interferometric sensitivity is also discussed.
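The qubit state-independent bound mentioned above can be checked directly: since each Pauli operator squares to the identity and a pure qubit state has a unit-length Bloch vector, the variances of σx, σy, σz sum to exactly 2 for every pure state. A small numerical verification (not code from the paper):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def variance(op, psi):
    """<op^2> - <op>^2 for a normalized state vector psi."""
    mean = np.vdot(psi, op @ psi).real
    return np.vdot(psi, op @ op @ psi).real - mean**2

rng = np.random.default_rng(0)
for _ in range(100):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    total = variance(sx, psi) + variance(sy, psi) + variance(sz, psi)
    assert abs(total - 2.0) < 1e-9   # bound is saturated by every pure state
print("sum of Pauli variances = 2 for all sampled pure states")
```

Mixed states give a sum strictly greater than 2, which is the sense in which the bound is state-independent.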
What Is Valuable and Unique about the Educational Psychologist?
ERIC Educational Resources Information Center
Ashton, Rebecca; Roberts, Elizabeth
2006-01-01
This paper describes a small-scale piece of research identifying which aspects of the EP role are considered valuable by SENCos and by EPs themselves. In addition, both groups were asked to identify whether they felt these aspects were uniquely offered by EPs or whether other professionals offered similar or identical services. The differences…
FRESIP project observations of cataclysmic variables: A unique opportunity
NASA Technical Reports Server (NTRS)
Howell, Steve B.
1994-01-01
FRESIP Project observations of cataclysmic variables would provide unique data sets. In the study of known cataclysmic variables they would provide extended, well-sampled temporal photometric information; in addition, they would provide a large-area deep survey, obtaining a complete magnitude-limited sample of the galaxy in the volume cone defined by the FRESIP field of view.
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.
2004-01-01
The accuracy of traditional multispectral maximum-likelihood image classification is limited by the skewed statistical distributions of reflectances from the complex heterogenous mixture of land cover types in urban areas. This work examines the utility of local variance, fractal dimension and Moran's I index of spatial autocorrelation in segmenting multispectral satellite imagery. Tools available in the Image Characterization and Modeling System (ICAMS) were used to analyze Landsat 7 imagery of Atlanta, Georgia. Although segmentation of panchromatic images is possible using indicators of spatial complexity, different land covers often yield similar values of these indices. Better results are obtained when a surface of local fractal dimension or spatial autocorrelation is combined as an additional layer in a supervised maximum-likelihood multispectral classification. The addition of fractal dimension measures is particularly effective at resolving land cover classes within urbanized areas, as compared to per-pixel spectral classification techniques.
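Moran's I, one of the spatial indices used in this segmentation, can be sketched for a raster under rook contiguity; the toy 8×8 inputs are illustrative and unrelated to the ICAMS software:

```python
import numpy as np

def morans_i(img):
    """Moran's I of a 2-D array under rook (4-neighbour) contiguity."""
    x = img - img.mean()
    num = 0.0    # sum of w_ij * x_i * x_j over neighbour pairs
    wsum = 0.0   # total weight W
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((1, 0), (0, 1), (-1, 0), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    num += x[i, j] * x[ni, nj]
                    wsum += 1.0
    return (img.size / wsum) * num / (x**2).sum()

smooth = np.add.outer(np.arange(8.0), np.arange(8.0))   # gradient: clustered values
checker = np.indices((8, 8)).sum(axis=0) % 2.0          # checkerboard: dispersed
print(round(morans_i(smooth), 2), round(morans_i(checker), 2))
```

A smooth gradient gives a strongly positive I while a checkerboard gives I = -1; that contrast between spatially clustered and spatially dispersed reflectance is what makes the index useful as an extra layer in the classification.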
Unraveling Additive from Nonadditive Effects Using Genomic Relationship Matrices
Muñoz, Patricio R.; Resende, Marcio F. R.; Gezan, Salvador A.; Resende, Marcos Deon Vilela; de los Campos, Gustavo; Kirst, Matias; Huber, Dudley; Peter, Gary F.
2014-01-01
The application of quantitative genetics in plant and animal breeding has largely focused on additive models, which may also capture dominance and epistatic effects. Partitioning genetic variance into its additive and nonadditive components using pedigree-based best linear unbiased prediction models (P-BLUP) is difficult with most commonly available family structures. However, the availability of dense panels of molecular markers makes possible the use of additive- and dominance-realized genomic relationships for the estimation of variance components and the prediction of genetic values (G-BLUP). We evaluated height data from a multifamily population of the tree species Pinus taeda with a systematic series of models accounting for additive, dominance, and first-order epistatic interactions (additive by additive, dominance by dominance, and additive by dominance), using either pedigree- or marker-based information. We show that, compared with the pedigree, use of realized genomic relationships in marker-based models yields a substantially more precise separation of additive and nonadditive components of genetic variance. We conclude that the marker-based relationship matrices in a model including additive and nonadditive effects performed better, improving breeding value prediction. Moreover, our results suggest that, for tree height in this population, the additive and nonadditive components of genetic variance are similar in magnitude. This novel result improves our current understanding of the genetic control and architecture of a quantitative trait and should be considered when developing breeding strategies. PMID:25324160
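A realized genomic relationship matrix of the kind used in G-BLUP can be sketched with VanRaden's additive formulation; the simulated genotypes below are illustrative and unrelated to the Pinus taeda data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ind, n_snp = 20, 500
freq = rng.uniform(0.1, 0.9, n_snp)          # allele frequencies (assumed)
geno = rng.binomial(2, freq, size=(n_ind, n_snp)).astype(float)  # 0/1/2 counts

p = geno.mean(axis=0) / 2.0                  # frequencies estimated from the sample
Z = geno - 2.0 * p                           # centered genotype matrix
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))  # VanRaden (2008) additive G matrix

# G replaces the pedigree numerator relationship matrix A in the mixed model;
# its diagonal averages near 1 for an unrelated sample under Hardy-Weinberg.
print(round(np.diag(G).mean(), 2))
```

A dominance relationship matrix is built analogously from heterozygosity indicators, which is what allows the additive and nonadditive variance components to be separated.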
Robust Techniques for Testing Heterogeneity of Variance Effects in Factorial Designs.
ERIC Educational Resources Information Center
O'Brien, Ralph G.
1978-01-01
Several ways of using traditional analysis of variance to test the homogeneity of variance in factorial designs with equal or unequal cell sizes are compared using theoretical and Monte Carlo results. (Author/JKS)
Matrix Differencing as a Concise Expression of Test Variance: A Computer Implementation.
ERIC Educational Resources Information Center
Krus, David J.; Wilkinson, Sue Marie
1986-01-01
Matrix differencing of data vectors is introduced as a method for computing test variance and is compared to traditional analysis of variance. Applications for computer assisted instruction, provided by supplemental computer software, are also described. (Author/GDC)
40 CFR 142.301 - What is a small system variance?
Code of Federal Regulations, 2010 CFR
2010-07-01
....301 Section 142.301 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System... issuance of variances from the requirement to comply with a maximum contaminant level or...
Uniqueness of two-loop master contours
NASA Astrophysics Data System (ADS)
Caron-Huot, Simon; Larsen, Kasper J.
2012-10-01
Generalized-unitarity calculations of two-loop amplitudes are performed by expanding the amplitude in a basis of master integrals and then determining the coefficients by taking a number of generalized cuts. In this paper, we present a complete classification of the solutions to the maximal cut of integrals with the double-box topology. The ideas presented here are expected to be relevant for all two-loop topologies as well. We find that these maximal-cut solutions are naturally associated with Riemann surfaces whose topology is determined by the number of states at the vertices of the double-box graph. In the case of four massless external momenta we find that, once the geometry of these Riemann surfaces is properly understood, there are uniquely defined master contours producing the coefficients of the double-box integrals in the basis decomposition of the two-loop amplitude. This is in perfect analogy with the situation in one-loop generalized unitarity. In addition, we point out that the chiral integrals recently introduced by Arkani-Hamed et al. can be used as master integrals for the double-box contributions to the two-loop amplitudes in any gauge theory. The infrared finiteness of these integrals allow for their coefficients as well as their integrated expressions to be evaluated in strictly four dimensions, providing significant technical simplification. We evaluate these integrals at four points and obtain remarkably compact results.
Arachnoiditis ossificans and syringomyelia: A unique presentation
Opalak, Charles F.; Opalak, Michael E.
2015-01-01
Background: Arachnoiditis ossificans (AO) is a rare disorder that was differentiated from leptomeningeal calcification by Kaufman and Dunsmore in 1971. It generally presents with progressive lower extremity myelopathy. Though the underlying etiology has yet to be fully described, it has been associated with various predisposing factors including vascular malformations, previous intradural surgery, myelograms, and adhesive arachnoiditis. Associated conditions include syringomyelia and arachnoid cyst. The preferred diagnostic method is noncontrast computed tomography (CT). Surgical intervention is still controversial and can include decompression and duroplasty or durotomy. Case Description: The authors report the case of a 62-year-old male with a history of paraplegia who presented with a urinary tract infection and dysautonomia. His past surgical history was notable for a C4–C6 anterior fusion and an intrathecal phenol injection for spasticity. Magnetic resonance imaging (MRI) also demonstrated a T6–conus syrinx. At surgery, there was significant ossification of the arachnoid/dura, which was removed. After a drain was placed in the syrinx, there was a significant neurologic improvement. Conclusion: This case demonstrates a unique presentation of AO and highlights the need for CT imaging when a noncommunicating syrinx is identified. In addition, surgical decompression can achieve good results when AO is associated with concurrent compressive lesions. PMID:26693389
Unique Ganglioside Recognition Strategies for Clostridial Neurotoxins
Benson, Marc A.; Fu, Zhuji; Kim, Jung-Ja P.; Baldwin, Michael R.
2012-03-15
Botulinum neurotoxins (BoNTs) and tetanus neurotoxin are the causative agents of the paralytic diseases botulism and tetanus, respectively. The potency of the clostridial neurotoxins (CNTs) relies primarily on their highly specific binding to nerve terminals and cleavage of SNARE proteins. Although individual CNTs utilize distinct proteins for entry, they share common ganglioside co-receptors. Here, we report the crystal structure of the BoNT/F receptor-binding domain in complex with the sugar moiety of ganglioside GD1a. GD1a binds in a shallow groove formed by the conserved peptide motif E ... H ... SXWY ... G, with additional stabilizing interactions provided by two arginine residues. Comparative analysis of BoNT/F with other CNTs revealed several differences in the interactions of each toxin with ganglioside. Notably, exchange of BoNT/F His-1241 with the corresponding lysine residue of BoNT/E resulted in increased affinity for GD1a and conferred the ability to bind ganglioside GM1a. Conversely, BoNT/E was not able to bind GM1a, demonstrating a discrete mechanism of ganglioside recognition. These findings provide a structural basis for ganglioside binding among the CNTs and show that individual toxins utilize unique ganglioside recognition strategies.
NASA Astrophysics Data System (ADS)
Cheng, Ren-Xiang; Tao, Chao; Liu, Xiao-Jun
2015-11-01
The speed-of-sound variance will decrease the imaging quality of photoacoustic tomography in acoustically inhomogeneous tissue. In this study, ultrasound computed tomography is combined with photoacoustic tomography to enhance the photoacoustic tomography in this situation. The speed-of-sound information is recovered by ultrasound computed tomography. Then, an improved delay-and-sum method is used to reconstruct the image from the photoacoustic signals. The simulation results validate that the proposed method can obtain a better photoacoustic tomography than the conventional method when the speed-of-sound variance is increased. In addition, the influences of the speed-of-sound variance and the fan-angle on the image quality are quantitatively explored to optimize the image scheme. The proposed method has a good performance even when the speed-of-sound variance reaches 14.2%. Furthermore, an optimized fan angle is revealed, which can keep the good image quality with a low cost of hardware. This study has potential value in extending the biomedical application of photoacoustic tomography. Project supported by the National Basic Research Program of China (Grant No. 2012CB921504), the National Natural Science Foundation of China (Grant Nos. 11422439, 11274167, and 11274171), and the Specialized Research Fund for the Doctoral Program of Higher Education, China (Grant No. 20120091110001).
Understanding the influence of watershed storage caused by human interferences on ET variance
NASA Astrophysics Data System (ADS)
Zeng, R.; Cai, X.
2014-12-01
Understanding the temporal variance of evapotranspiration (ET) at the watershed scale remains a challenging task, because it is affected by complex climate conditions, soil properties, vegetation, groundwater and human activities. In a changing environment with extensive and intensive human interferences, understanding ET variance and its factors is important for sustainable water resources management. This study presents an analysis of the effect of storage change caused by human activities on ET variance. Irrigation usually filters ET variance through the use of surface and groundwater; however, excessive irrigation may cause the depletion of watershed storage, which changes the coincidence of water availability and energy supply for ET. This study develops a framework by incorporating the water balance and the Budyko hypothesis. It decomposes the ET variance into the variances of precipitation, potential ET, catchment storage change, and their covariances. The contributions to ET variance from the various components are scaled by weighting functions expressed in terms of long-term climate conditions and catchment properties. ET variance is assessed with records from 32 major river basins across the world. It is found that ET variance is dominated by precipitation variance under hot-dry conditions and by evaporative demand variance under cool-wet conditions, while the coincidence of water and energy supply controls ET variance under moderate climate conditions. Watershed storage change plays an increasingly important role in determining ET variance at relatively shorter time scales. By incorporating storage change caused by human interferences, this framework corrects the over-estimation of ET variance in hot-dry climates and the under-estimation of ET variance in cool-wet climates. Furthermore, classification of the dominant factors of ET variance shows patterns similar to geographic zonation.
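The decomposition idea can be illustrated from the water balance ET = P − Q − ΔS; the synthetic annual series below are invented, and the split into terms is the textbook variance identity rather than the paper's Budyko-weighted framework:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                    # years of record (hypothetical)
P = rng.normal(1000.0, 150.0, n)          # precipitation (mm/yr)
Q = 0.3 * P + rng.normal(0.0, 30.0, n)    # runoff (mm/yr)
dS = rng.normal(0.0, 60.0, n)             # storage change, e.g. irrigation drawdown

ET = P - Q - dS                           # water balance

# var(ET) splits into the variances of the balance terms and their covariance;
# a larger storage-change variance (heavier human interference) raises var(ET)
# unless it covaries positively with the water supply.
lhs = ET.var(ddof=1)
rhs = (P - Q).var(ddof=1) + dS.var(ddof=1) - 2.0 * np.cov(P - Q, dS, ddof=1)[0, 1]
print(round(lhs, 6) == round(rhs, 6))     # identity holds by construction
```

The covariance term is the numerical counterpart of the "coincidence of water availability and energy supply" discussed above: when storage changes line up with wet periods, they damp ET variance rather than amplify it.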
Analysis of variance of an underdetermined geodetic displacement problem
Darby, D.
1982-06-01
It has been suggested recently that point displacements in a free geodetic network traversing a strike-slip fault may be estimated from repeated surveys by minimizing only those displacement components normal to the strike. It is desirable to justify this procedure. We construct, from estimable quantities, a deformation parameter which is an F-statistic of the type occurring in the analysis of variance of linear models not of full rank. A test of its significance provides the criterion to justify the displacement solution. It is also interesting to study its behaviour as one varies the supposed strike of the fault. Justification of a displacement solution using data from a strike-slip fault is found, but not for data from a rift valley. The technique can be generalized to more complex patterns of deformation such as those expected near the end-zone of a fault in a dislocation model.
A comparison of variance reduction techniques for radar simulation
NASA Astrophysics Data System (ADS)
Divito, A.; Galati, G.; Iovino, D.
Importance sampling and the extreme value technique (EVT), together with its generalization (G-EVT), were compared with respect to reducing the variance of radar simulation estimates. Importance sampling has a greater potential for including a priori information in the simulation experiment, and thus for reducing estimation errors. This feature is paid for by a lack of generality of the simulation procedure. The EVT is only valid when a probability tail is to be estimated (false alarm problems) and requires, as the only a priori information, that the considered variate belong to the exponential class. The G-EVT, which introduces a shape parameter to be estimated (when unknown), allows smaller estimation errors to be attained than the EVT. The G-EVT and, to a greater extent, the EVT lead to a straightforward and general simulation procedure for probability tail estimation.
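The variance-reduction contrast can be seen on a toy false-alarm problem: estimating the Gaussian tail probability P(X > 4) ≈ 3.167e-5. The sketch below is a generic importance-sampling example, not the radar simulator of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
thr, n = 4.0, 100_000

# Naive Monte Carlo: only a handful of samples ever exceed the threshold,
# so the indicator average has huge relative variance.
naive = (rng.normal(size=n) > thr).mean()

# Importance sampling: draw from N(thr, 1), centred on the rare region, and
# reweight by the likelihood ratio phi(y) / phi(y - thr) = exp(-thr*y + thr^2/2).
y = rng.normal(loc=thr, size=n)
w = np.exp(-thr * y + thr**2 / 2.0)
is_est = (w * (y > thr)).mean()

print(naive, is_est)   # the IS estimate should land close to the true 3.167e-5
```

This is the a priori information trade-off noted above: the shifted proposal encodes knowledge of where the rare event lives, buying a large variance reduction at the cost of a problem-specific setup.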
Variance of indoor radon concentration: Major influencing factors.
Yarmoshenko, I; Vasilyev, A; Malinovsky, G; Bossew, P; Žunić, Z S; Onischenko, A; Zhukovsky, M
2016-01-15
Variance of radon concentration in dwelling atmospheres is analysed with regard to geogenic and anthropogenic influencing factors. The analysis includes a review of 81 national and regional indoor radon surveys with varying sampling pattern, sample size and duration of measurements, and a detailed consideration of two regional surveys (Sverdlovsk oblast, Russia and Niška Banja, Serbia). The analysis of the geometric standard deviation (GSD) revealed that the main factors influencing the dispersion of indoor radon concentration over a territory are as follows: area of the territory, sample size, characteristics of the measurement technique, the geogenic radon potential, building construction characteristics and living habits. As shown for Sverdlovsk oblast and the town of Niška Banja, the dispersion as quantified by the GSD is reduced by restricting these control factors to certain levels. Application of the developed approach to characterization of the radon exposure of the world population is discussed. PMID:26409145
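The geometric standard deviation used here as the dispersion measure is computed on log-transformed concentrations; the sketch below uses a synthetic lognormal sample with invented parameters, not survey data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic indoor radon concentrations (Bq/m^3); indoor radon surveys are
# commonly summarized as lognormal. The GM and GSD below are assumed values.
radon = rng.lognormal(mean=np.log(40.0), sigma=np.log(2.2), size=5000)

gm = np.exp(np.log(radon).mean())         # geometric mean
gsd = np.exp(np.log(radon).std(ddof=1))   # geometric standard deviation

print(round(gm, 1), round(gsd, 2))        # near the generating GM = 40, GSD = 2.2
```

Restricting the sample to a subgroup (one town, one building type, one measurement protocol) shrinks the GSD, which is the effect the survey comparison quantifies.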
Linear minimum variance filters applied to carrier tracking
NASA Technical Reports Server (NTRS)
Gustafson, D. E.; Speyer, J. L.
1976-01-01
A new approach is taken to the problem of tracking a fixed amplitude signal with a Brownian-motion phase process. Classically, a first-order phase-lock loop (PLL) is used; here, the problem is treated via estimation of the quadrature signal components. In this space, the state dynamics are linear with white multiplicative noise. Therefore, linear minimum-variance filters, which have a particularly simple mechanization, are suggested. The resulting error dynamics are linear at any signal/noise ratio, unlike the classical PLL. During synchronization, and above threshold, this filter with constant gains degrades by 3 per cent in output rms phase error with respect to the classical loop. However, up to 80 per cent of the maximum possible noise improvement is obtained below threshold, where the classical loop is nonoptimum, as demonstrated by a Monte Carlo analysis. Filter mechanizations are presented for both carrier and baseband operation.
The genetic and environmental variance underlying elementary cognitive tasks.
Petrill, S A; Thompson, L A; Detterman, D K
1995-05-01
Although previous studies have examined the genetic and environmental influences upon general intelligence and specific cognitive abilities in school-age children, few studies have examined elementary cognitive tasks in this population. The current study included 149 MZ and 138 same-sex DZ twin pairs who participated in the Western Reserve Twin Project. Thirty measures from the Cognitive Abilities Test (CAT; Detterman, 1986) were studied. Results indicate that (1) these measures are reliable indicators of general intelligence in children and (2) the structure of genetic and environmental influences varies across measures. These results not only indicate that elementary cognitive tasks display heterogeneous genetic and environmental effects, but also may demonstrate that individual differences in biologically based processes are not necessarily due to genetic variance.
Errors in radial velocity variance from Doppler wind lidar
NASA Astrophysics Data System (ADS)
Wang, H.; Barthelmie, R. J.; Doubrawa, P.; Pryor, S. C.
2016-08-01
A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Using both statistically simulated and observed data, this paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10 %.
The use of analysis of variance procedures in biological studies
Williams, B.K.
1987-01-01
The analysis of variance (ANOVA) is widely used in biological studies, yet there remains considerable confusion among researchers about the interpretation of hypotheses being tested. Ambiguities arise when statistical designs are unbalanced, and in particular when not all combinations of design factors are represented in the data. This paper clarifies the relationship among hypothesis testing, statistical modelling and computing procedures in ANOVA for unbalanced data. A simple two-factor fixed effects design is used to illustrate three common parametrizations for ANOVA models, and some associations among these parametrizations are developed. Biologically meaningful hypotheses for main effects and interactions are given in terms of each parametrization, and procedures for testing the hypotheses are described. The standard statistical computing procedures in ANOVA are given along with their corresponding hypotheses. Throughout the development unbalanced designs are assumed and attention is given to problems that arise with missing cells.
Correct use of repeated measures analysis of variance.
Park, Eunsik; Cho, Meehye; Ki, Chang-Seok
2009-02-01
In biomedical research, researchers frequently use statistical procedures such as the t-test, standard analysis of variance (ANOVA), or the repeated measures ANOVA to compare means between the groups of interest. There are frequently some misuses in applying these procedures since the conditions of the experiments or statistical assumptions necessary to apply these procedures are not fully taken into consideration. In this paper, we demonstrate the correct use of repeated measures ANOVA to prevent or minimize ethical or scientific problems due to its misuse. We also describe the appropriate use of multiple comparison tests for follow-up analysis in repeated measures ANOVA. Finally, we demonstrate the use of repeated measures ANOVA by using real data and the statistical software package SPSS (SPSS Inc., USA).
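As a sketch of the computation behind the procedure discussed above, the one-way repeated measures ANOVA F statistic can be assembled directly from sums of squares. The data below are hypothetical (five subjects measured under three conditions), and this is an illustration of the arithmetic, not the SPSS workflow used in the paper:

```python
import numpy as np

def rm_anova(data):
    """One-way repeated measures ANOVA.
    data: (n_subjects, k_conditions) array.
    Returns the F statistic and its degrees of freedom."""
    n, k = data.shape
    grand = data.mean()
    # Between-conditions and between-subjects sums of squares
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_total = ((data - grand) ** 2).sum()
    # Error term: subject-by-condition interaction
    ss_err = ss_total - ss_cond - ss_subj
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    f = (ss_cond / df_cond) / (ss_err / df_err)
    return f, df_cond, df_err

# Hypothetical repeated measurements: 5 subjects x 3 conditions
scores = np.array([
    [6.0, 8.0, 9.0],
    [5.0, 7.0, 8.0],
    [7.0, 9.0, 10.0],
    [4.0, 6.0, 7.0],
    [6.0, 7.0, 9.0],
])
f, df1, df2 = rm_anova(scores)
```

Because each subject serves as their own control, the subject variance is removed from the error term, which is what gives the repeated measures design its power over a between-groups ANOVA.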
Slim completions offer limited stimulation variances: Part 3
Brunsman, B.J.; Matson, R.; Shook, R.A.
1994-12-01
This is the third in a series of five articles addressing barriers to increased US utilization of slimhole drilling and completion techniques. Previous articles discussed slimhole drilling and cementing. The focus of this article is stimulation, with an emphasis on hydraulic fracturing. The series is based on a study conducted for the Gas Research Institute (GRI) by an industry team consisting of Maurer Engineering, BJ Services, Baker Oil Tools, and Halliburton. Parts 1 and 2 were published in the September and October 1994 issues of Petroleum Engineer International, respectively. Potential cost savings resulting from slimhole drilling and completions of gas wells are often inhibited by the limitations on hydraulic fracturing. Variances from conventional fracturing include excessive friction pressure, fracture fluid degradation due to excessive shear rates, proppant bridging, and limited diverting options.
Estimation of measurement variance in the context of environment statistics
NASA Astrophysics Data System (ADS)
Maiti, Pulakesh
2015-02-01
The object of environment statistics is to provide information on the environment, on its most important changes over time and across locations, and on the main factors that influence them. Ultimately, environment statistics are required to produce higher-quality statistical information, for which timely, reliable and comparable data are needed. The lack of proper, uniform definitions and unambiguous classifications poses serious problems for procuring high-quality data, and these problems cause measurement errors. We consider the problem of estimating measurement variance so that measures may be adopted to improve the quality of data on environmental goods and services and on value statements in economic terms. The measurement technique considered here employs personal interviewers, and the sampling design is two-stage sampling.
Interpreting magnetic variance anisotropy measurements in the solar wind
TenBarge, J. M.; Klein, K. G.; Howes, G. G.; Podesta, J. J.
2012-07-10
The magnetic variance anisotropy (A_m) of the solar wind has been used widely as a method to identify the nature of solar wind turbulent fluctuations; however, a thorough discussion of the meaning and interpretation of the A_m has not appeared in the literature. This paper explores the implications and limitations of using the A_m as a method for constraining the solar wind fluctuation mode composition and presents a more informative method for interpreting spacecraft data. The paper also compares predictions of the A_m from linear theory to nonlinear turbulence simulations and solar wind measurements. In both cases, linear theory compares well and suggests that the solar wind for the interval studied is dominantly Alfvénic in the inertial and dissipation ranges down to scales of kρ_i ≈ 5.
Low variance at large scales of WMAP 9 year data
Gruppuso, A.; Finelli, F.; Rosa, A. De; Mandolesi, N.; Natoli, P.; Paci, F.; Molinari, D.
2013-07-01
We use an optimal estimator to study the variance of the WMAP 9-year CMB field at low resolution, in both temperature and polarization. Employing realistic Monte Carlo simulations, we find statistically significant deviations from the ΛCDM model in several sky cuts for the temperature field. For the masks considered in this analysis, which cover at least 54% of the sky, the WMAP 9 CMB sky and ΛCDM are incompatible at ≥ 99.94% C.L. at large angles (> 5°). We find no anomaly in polarization. As a byproduct of our analysis, we present new optimal estimates of the CMB angular power spectra from the WMAP 9-year data at low resolution.
Variance estimation for the Federal Waterfowl Harvest Surveys
Geissler, P.H.
1988-01-01
The Federal Waterfowl Harvest Surveys provide estimates of waterfowl harvest by species for flyways and states, harvests of most other migratory game bird species (by waterfowl hunters), crippling losses for ducks, geese, and coots, days hunted, and bag per hunter. The Waterfowl Hunter Questionnaire Survey separately estimates the harvest of ducks and geese using cluster samples of hunters who buy duck stamps at sample post offices. The Waterfowl Parts Collection estimates species, age, and sex ratios from parts solicited from successful hunters who responded to the Waterfowl Hunter Questionnaire Survey in previous years. These ratios are used to partition the duck and goose harvest into species, age, and sex specific harvest estimates. Annual estimates are correlated because successful hunters who respond to the Questionnaire Survey in one year may be asked to contribute to the Parts Collection for the next three years. Bootstrap variance estimates are used because covariances among years are difficult to estimate.
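The bootstrap variance estimation mentioned above, used because covariances among survey years are difficult to estimate analytically, can be sketched in a few lines: resample the sampling units with replacement, recompute the statistic each time, and take the variance across resamples. The harvest counts below are hypothetical stand-ins, not survey data:

```python
import numpy as np

def bootstrap_variance(sample, statistic, n_boot=2000, seed=0):
    """Bootstrap estimate of the variance of a statistic:
    resample units with replacement and take the variance of the
    statistic across the bootstrap replicates."""
    rng = np.random.default_rng(seed)
    n = len(sample)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample unit indices
        reps[b] = statistic(sample[idx])
    return reps.var(ddof=1)

# Hypothetical per-hunter harvest counts
harvest = np.array([0, 2, 1, 4, 0, 3, 2, 1, 5, 2], dtype=float)
v_boot = bootstrap_variance(harvest, np.mean)
```

In the survey setting, the resampled units would be the clusters of hunters rather than individual responses, so that within-cluster dependence is preserved.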
From means and variances to persons and patterns
Grice, James W.
2015-01-01
A novel approach for conceptualizing and analyzing data from psychological studies is presented and discussed. This approach is centered on model building in an effort to explicate the structures and processes believed to generate a set of observations. These models therefore go beyond the variable-based, path models in use today which are limiting with regard to the types of inferences psychologists can draw from their research. In terms of analysis, the newer approach replaces traditional aggregate statistics such as means, variances, and covariances with methods of pattern detection and analysis. While these methods are person-centered and do not require parametric assumptions, they are both demanding and rigorous. They also provide psychologists with the information needed to draw the primary inference they often wish to make from their research; namely, the inference to best explanation. PMID:26257672
Ant Colony Optimization for Markowitz Mean-Variance Portfolio Model
NASA Astrophysics Data System (ADS)
Deng, Guang-Feng; Lin, Woo-Tsong
This work presents Ant Colony Optimization (ACO), initially developed as a meta-heuristic for combinatorial optimization, for solving the cardinality-constrained Markowitz mean-variance portfolio model (a nonlinear mixed quadratic programming problem). To our knowledge, no efficient exact algorithm for this problem has been proposed to date, making heuristic algorithms imperative. Numerical solutions are obtained for five analyses of weekly price data for the period March 1992 to September 1997 for the following indices: Hang Seng 31 (Hong Kong), DAX 100 (Germany), FTSE 100 (UK), S&P 100 (USA) and Nikkei 225 (Japan). The test results indicate that ACO is much more robust and effective than particle swarm optimization (PSO), especially for low-risk investment portfolios.
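The objective the heuristic searches over can be made concrete. The sketch below evaluates the mean-variance trade-off under a cardinality constraint using plain random search as a baseline stand-in (this is *not* ACO, and the returns and covariance matrix are hypothetical):

```python
import numpy as np

def portfolio_objective(w, mu, cov, lam=0.5):
    """Markowitz trade-off to minimize: lam * risk - (1 - lam) * return."""
    return lam * (w @ cov @ w) - (1 - lam) * (w @ mu)

def random_search(mu, cov, k, n_iter=2000, seed=4):
    """Baseline heuristic: sample random portfolios holding exactly k
    of the n assets and keep the best objective value found."""
    rng = np.random.default_rng(seed)
    n = len(mu)
    best_w, best_f = None, np.inf
    for _ in range(n_iter):
        idx = rng.choice(n, size=k, replace=False)  # cardinality constraint
        raw = rng.random(k)
        w = np.zeros(n)
        w[idx] = raw / raw.sum()                    # weights sum to one
        f = portfolio_objective(w, mu, cov)
        if f < best_f:
            best_f, best_w = f, w
    return best_w, best_f

# Hypothetical expected returns and (diagonal) covariance of 5 assets
mu = np.array([0.10, 0.12, 0.07, 0.03, 0.15])
cov = np.diag([0.04, 0.09, 0.02, 0.01, 0.16])
w, f = random_search(mu, cov, k=3)
```

ACO replaces the blind sampling step with pheromone-guided construction of the asset subset, which is what makes it effective on larger instances; the feasibility handling (exactly k nonzero weights summing to one) is the same.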
A method for the microlensed flux variance of QSOs
NASA Astrophysics Data System (ADS)
Goodman, Jeremy; Sun, Ai-Lei
2014-06-01
A fast and practical method is described for calculating the microlensed flux variance of an arbitrary source by uncorrelated stars. The required inputs are the mean convergence and shear due to the smoothed potential of the lensing galaxy, the stellar mass function, and the absolute square of the Fourier transform of the surface brightness in the source plane. The mathematical approach follows previous authors but has been generalized, streamlined, and implemented in publicly available code. Examples of its application are given for Dexter and Agol's inhomogeneous-disc models as well as the usual Gaussian sources. Since the quantity calculated is a second moment of the magnification, it is only logarithmically sensitive to the sizes of very compact sources. However, for the inferred sizes of actual quasi-stellar objects (QSOs), it has some discriminatory power and may lend itself to simple statistical tests. At the very least, it should be useful for testing the convergence of microlensing simulations.
The dynamic Allan variance II: a fast computational algorithm.
Galleani, Lorenzo
2010-01-01
The stability of an atomic clock can change with time due to several factors, such as temperature, humidity, radiations, aging, and sudden breakdowns. The dynamic Allan variance, or DAVAR, is a representation of the time-varying stability of an atomic clock, and it can be used to monitor the clock behavior. Unfortunately, the computational time of the DAVAR grows very quickly with the length of the analyzed time series. In this article, we present a fast algorithm for the computation of the DAVAR, and we also extend it to the case of missing data. Numerical simulations show that the fast algorithm dramatically reduces the computational time. The fast algorithm is useful when the analyzed time series is long, or when many clocks must be monitored, or when the computational power is low, as happens onboard satellites and space probes.
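The idea behind the DAVAR, the Allan variance recomputed over a sliding analysis window so that changes in stability become visible, can be sketched as follows. This is the plain (slow) algorithm, not the fast one the paper contributes, and the clock data are synthetic, with stability deliberately degraded halfway through:

```python
import numpy as np

def allan_variance(y, tau_m):
    """Allan variance of fractional-frequency data y at averaging
    factor tau_m samples (non-overlapping estimator, for brevity)."""
    n = len(y) // tau_m
    ybar = y[: n * tau_m].reshape(n, tau_m).mean(axis=1)
    return 0.5 * np.mean(np.diff(ybar) ** 2)

def davar(y, tau_m, window, step):
    """Dynamic Allan variance: AVAR evaluated over a sliding window,
    giving a time-varying picture of clock stability."""
    return np.array([
        allan_variance(y[t : t + window], tau_m)
        for t in range(0, len(y) - window + 1, step)
    ])

rng = np.random.default_rng(1)
y = rng.normal(0.0, 1e-3, size=4000)           # white frequency noise (toy clock)
y[2000:] += rng.normal(0.0, 5e-3, size=2000)   # stability degrades halfway through
curve = davar(y, tau_m=10, window=1000, step=500)
```

Each window requires a full AVAR computation, which is why the cost grows quickly with series length and why a fast algorithm matters onboard satellites.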
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-15
... Variance of License Article 403 and Soliciting Comments, Motions to Intervene and Protests Take notice that... inspection: a. Application Type: Extension of temporary variance of license article 403. b. Project No: 12514... Commission to grant an extension of time to a temporary variance of license Article 403 that was granted...
Analysis of Variance of Migmatite Composition II: Comparison of Two Areas.
Ward, R F; Werner, S L
1964-03-01
To obtain comparison with previous results an analysis of variance was made on measurements of proportion of granite and country rock in a second Colorado migmatite. The distributional parameters (mean and variance) of both regions are similar, but the distributions of variance among the three levels of the nested design differ radically.
40 CFR 142.302 - Who can issue a small system variance?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Who can issue a small system variance? 142.302 Section 142.302 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... General Provisions § 142.302 Who can issue a small system variance? A small system variance under...
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2011-01-01
Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
Modeling Heterogeneous Variance-Covariance Components in Two-Level Models
ERIC Educational Resources Information Center
Leckie, George; French, Robert; Charlton, Chris; Browne, William
2014-01-01
Applications of multilevel models to continuous outcomes nearly always assume constant residual variance and constant random effects variances and covariances. However, modeling heterogeneity of variance can prove a useful indicator of model misspecification, and in some educational and behavioral studies, it may even be of direct substantive…
Water vapor variance measurements using a Raman lidar
NASA Technical Reports Server (NTRS)
Evans, K.; Melfi, S. H.; Ferrare, R.; Whiteman, D.
1992-01-01
Because of the importance of atmospheric water vapor variance, we have analyzed data from the NASA/Goddard Raman lidar to obtain temporal scales of water vapor mixing ratio as a function of altitude over observation periods extending to 12 hours. The ground-based lidar measures water vapor mixing ratio from near the earth's surface to an altitude of 9-10 km. Moisture profiles are acquired once every minute with 75 m vertical resolution. Data at each 75 m altitude level can be displayed as a function of time from the beginning to the end of an observation period. These time sequences have been spectrally analyzed using a fast Fourier transform technique. An example of such a temporal spectrum, obtained between 00:22 and 10:29 UT on December 6, 1991, is shown in the figure. The curve represents the spectral average of data from 11 height levels centered on an altitude of 1 km (1 ± 0.375 km). The spectrum shows a decrease in energy density with frequency which generally follows a -5/3 power law over the spectral interval 3×10^-5 to 4×10^-3 Hz. The flattening of the spectrum for frequencies greater than 6×10^-3 Hz is most likely a measure of instrumental noise. Spectra like that shown in the figure have been calculated for other altitudes and show changes in spectral features with height. Spectral analyses versus height have been performed for several observation periods and demonstrate changes in the water vapor mixing ratio spectral character from one observation period to the next. The combination of these temporal spectra with independent measurements of winds aloft provides an opportunity to infer spatial scales of moisture variance.
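The spectral analysis described above, an FFT of the time sequence at each altitude level followed by inspection of the power-law slope, can be sketched as follows. The series here is a synthetic red-noise stand-in sampled once per minute, not lidar data:

```python
import numpy as np

def power_spectrum(x, dt):
    """One-sided power spectral density of a uniformly sampled series
    via the FFT, with the mean removed first."""
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x), d=dt)
    psd = (np.abs(np.fft.rfft(x)) ** 2) * dt / len(x)
    return freqs[1:], psd[1:]   # drop the zero-frequency bin

# Hypothetical 10-hour record sampled once per minute, like a single
# 75 m altitude level; a random walk stands in for the mixing ratio.
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=600))
freq, psd = power_spectrum(x, dt=60.0)
```

For turbulence-like signals the log-log slope of `psd` against `freq` would be examined for the expected -5/3 inertial-range behavior; a flat tail at high frequencies signals instrumental noise, as in the abstract.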
A variance-decomposition approach to investigating multiscale habitat associations
Lawler, J.J.; Edwards, T.C., Jr.
2006-01-01
The recognition of the importance of spatial scale in ecology has led many researchers to take multiscale approaches to studying habitat associations. However, few of the studies that investigate habitat associations at multiple spatial scales have considered the potential effects of cross-scale correlations in measured habitat variables. When cross-scale correlations in such studies are strong, conclusions drawn about the relative strength of habitat associations at different spatial scales may be inaccurate. Here we adapt and demonstrate an analytical technique based on variance decomposition for quantifying the influence of cross-scale correlations on multiscale habitat associations. We used the technique to quantify the variation in nest-site locations of Red-naped Sapsuckers (Sphyrapicus nuchalis) and Northern Flickers (Colaptes auratus) associated with habitat descriptors at three spatial scales. We demonstrate how the method can be used to identify components of variation that are associated only with factors at a single spatial scale as well as shared components of variation that represent cross-scale correlations. Despite the fact that no explanatory variables in our models were highly correlated (r < 0.60), we found that shared components of variation reflecting cross-scale correlations accounted for roughly half of the deviance explained by the models. These results highlight the importance of both conducting habitat analyses at multiple spatial scales and of quantifying the effects of cross-scale correlations in such analyses. Given the limits of conventional analytical techniques, we recommend alternative methods, such as the variance-decomposition technique demonstrated here, for analyzing habitat associations at multiple spatial scales. © The Cooper Ornithological Society 2006.
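The decomposition into unique and shared components can be sketched with ordinary least squares on simulated data. The variable names and data-generating setup below are hypothetical, and R² stands in for the deviance of the authors' models; the logic (full-model fit minus reduced-model fits) is the same:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Hypothetical habitat predictors at two spatial scales that share variation
rng = np.random.default_rng(2)
common = rng.normal(size=200)                 # source of cross-scale correlation
fine = common + 0.5 * rng.normal(size=200)    # fine-scale habitat variable
coarse = common + 0.5 * rng.normal(size=200)  # coarse-scale habitat variable
y = fine + coarse + rng.normal(size=200)      # response (e.g. nest-site index)

r_full = r_squared(np.column_stack([fine, coarse]), y)
r_fine = r_squared(fine[:, None], y)
r_coarse = r_squared(coarse[:, None], y)

unique_fine = r_full - r_coarse      # explained only at the fine scale
unique_coarse = r_full - r_fine      # explained only at the coarse scale
shared_part = r_full - unique_fine - unique_coarse  # cross-scale correlation
```

Even with modest pairwise correlations, the shared component can dominate, which is the paper's central caution about interpreting scale-specific effects.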
Hodological Resonance, Hodological Variance, Psychosis, and Schizophrenia: A Hypothetical Model
Birkett, Paul Brian Lawrie
2011-01-01
Schizophrenia is a disorder with a large number of clinical, neurobiological, and cognitive manifestations, none of which is invariably present. However it appears to be a single nosological entity. This article considers the likely characteristics of a pathology capable of such diverse consequences. It is argued that both deficit and psychotic symptoms can be manifestations of a single pathology. A general model of psychosis is proposed in which the informational sensitivity or responsivity of a network (“hodological resonance”) becomes so high that it activates spontaneously, to produce a hallucination, if it is in sensory cortex, or another psychotic symptom if it is elsewhere. It is argued that this can come about because of high levels of modulation such as those assumed present in affective psychosis, or because of high levels of baseline resonance, such as those expected in deafferentation syndromes associated with hallucinations, for example, Charles Bonnet. It is further proposed that schizophrenia results from a process (probably neurodevelopmental) causing widespread increases of variance in baseline resonance; consequently some networks possess high baseline resonance and become susceptible to spontaneous activation. Deficit symptoms might result from the presence of networks with increased activation thresholds. This hodological variance model is explored in terms of schizo-affective disorder, transient psychotic symptoms, diathesis-stress models, mechanisms of antipsychotic pharmacotherapy and persistence of genes predisposing to schizophrenia. Predictions and implications of the model are discussed. In particular it suggests a need for more research into psychotic states and for more single case-based studies in schizophrenia. PMID:21811475
The Mobius Effect: Addressing Learner Variance in Schools
ERIC Educational Resources Information Center
Tomlinson, Carol Ann
2004-01-01
Currently, educators separate out from typical students those whose learning needs vary from the norm. The norming and sorting process may earmark students as "different" without providing markedly unique instruction and without producing robust academic outcomes. An alternative to fragmentation for some students is the creation of classrooms in…
NASA Astrophysics Data System (ADS)
Maginnis, P. A.; West, M.; Dullerud, G. E.
2016-10-01
We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes: countable-state, discrete-time Markov chains driven by additive Poisson noise, i.e., lattice discrete-time Markov chains. In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity: gene expression (affine state-dependent rates), and aerosol particle coagulation with emission and human immunodeficiency virus infection (both with nonlinear state-dependent rates). Our algorithm views the system dynamics as a "black box", i.e., we only require control of the pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general case of nonlinear state-dependent intensity rates, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
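The negatively correlated trajectory pairs can be illustrated by pushing antithetic uniforms through the Poisson inverse CDF, so that each step's increments in the two paths are negatively correlated. The birth process below is a deliberately trivial stand-in (constant rate, so each tau-leap increment is exact), not one of the paper's example systems:

```python
import math
import numpy as np

def poisson_inv(u, lam):
    """Inverse CDF of Poisson(lam): smallest k with F(k) >= u.
    The p > 0 guard ensures termination even if u rounds to 1.0."""
    k, p = 0, math.exp(-lam)
    cdf = p
    while cdf < u and p > 0.0:
        k += 1
        p *= lam / k
        cdf += p
    return k

def terminal_count(uniforms, lam_dt):
    """Tau-leap a pure birth process X -> X + Poisson(lam*dt) per step,
    driven by the supplied uniform variates."""
    return sum(poisson_inv(u, lam_dt) for u in uniforms)

rng = np.random.default_rng(3)
steps, lam_dt, n_pairs = 20, 1.5, 2000

plain, antithetic = [], []
for _ in range(n_pairs):
    # Independent pair: average of two unrelated trajectories
    u1, u2 = rng.random(steps), rng.random(steps)
    plain.append(0.5 * (terminal_count(u1, lam_dt) + terminal_count(u2, lam_dt)))
    # Antithetic pair: same uniforms, mirrored, so increments anti-correlate
    u = rng.random(steps)
    antithetic.append(0.5 * (terminal_count(u, lam_dt) + terminal_count(1.0 - u, lam_dt)))

var_plain = np.var(plain, ddof=1)
var_anti = np.var(antithetic, ddof=1)
```

Both estimators are unbiased for the terminal mean; the antithetic pairing only changes how the pseudorandom inputs are coupled, which is what makes the "black box" retrofit possible.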
Variance reduction for Fokker–Planck based particle Monte Carlo schemes
Gorji, M. Hossein; Andric, Nemanja; Jenny, Patrick
2015-08-15
Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, the variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational based schemes were derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is simultaneously solved together with the main one. By correlating the two processes, the statistical errors can dramatically be reduced; especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
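The correlated-process idea above, solving an auxiliary process with a known solution alongside the main one and using the correlation to cancel statistical error, is the classic control-variate construction. A minimal sketch on a toy expectation follows; the functions f and g are illustrative, not gas-flow quantities:

```python
import numpy as np

def estimate(n, seed=0):
    """Estimate E[f(X)] for f(x) = x^4 with X ~ N(0,1), using the
    correlated auxiliary g(x) = x^2 whose exact mean (= 1) is known."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)
    f, g = x**4, x**2
    # Near-optimal coefficient b* = cov(f, g) / var(g)
    b = np.cov(f, g)[0, 1] / np.var(g, ddof=1)
    plain = f.mean()
    controlled = f.mean() - b * (g.mean() - 1.0)  # 1.0 = known E[g]
    return plain, controlled

# Compare estimator spread over independent replications
reps = [estimate(500, seed=s) for s in range(200)]
plain = np.array([r[0] for r in reps])
controlled = np.array([r[1] for r in reps])
```

Both estimators target E[X^4] = 3; the controlled one exploits the strong correlation between f and g, analogous to correlating the Fokker-Planck process with its known-solution companion.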
Hether, Tyler D; Hohenlohe, Paul A
2014-04-01
Systems biology is accumulating a wealth of understanding about the structure of genetic regulatory networks, leading to a more complete picture of the complex genotype-phenotype relationship. However, models of multivariate phenotypic evolution based on quantitative genetics have largely not incorporated a network-based view of genetic variation. Here we model a set of two-node, two-phenotype genetic network motifs, covering a full range of regulatory interactions. We find that network interactions result in different patterns of mutational (co)variance at the phenotypic level (the M-matrix), not only across network motifs but also across phenotypic space within single motifs. This effect is due almost entirely to mutational input of additive genetic (co)variance. Variation in M has the effect of stretching and bending phenotypic space with respect to evolvability, analogous to the curvature of space-time under general relativity, and similar mathematical tools may apply in each case. We explored the consequences of curvature in mutational variation by simulating adaptation under divergent selection with gene flow. Both standing genetic variation (the G-matrix) and rate of adaptation are constrained by M, so that G and adaptive trajectories are curved across phenotypic space. Under weak selection the phenotypic mean at migration-selection balance also depends on M. PMID:24219635
Waste Isolation Pilot Plant no-migration variance petition. Executive summary
Not Available
1990-12-31
Section 3004 of RCRA allows EPA to grant a variance from the land disposal restrictions when a demonstration can be made that, to a reasonable degree of certainty, there will be no migration of hazardous constituents from the disposal unit for as long as the waste remains hazardous. Specific requirements for making this demonstration are found in 40 CFR 268.6, and EPA has published a draft guidance document to assist petitioners in preparing a variance request. Throughout the course of preparing this petition, technical staff from DOE, EPA, and their contractors have met frequently to discuss and attempt to resolve issues specific to radioactive mixed waste and the WIPP facility. The DOE believes it meets or exceeds all requirements set forth for making a successful "no-migration" demonstration. The petition presents information under five general headings: (1) waste information; (2) site characterization; (3) facility information; (4) assessment of environmental impacts, including the results of waste mobility modeling; and (5) analysis of uncertainties. Additional background and supporting documentation is contained in the 15 appendices to the petition, as well as in an extensive addendum published in October 1989.
Constructing Dense Graphs with Unique Hamiltonian Cycles
ERIC Educational Resources Information Center
Lynch, Mark A. M.
2012-01-01
It is not difficult to construct dense graphs containing Hamiltonian cycles, but it is difficult to generate dense graphs that are guaranteed to contain a unique Hamiltonian cycle. This article presents an algorithm for generating arbitrarily large simple graphs containing "unique" Hamiltonian cycles. These graphs can be turned into dense graphs…
Unique Challenges Testing SDRs for Space
NASA Technical Reports Server (NTRS)
Johnson, Sandra; Chelmins, David; Downey, Joseph; Nappier, Jennifer
2013-01-01
This paper describes the approach used by the Space Communication and Navigation (SCaN) Testbed team to qualify three Software Defined Radios (SDR) for operation in space and the characterization of the platform to enable upgrades on-orbit. The three SDRs represent a significant portion of the new technologies being studied on board the SCaN Testbed, which is operating on an external truss on the International Space Station (ISS). The SCaN Testbed provides experimenters an opportunity to develop and demonstrate experimental waveforms and applications for communication, networking, and navigation concepts and advance the understanding of developing and operating SDRs in space. Qualifying a Software Defined Radio for the space environment requires additional consideration versus a hardware radio. Tests that incorporate characterization of the platform to provide information necessary for future waveforms, which might exercise extended capabilities of the hardware, are needed. The development life cycle for the radio follows the software development life cycle, where changes can be incorporated at various stages of development and test. It also enables flexibility to be added with minor additional effort. Although this provides tremendous advantages, managing the complexity inherent in a software implementation requires testing beyond the traditional hardware radio test plan. Due to schedule and resource limitations and parallel development activities, the subsystem testing of the SDRs at the vendor sites was primarily limited to typical fixed transceiver type of testing. NASA's Glenn Research Center (GRC) was responsible for the integration and testing of the SDRs into the SCaN Testbed system and conducting the investigation of the SDR to advance the technology to be accepted by missions. This paper will describe the unique tests that were conducted at both the subsystem and system level, including environmental testing, and present results. For example, test
Unique Challenges Testing SDRs for Space
NASA Technical Reports Server (NTRS)
Chelmins, David; Downey, Joseph A.; Johnson, Sandra K.; Nappier, Jennifer M.
2013-01-01
This paper describes the approach used by the Space Communication and Navigation (SCaN) Testbed team to qualify three Software Defined Radios (SDRs) for operation in space and to characterize the platform to enable upgrades on-orbit. The three SDRs represent a significant portion of the new technologies being studied on board the SCaN Testbed, which operates on an external truss on the International Space Station (ISS). The SCaN Testbed provides experimenters an opportunity to develop and demonstrate experimental waveforms and applications for communication, networking, and navigation concepts, and to advance the understanding of developing and operating SDRs in space. Qualifying a Software Defined Radio for the space environment requires additional considerations compared to a hardware radio. Tests that characterize the platform are needed to provide information for future waveforms, which might exercise extended capabilities of the hardware. The development life cycle for the radio follows the software development life cycle, in which changes can be incorporated at various stages of development and test, and flexibility can be added with minor additional effort. Although this provides tremendous advantages, managing the complexity inherent in a software implementation requires testing beyond the traditional hardware radio test plan. Due to schedule and resource limitations and parallel development activities, subsystem testing of the SDRs at the vendor sites was primarily limited to typical fixed-transceiver testing. NASA's Glenn Research Center (GRC) was responsible for integrating and testing the SDRs in the SCaN Testbed system and for conducting the investigation of the SDRs to advance the technology toward acceptance by missions. This paper describes the unique tests that were conducted at both the subsystem and system level, including environmental testing, and presents results.
Unique antitumor property of the Mg-Ca-Sr alloys with addition of Zn
Wu, Yuanhao; He, Guanping; Zhang, Yu; Liu, Yang; Li, Mei; Wang, Xiaolan; Li, Nan; Li, Kang; Zheng, Guan; Zheng, Yufeng; Yin, Qingshui
2016-01-01
In clinical practice, tumor recurrence and metastasis after orthopedic prosthesis implantation is a serious problem. Developing implant materials with antitumor properties is therefore highly desirable. Magnesium (Mg) alloys possess superb biocompatibility, mechanical properties, and biodegradability in orthopedic applications, but whether they possess antitumor properties has seldom been reported. Recent studies have shown that zinc (Zn) not only promotes osteogenic activity but also exhibits good antitumor properties. In the present study, Zn was selected as an alloying element for the Mg-1Ca-0.5Sr alloy to develop a multifunctional material with antitumor properties. We investigated the influence of Mg-1Ca-0.5Sr-xZn (x = 0, 2, 4, 6 wt%) alloy extracts on the proliferation rate, apoptosis, migration, and invasion of the U2OS cell line. Our results show that Zn-containing Mg alloy extracts inhibit cell proliferation by altering the cell cycle and inducing apoptosis via activation of the mitochondrial pathway. Cell migration and invasion were also suppressed through activation of the MAPK (mitogen-activated protein kinase) pathway. Our work suggests that the Mg-1Ca-0.5Sr-6Zn alloy is a promising orthopedic implant for osteosarcoma limb-salvage surgery, helping to avoid tumor recurrence and metastasis. PMID:26907515
Mara Field, a unique giant in Venezuela
Young, G.A.
1993-02-01
The Mara field is located in Venezuela, 45 km northwest of Maracaibo, on the Mara-La Paz anticlinal trend. Discovered in 1945 by the Caribbean Petroleum Co. (Shell group), the field had produced 407 MMB as of 1991 and has remaining proven reserves of 60 MMB, and probable and possible reserves of 58 MMB, for an ultimate potential recovery of 525 MMB. In addition to being a giant field, Mara is unique in that it produces from fractured igneous basement rocks as well as from fractured Cretaceous limestones, which are the source rocks of the region, and from Paleocene/Eocene sandstone and carbonate reservoirs. The sedimentary section comprises beds ranging from Early Cretaceous to middle Eocene, which suffered considerable erosion, and overlying Plio-Pleistocene sediments, all of which were involved in the latest strong deformation. The structure of the field is complex: a main thrust zone (consisting of numerous individual faults) borders the northwest flank of the elongated anticline, and an opposing minor thrust zone cuts the southeast flank, forming a thrusted horst. Oblique transverse faults also cut the structure. By studying the patterns of cumulative production and lost circulation, it was possible to derive relationships between the oil accumulations, the faulting, and conceptual patterns of related fracturing in the different types of reservoir rocks. The study indicates that the more prospective drilling locations can be predicted on this or similar structures involving basement and limestone, and this information may be applicable to plays in other regions.
On the uniqueness of Kerr-Newman black holes
NASA Astrophysics Data System (ADS)
Wong, Willie Wai-Yeung
The uniqueness of the Kerr-Newman family of black hole metrics as stationary asymptotically flat solutions to the Einstein equations coupled to a free Maxwell field is a crucial ingredient in the study of final states of the universe in general relativity. If one imposes the additional requirement that the space-time is axially symmetric, then this uniqueness was established by the works of B. Carter, D.C. Robinson, G.L. Bunting, and P.O. Mazur during the 1970s and 80s. In the real-analytic category, the condition of axial symmetry can be removed through S. Hawking's Rigidity Theorem. The construction used in Hawking's proof, however, breaks down in the smooth category, as it requires solving an ill-posed hyperbolic partial differential equation. The uniqueness problem for Kerr-Newman metrics in the smooth category is considered here, following the program initiated by A. Ionescu and S. Klainerman for the uniqueness of the Kerr metrics among solutions to the Einstein vacuum equations. In this work, a space-time, tensorial characterization of the Kerr-Newman solutions is obtained, generalizing an earlier work of M. Mars. The characterization tensors are shown to obey hyperbolic partial differential equations. Using the general Carleman inequality of Ionescu and Klainerman, the uniqueness of Kerr-Newman metrics is proven, conditional on a rigidity assumption on the bifurcate event horizon.
NASA Astrophysics Data System (ADS)
Asanuma, Jun
Variances of the velocity components and scalars are important as indicators of turbulence intensity. They can also be used to estimate surface fluxes via several types of "variance methods", and the estimated fluxes can be regional values if the variances from which they are calculated are regionally representative measurements. Motivated by these considerations, variances measured by an aircraft in the unstable atmospheric boundary layer (ABL) over a flat pine forest during HAPEX-Mobilhy were analyzed within the context of similarity scaling arguments. The variances of temperature and vertical velocity within the atmospheric surface layer were found to follow the Monin-Obukhov similarity (MOS) theory closely, and to yield reasonable estimates of the surface sensible heat fluxes when used in variance methods. This validates the variance methods with aircraft measurements. On the other hand, the specific humidity variances were influenced by surface heterogeneity and clearly failed to obey MOS. A simple analysis based on the similarity law for free convection produced a comprehensible and quantitative picture of the effect of surface flux heterogeneity on the statistical moments, and revealed that variances of active and passive scalars become dissimilar because of their different roles in turbulence. The analysis also indicated that the mean quantities are affected by the heterogeneity as well, but to a lesser extent than the variances. The temperature variances in the mixed layer (ML) were examined using a generalized top-down bottom-up diffusion model with several combinations of velocity scales and inversion flux models. The results showed that the surface shear stress exerts considerable influence on the lower ML. ML variance methods were also tested with the temperature and vertical velocity variances, and their feasibility was investigated. Finally, the variances in the ML were analyzed in terms of the local similarity concept; the results confirmed the original
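The surface-layer flux-variance idea described above can be illustrated with the free-convection limit of Monin-Obukhov similarity, in which the sensible heat flux is recovered from the measured temperature standard deviation alone. This is a generic textbook sketch, not the study's exact formulation; the similarity constant C1 ≈ 0.95 and the air properties are assumed literature values:

```python
import numpy as np

KAPPA = 0.4      # von Karman constant
G = 9.81         # gravitational acceleration, m s^-2
RHO = 1.2        # air density, kg m^-3
CP = 1005.0      # specific heat of air, J kg^-1 K^-1
C1 = 0.95        # empirical free-convection similarity constant (assumed)

def sensible_heat_flux(sigma_T, z, T_mean):
    """Estimate surface sensible heat flux (W m^-2) from the temperature
    standard deviation sigma_T (K) at height z (m) over mean temperature
    T_mean (K), using sigma_T/T* = C1 (-z/L)^(-1/3) in the free-convection
    limit, where u* cancels out of the relation."""
    w_T = (sigma_T / C1) ** 1.5 * np.sqrt(KAPPA * G * z / T_mean)
    return RHO * CP * w_T

# Example: sigma_T = 0.5 K measured at z = 30 m, T_mean = 300 K
H = sensible_heat_flux(0.5, 30.0, 300.0)
print(f"H = {H:.0f} W m^-2")
```

The kinematic flux scales as sigma_T to the 3/2 power, so errors in the measured variance are amplified in the flux estimate; this is one reason regionally representative variance measurements matter.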
Genetic Variance in Processing Speed Drives Variation in Aging of Spatial and Memory Abilities
Finkel, Deborah; McArdle, John J.; Reynolds, Chandra A.; Hamagami, Fumiaki; Pedersen, Nancy L.
2013-01-01
Previous analyses have identified a genetic contribution to the correlation between declines with age in processing speed and higher cognitive abilities. The goal of the current analysis was to apply the biometric dual change score model to consider the possibility of temporal dynamics underlying the genetic covariance between aging trajectories for processing speed and cognitive abilities. Longitudinal twin data from the Swedish Adoption/Twin Study of Aging, including up to 5 measurement occasions covering a 16-year period, were available from 806 participants ranging in age from 50 to 88 years at the 1st measurement wave. Factors were generated to tap 4 cognitive domains: verbal ability, spatial ability, memory, and processing speed. Model-fitting indicated that genetic variance for processing speed was a leading indicator of variation in age changes for spatial and memory ability, providing additional support for processing speed theories of cognitive aging. PMID:19413434
Lubke, Gitta H; Hottenga, Jouke Jan; Walters, Raymond; Laurin, Charles; de Geus, Eco J C; Willemsen, Gonneke; Smit, Jan H; Middeldorp, Christel M; Penninx, Brenda W J H; Vink, Jacqueline M; Boomsma, Dorret I
2012-10-15
Genome-wide association studies of psychiatric disorders have been criticized because they explain only a small proportion of the heritability established in twin and family studies. Genome-wide association studies of major depressive disorder in particular have so far been unsuccessful in detecting genome-wide significant single nucleotide polymorphisms (SNPs). Using two recently proposed methods designed to estimate the heritability of a phenotype that is attributable to genome-wide SNPs, we show that SNPs on current platforms contain substantial information concerning the additive genetic variance of major depressive disorder. To assess the consistency of these two methods, we analyzed four other complex phenotypes from different domains. The pattern of results is consistent with estimates of heritability obtained in twin studies carried out in the same population.
NASA Astrophysics Data System (ADS)
Li, Min-Yang; Yang, Mingchia; Vargas, Emily; Neff, Kyle; Vanli, Arda; Liang, Richard
2016-09-01
One of the major challenges in controlling the transfer of electrical and mechanical properties of nanotubes into nanocomposites is the lack of adequate measurement systems to quantify variations in bulk properties when nanotubes are used as the reinforcement material. In this study, we conducted one-way analysis of variance (ANOVA) on thickness and conductivity measurements. By analyzing data collected from both experienced and inexperienced operators, we identified operational details that users may overlook and that introduce variation, since conductivity measurements of CNT thin films are highly sensitive to the thickness measurement. In addition, we demonstrated how measurement issues damaged samples and limited the number of replications, resulting in large variations in the electrical conductivity results. Based on this study, we propose a faster, more reliable approach to measuring the thickness of CNT thin films that makes the measurement process less dependent on operator skill.
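The operator comparison described above can be sketched with a one-way ANOVA. The data below are synthetic, not from the study: three hypothetical operators measure the same film, with one operator assumed to introduce a systematic bias:

```python
import numpy as np

def one_way_anova_F(*groups):
    """Return the F statistic for a one-way ANOVA: the ratio of the
    between-group mean square to the within-group mean square."""
    all_data = np.concatenate(groups)
    grand_mean = all_data.mean()
    k = len(groups)                      # number of groups (operators)
    n = all_data.size                    # total number of measurements
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

rng = np.random.default_rng(0)
# Synthetic thickness readings (um) of one CNT film by three operators;
# operator C is hypothetically biased low (e.g. excess probe pressure).
op_a = rng.normal(25.0, 0.8, size=10)
op_b = rng.normal(25.2, 0.8, size=10)
op_c = rng.normal(22.0, 0.8, size=10)

F = one_way_anova_F(op_a, op_b, op_c)
print(f"F = {F:.1f}")  # a large F flags operator-dependent bias
```

A large F statistic (relative to the F distribution with k-1 and n-k degrees of freedom) indicates that the between-operator variation exceeds what the within-operator scatter would explain, which then propagates directly into the conductivity estimate.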
Image noise variance estimation using the mixed Lagrange time-delay autoregressive model.
Sim, K-S; Tso, C-P; Law, K-K
2008-04-01
The mixed Lagrange time-delay estimation autoregressive (MLTDEAR) model is proposed as a solution to estimate image noise variance. The only information available to the proposed estimator is a corrupted image and the nature of additive white noise. The image autocorrelation function is calculated and used to obtain the MLTDEAR model coefficients; the relationship between the MLTDEAR and linear prediction models is utilized to estimate the model coefficients. The forward-backward prediction is then used to obtain the predictor coefficients; the MLTDEAR model coefficients and prior samples of zero-offset autocorrelation values are next used to predict the power of the noise-free image. Furthermore, the fundamental performance limit of the signal and noise estimation, as derived from the Cramer-Rao inequality, is presented.
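For contrast with the autoregressive-model approach above, a much simpler robust noise-variance estimator is often used as a baseline. This sketch is not the MLTDEAR estimator; it uses the standard median-absolute-deviation (MAD) idea: for a smooth image with additive white Gaussian noise, first differences are dominated by the noise, and the MAD of those differences gives a robust estimate of the noise standard deviation:

```python
import numpy as np

def estimate_noise_std(img):
    """Robust estimate of additive white Gaussian noise sigma for a
    smooth image: horizontal first differences (scaled by 1/sqrt(2) so
    their std equals sigma) fed into the MAD-to-sigma conversion."""
    d = np.diff(img, axis=1) / np.sqrt(2.0)
    return 1.4826 * np.median(np.abs(d))  # 1.4826 = 1/Phi^-1(3/4)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 4.0 * np.pi, 256)
clean = np.outer(np.sin(x), np.cos(x))    # smooth synthetic "image"
noisy = clean + rng.normal(0.0, 0.1, clean.shape)

sigma_hat = float(estimate_noise_std(noisy))
print(f"estimated sigma = {sigma_hat:.3f} (true 0.1)")
```

Model-based estimators such as the MLTDEAR approach aim to do better than this baseline when the smoothness assumption fails, by explicitly predicting the noise-free signal power from the autocorrelation structure.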
Onions: a source of unique dietary flavonoids.
Slimestad, Rune; Fossen, Torgils; Vågen, Ingunn Molund
2007-12-12
Onion bulbs (Allium cepa L.) are among the richest sources of dietary flavonoids and contribute to a large extent to the overall intake of flavonoids. This review includes a compilation of the existing qualitative and quantitative information about flavonoids reported to occur in onion bulbs, including NMR spectroscopic evidence used for structural characterization. In addition, a summary is given to index onion cultivars according to their content of flavonoids measured as quercetin. Only compounds belonging to the flavonols, the anthocyanins, and the dihydroflavonols have been reported to occur in onion bulbs. Yellow onions contain 270-1187 mg of flavonols per kilogram of fresh weight (FW), whereas red onions contain 415-1917 mg of flavonols per kilogram of FW. Flavonols are the predominant pigments of onions. At least 25 different flavonols have been characterized, and quercetin derivatives are the most important ones in all onion cultivars. Their glycosyl moieties are almost exclusively glucose, which is mainly attached at the 4'-, 3-, and/or 7-positions of the aglycones. Quercetin 4'-glucoside and quercetin 3,4'-diglucoside are in most cases reported as the main flavonols in recent literature. Analogous derivatives of kaempferol and isorhamnetin have been identified as minor pigments. Recent reports indicate that the outer dry layers of onion bulbs contain oligomeric structures of quercetin in addition to condensation products of quercetin and protocatechuic acid. The anthocyanins of red onions are mainly cyanidin glucosides, either acylated with malonic acid or nonacylated. Some of these pigments exhibit unique structural features such as 4'-glycosylation and unusual substitution patterns of the sugar moieties. Altogether, at least 25 different anthocyanins have been reported from red onions, including two novel 5-carboxypyranocyanidin derivatives. The quantitative content of anthocyanins in some red onion cultivars has been reported to be approximately 10% of the total
The ARMC5 gene shows extensive genetic variance in primary macronodular adrenocortical hyperplasia
Correa, Ricardo; Zilbermint, Mihail; Berthon, Annabel; Espiard, Stephanie; Batsis, Maria; Papadakis, Georgios Z.; Xekouki, Paraskevi; Lodish, Maya B.; Bertherat, Jerome; Faucz, Fabio R.; Stratakis, Constantine A.
2015-01-01
Objective Primary macronodular adrenal hyperplasia (PMAH) is a rare type of Cushing's syndrome (CS) that results in increased cortisol production and bilateral enlargement of the adrenal glands. Recent work showed that the disease may be caused by germline and somatic mutations in the ARMC5 gene, a likely tumor-suppressor gene (TSG). We investigated 20 different adrenal nodules from one patient with PMAH for ARMC5 somatic sequence changes. Design All of the nodules were obtained from a single patient who underwent bilateral adrenalectomy. DNA was extracted by standard protocols and the ARMC5 sequence was determined by the Sanger method. Results Sixteen of 20 adrenocortical nodules harbored, in addition to what appeared to be the germline mutation, a second somatic variant. The p.Trp476* sequence change was present in all 20 nodules, as well as in normal tissue from the adrenal capsule, identifying it as the germline defect; each of the 16 other variants was found in a different nodule: 6 were frameshift, 4 were missense, 3 were nonsense, and 1 was a splice-site variation. Allelic losses were confirmed in 2 of the nodules. Conclusion This is the greatest degree of genetic variance in the ARMC5 gene described to date in a single patient with PMAH: each of 16 adrenocortical nodules had a second new, "private", and in most cases completely inactivating ARMC5 defect, in addition to the germline mutation. The data support the notion that ARMC5 is a TSG that needs a second, somatic hit to mediate tumorigenesis leading to polyclonal nodularity; however, the driver of this extensive genetic variance of the second ARMC5 allele in adrenocortical tissue in the context of a germline defect and PMAH remains a mystery. PMID:26162405
Falls Prevention: Unique to Older Adults
Unique Biosignatures in Caves of All Lithologies
NASA Astrophysics Data System (ADS)
Boston, P. J.; Schubert, K. E.; Gomez, E.; Conrad, P. G.
2015-10-01
Unique maze-like microbial community patterns on cave surfaces, found in all lithologies worldwide, are excellent candidate biosignatures for life-detection missions to caves and other extraterrestrial environments.
Unique Ideas in a New Facility
ERIC Educational Resources Information Center
Hamby, G. W.
1977-01-01
Unique features of a new vocational agriculture department facility in Diamond, Missouri, are described, which include an overhead hoist system, arc welders, storage areas, paint room, and greenhouse. (TA)
Cosmological N-body simulations with suppressed variance
NASA Astrophysics Data System (ADS)
Angulo, Raul E.; Pontzen, Andrew
2016-10-01
We present and test a method that dramatically reduces variance arising from the sparse sampling of wavemodes in cosmological simulations. The method uses two simulations which are fixed (the initial Fourier mode amplitudes are fixed to the ensemble average power spectrum) and paired (with initial modes exactly out of phase). We measure the power spectrum, monopole and quadrupole redshift-space correlation functions, halo mass function and reduced bispectrum at z = 1. By these measures, predictions from a fixed pair can be as precise on non-linear scales as an average over 50 traditional simulations. The fixing procedure introduces a non-Gaussian correction to the initial conditions; we give an analytic argument showing why the simulations are still able to predict the mean properties of the Gaussian ensemble. We anticipate that the method will drive down the computational time requirements for accurate large-scale explorations of galaxy bias and clustering statistics, and facilitate the use of numerical simulations in cosmological data interpretation.
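The "fixed and paired" construction above can be sketched in one dimension: fixing sets every initial Fourier amplitude exactly to sqrt(P(k)) with only the phases random, and pairing shifts every phase by pi so the companion field is the negative of the first. The power spectrum P(k) = k^-2 here is a hypothetical placeholder, and the real method operates on 3-D Gaussian random fields:

```python
import numpy as np

def fixed_paired_fields(n, seed=0):
    """Generate a 1-D 'fixed' Gaussian-like field and its 'paired'
    companion: amplitudes pinned to sqrt(P(k)), phases random, and the
    pair's phases shifted by pi (modes negated)."""
    rng = np.random.default_rng(seed)
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=1.0)
    pk = np.zeros_like(k)
    pk[1:] = k[1:] ** -2.0               # assumed toy power spectrum
    # Fixing: |delta_k| = sqrt(P(k)) exactly; only phases are random.
    phases = rng.uniform(0.0, 2.0 * np.pi, size=k.size)
    modes = np.sqrt(pk) * np.exp(1j * phases)
    # Pairing: shift every phase by pi, i.e. negate every mode.
    f1 = np.fft.irfft(modes, n=n)
    f2 = np.fft.irfft(-modes, n=n)
    return f1, f2

f1, f2 = fixed_paired_fields(256)
print(np.allclose(f1, -f2))  # paired fields are exactly out of phase
```

Averaging any statistic over the pair cancels contributions that are odd in the initial modes, which, combined with the fixed amplitudes, suppresses the leading sample-variance terms without running a large ensemble.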