Paired-Associate Learning Ability Accounts for Unique Variance in Orthographic Learning
ERIC Educational Resources Information Center
Wang, Hua-Chen; Wass, Malin; Castles, Anne
2017-01-01
Paired-associate learning is a dynamic measure of the ability to form new links between two items. This study aimed to investigate whether paired-associate learning ability is associated with success in orthographic learning, and if so, whether it accounts for unique variance beyond phonological decoding ability and orthographic knowledge. A group…
On the Relations among Regular, Equal Unique Variances, and Image Factor Analysis Models.
ERIC Educational Resources Information Center
Hayashi, Kentaro; Bentler, Peter M.
2000-01-01
Investigated the conditions under which the matrix of factor loadings from the factor analysis model with equal unique variances will give a good approximation to the matrix of factor loadings from the regular factor analysis model. Extends the results to the image factor analysis model. Discusses implications for practice. (SLD)
Vitezica, Zulma G.; Varona, Luis; Legarra, Andres
2013-01-01
Genomic evaluation models can fit additive and dominant SNP effects. Under quantitative genetics theory, additive or “breeding” values of individuals are generated by substitution effects, which involve both “biological” additive and dominant effects of the markers. Dominance deviations include only a portion of the biological dominant effects of the markers. Additive variance includes variation due to the additive and dominant effects of the markers. We describe a matrix of dominant genomic relationships across individuals, D, which is similar to the G matrix used in genomic best linear unbiased prediction. This matrix can be used in a mixed-model context for genomic evaluations or to estimate dominant and additive variances in the population. From the “genotypic” value of individuals, an alternative parameterization defines additive and dominance as the parts attributable to the additive and dominant effect of the markers. This approach underestimates the additive genetic variance and overestimates the dominance variance. Transforming the variances from one model into the other is trivial if the distribution of allelic frequencies is known. We illustrate these results with mouse data (four traits, 1884 mice, and 10,946 markers) and simulated data (2100 individuals and 10,000 markers). Variance components were estimated correctly in the model, considering breeding values and dominance deviations. For the model considering genotypic values, the inclusion of dominant effects biased the estimate of additive variance. Genomic models were more accurate for the estimation of variance components than their pedigree-based counterparts. PMID:24121775
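The two relationship matrices discussed in this abstract can be sketched directly from a genotype matrix. Below is a minimal, illustrative construction (not the authors' code; the 0/1/2 coding and the normalizations are standard choices consistent with the breeding-value parameterization they describe):

```python
import numpy as np

def grm_additive_dominance(M):
    """Sketch of the additive (G) and dominance (D) genomic relationship
    matrices in the breeding-value parameterization.

    M : (n_individuals, n_markers) genotypes coded 0/1/2
        (copies of a reference allele).
    """
    p = M.mean(axis=0) / 2.0              # allele frequencies per marker
    q = 1.0 - p
    # Additive design: genotypes centered by twice the allele frequency
    Z = M - 2.0 * p
    G = Z @ Z.T / np.sum(2.0 * p * q)
    # Dominance design: genotype-specific codes -2q^2 / 2pq / -2p^2
    W = np.where(M == 2, -2.0 * q**2,
        np.where(M == 1,  2.0 * p * q, -2.0 * p**2))
    D = W @ W.T / np.sum((2.0 * p * q)**2)
    return G, D
```

Both matrices then enter a mixed model as covariance structures for additive and dominance random effects, which is how additive and dominance variances are separated.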
Estimation of Additive, Dominance, and Imprinting Genetic Variance Using Genomic Data
Lopes, Marcos S.; Bastiaansen, John W. M.; Janss, Luc; Knol, Egbert F.; Bovenhuis, Henk
2015-01-01
Traditionally, exploration of genetic variance in humans, plants, and livestock species has been limited mostly to the use of additive effects estimated using pedigree data. However, with the development of dense panels of single-nucleotide polymorphisms (SNPs), the exploration of genetic variation of complex traits is moving from quantifying the resemblance between family members to the dissection of genetic variation at individual loci. With SNPs, we were able to quantify the contribution of additive, dominance, and imprinting variance to the total genetic variance by using a SNP regression method. The method was validated in simulated data and applied to three traits (number of teats, backfat, and lifetime daily gain) in three purebred pig populations. In simulated data, the estimates of additive, dominance, and imprinting variance were very close to the simulated values. In real data, dominance effects accounted for a substantial proportion of the total genetic variance (up to 44%) for these traits in these populations. The contribution of imprinting to the total phenotypic variance of the evaluated traits was relatively small (1–3%). Our results indicate a strong relationship between additive variance explained per chromosome and chromosome length, which has been described previously for other traits in other species. We also show that a similar linear relationship exists for dominance and imprinting variance. These novel results improve our understanding of the genetic architecture of the evaluated traits and show promise for applying the SNP regression method to other traits and species, including human diseases. PMID:26438289
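A hedged sketch of how additive, dominance, and imprinting regressors can be coded from phased genotypes for a SNP regression of this kind (the coding below is one common convention; the paper's exact parameterization may differ):

```python
import numpy as np

def adi_design(pat, mat):
    """Illustrative regressor coding for additive, dominance, and
    imprinting SNP effects from phased genotypes.

    pat, mat : (n, m) arrays of 0/1 paternal and maternal allele counts.
    """
    add = (pat + mat).astype(float)          # additive: allele count 0/1/2
    dom = ((pat + mat) == 1).astype(float)   # dominance: heterozygote flag
    imp = (pat - mat).astype(float)          # imprinting: +1 paternal-het,
                                             # -1 maternal-het, 0 homozygote
    return add, dom, imp
```

The imprinting regressor is nonzero only for heterozygotes and distinguishes the parental origin of the alleles, which is what lets the model separate imprinting variance from dominance variance.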
Pearcy, Benjamin T D; McEvoy, Peter M; Roberts, Lynne D
2017-02-01
This study extends knowledge about the relationship of Internet Gaming Disorder (IGD) to other established mental disorders by exploring comorbidities with anxiety, depression, Attention Deficit Hyperactivity Disorder (ADHD), and obsessive compulsive disorder (OCD), and assessing whether IGD accounts for unique variance in distress and disability. An online survey was completed by a convenience sample that engages in Internet gaming (N = 404). Participants meeting criteria for IGD based on the Personal Internet Gaming Disorder Evaluation-9 (PIE-9) reported higher comorbidity with depression, OCD, ADHD, and anxiety compared with those who did not meet the IGD criteria. IGD explained a small proportion of unique variance in distress (1%) and disability (3%). IGD accounted for a larger proportion of unique variance in disability than anxiety and ADHD, and a similar proportion to depression. Replications with clinical samples using longitudinal designs and structured diagnostic interviews are required.
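The "unique variance" figures reported here come from hierarchical regression: the gain in R² when a predictor is added to a baseline model. A generic sketch (function and variable names are illustrative, not the study's analysis code):

```python
import numpy as np

def unique_variance(y, X_base, x_new):
    """Squared semi-partial correlation of one predictor: the increase in
    R**2 when x_new is added to a regression already containing X_base."""
    def r2(y, X):
        X1 = np.column_stack([np.ones(len(y)), X])       # add intercept
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    return r2(y, np.column_stack([X_base, x_new])) - r2(y, X_base)
```

A predictor's unique variance is small when its information is already carried by the baseline predictors, which is why IGD's 1-3% increments are reported over and above the comorbid conditions.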
McGuigan, Katrina; Aguirre, J. David; Blows, Mark W.
2015-01-01
How new mutations contribute to genetic variation is a key question in biology. Although the evolutionary fate of an allele is largely determined by its heterozygous effect, most estimates of mutational variance and mutational effects derive from highly inbred lines, where new mutations are present in homozygous form. In an attempt to overcome this limitation, middle-class neighborhood (MCN) experiments have been used to assess the fitness effect of new mutations in heterozygous form. However, because MCN populations harbor substantial standing genetic variance, estimates of mutational variance have not typically been available from such experiments. Here we employ a modification of the animal model to analyze data from 22 generations of Drosophila serrata bred in an MCN design. Mutational heritability, measured for eight cuticular hydrocarbons, 10 wing-shape traits, and wing size in this outbred genetic background, ranged from 0.0006 to 0.006 (with one exception), a similar range to that reported from studies employing inbred lines. Simultaneously partitioning the additive and mutational variance in the same outbred population allowed us to quantitatively test the ability of mutation-selection balance models to explain the observed levels of additive and mutational genetic variance. The Gaussian allelic approximation and house-of-cards models, which assume real stabilizing selection on single traits, both overestimated the genetic variance maintained at equilibrium, but the house-of-cards model was a closer fit to the data. This analytical approach has the potential to be broadly applied, expanding our understanding of the dynamics of genetic variance in natural populations. PMID:26384357
ERIC Educational Resources Information Center
Miller, Geoffrey F.; Penke, Lars
2007-01-01
Most theories of human mental evolution assume that selection favored higher intelligence and larger brains, which should have reduced genetic variance in both. However, adult human intelligence remains highly heritable, and is genetically correlated with brain size. This conflict might be resolved by estimating the coefficient of additive genetic…
NASA Astrophysics Data System (ADS)
Soltani-Mohammadi, Saeed; Safa, Mohammad; Mokhtari, Hadi
2016-10-01
One of the most important stages in complementary exploration is optimally designing the additional drilling pattern, i.e., defining the optimum number and locations of additional boreholes. A great deal of research has been carried out in this regard; in most of the proposed algorithms, kriging variance minimization is defined as the objective function, serving as the criterion for uncertainty assessment, and the problem is solved through optimization methods. Although kriging variance is known to have many advantages in defining the objective function, it is not sensitive to local variability. As a result, the only factors evaluated for locating the additional boreholes are the initial data configuration and the variogram model parameters, and the effects of local variability are omitted. In this paper, with the goal of incorporating local variability into the assessment of boundary uncertainty, the application of combined variance is investigated to define the objective function. To verify the applicability of the proposed objective function, it is used to locate additional boreholes in the Esfordi phosphate mine through metaheuristic optimization methods, namely simulated annealing and particle swarm optimization. Comparison of results from the proposed objective function and conventional methods indicates that the changes imposed on the objective function have made the algorithm output sensitive to variations in grade, domain boundaries, and the thickness of the mineralization domain. The comparison between the results of the different optimization algorithms showed that, for the presented case, particle swarm optimization is more appropriate than simulated annealing.
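The key premise, that kriging variance depends only on the data configuration and the variogram, never on the observed grades, is easy to verify in a toy sketch. The covariance model and parameters below are illustrative, not those fitted for Esfordi:

```python
import numpy as np

def kriging_variance(X, x0, sill=1.0, range_par=100.0):
    """Simple-kriging variance at location x0 given sampled locations X,
    under an exponential covariance model. Note that the observed grades
    never appear: only the geometry and the variogram matter."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return sill * np.exp(-3.0 * d / range_par)
    C = cov(X, X)
    c0 = cov(X, x0[None, :])[:, 0]
    return sill - c0 @ np.linalg.solve(C, c0)

def next_borehole(X, candidates):
    """Greedily place one additional borehole where the kriging variance
    (prediction uncertainty) is currently largest."""
    v = np.array([kriging_variance(X, c) for c in candidates])
    return candidates[np.argmax(v)], v.max()
```

A combined-variance objective, as investigated in the paper, would additionally weight this uncertainty by a measure of local grade variability, which is exactly the term the plain kriging variance lacks.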
Forsberg, Simon K. G.; Andreatta, Matthew E.; Huang, Xin-Yuan; Danku, John; Salt, David E.; Carlborg, Örjan
2015-01-01
Genome-wide association (GWA) analyses have generally been used to detect individual loci contributing to the phenotypic diversity in a population by the effects of these loci on the trait mean. More rarely, loci have also been detected based on variance differences between genotypes. Several hypotheses have been proposed to explain the possible genetic mechanisms leading to such variance signals. However, little is known about what causes these signals, or whether this genetic variance-heterogeneity reflects mechanisms of importance in natural populations. Previously, we identified a variance-heterogeneity GWA (vGWA) signal for leaf molybdenum concentrations in Arabidopsis thaliana. Here, fine-mapping of this association reveals that the vGWA emerges from the effects of three independent genetic polymorphisms, all of which are in strong LD with the markers displaying the genetic variance-heterogeneity. By revealing the genetic architecture underlying this vGWA signal, we uncovered the molecular source of a significant amount of hidden additive genetic variation or “missing heritability”. Two of the three polymorphisms underlying the genetic variance-heterogeneity are promoter variants for Molybdate transporter 1 (MOT1), and the third a variant located ~25 kb downstream of this gene. A fourth independent association was also detected ~600 kb upstream of MOT1. Use of a T-DNA knockout allele highlights Copper Transporter 6 (COPT6; AT2G26975) as a strong candidate gene for this association. Our results show that an extended LD across a complex locus including multiple functional alleles can lead to a variance-heterogeneity between genotypes in natural populations. Further, they provide novel insights into the genetic regulation of ion homeostasis in A. thaliana, and empirically confirm that variance-heterogeneity based GWA methods are a valuable tool to detect novel associations of biological importance in natural populations. PMID:26599497
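A vGWA scan tests, at each marker, whether phenotypic variance differs between genotype classes. One standard statistic for this is the Brown-Forsythe (median-centered Levene) statistic; the sketch below is a generic implementation, not the authors' pipeline:

```python
import numpy as np

def brown_forsythe(groups):
    """Brown-Forsythe F statistic for variance heterogeneity across
    groups (e.g., phenotype values split by genotype class at one SNP).
    Works on absolute deviations from each group's median."""
    z = [np.abs(np.asarray(g) - np.median(g)) for g in groups]
    n = np.array([len(zi) for zi in z])
    N, k = n.sum(), len(z)
    zbar = np.array([zi.mean() for zi in z])       # per-group means of |dev|
    zall = np.concatenate(z).mean()                # grand mean of |dev|
    num = (N - k) * np.sum(n * (zbar - zall) ** 2)
    den = (k - 1) * sum(((zi - zb) ** 2).sum() for zi, zb in zip(z, zbar))
    return num / den
```

Under the null of equal variances the statistic follows an F(k-1, N-k) distribution, so a genome-wide scan reduces to computing it per marker and correcting for multiple testing.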
Gasparini, Clelia; Devigili, Alessandro; Dosselli, Ryan; Pilastro, Andrea
2013-01-01
In polyandrous species, a male's reproductive success depends on his fertilization capability, and traits enhancing competitive fertilization success will be under strong, directional selection. This leads to the prediction that these traits should show stronger condition dependence and larger genetic variance than other traits subject to weaker or stabilizing selection. While empirical evidence of condition dependence in postcopulatory traits is increasing, the comparison between sexually selected and ‘control’ traits is often based on untested assumptions concerning the different strengths of selection acting on these traits. Furthermore, information on selection in the past is essential, as both condition dependence and genetic variance of a trait are likely to be influenced by the pattern of selection acting historically on it. Using the guppy (Poecilia reticulata), a livebearing fish with high levels of multiple paternity, we performed three independent experiments on three ejaculate quality traits, sperm number, velocity, and size, which have been previously shown to be subject to strong, intermediate, and weak directional postcopulatory selection, respectively. First, we conducted an inbreeding experiment to determine the pattern of selection in the past. Second, we used a diet restriction experiment to estimate their level of condition dependence. Third, we used a half-sib/full-sib mating design to estimate the coefficients of additive genetic variance (CVA) underlying these traits. Additionally, using a simulated predator evasion test, we showed that both inbreeding and diet restriction significantly reduced condition. According to predictions, sperm number showed higher inbreeding depression, stronger condition dependence, and a larger CVA than sperm velocity and sperm size. The lack of a significant genetic correlation between sperm number and velocity suggests that the former may respond to selection independently from the other ejaculate quality traits.
Huchard, E; Charmantier, A; English, S; Bateman, A; Nielsen, J F; Clutton-Brock, T
2014-09-01
Individual variation in growth is high in cooperative breeders and may reflect plastic divergence in developmental trajectories leading to breeding vs. helping phenotypes. However, the relative importance of additive genetic variance and developmental plasticity in shaping growth trajectories is largely unknown in cooperative vertebrates. This study exploits weekly sequences of body mass from birth to adulthood to investigate sources of variance in, and covariance between, early and later growth in wild meerkats (Suricata suricatta), a cooperative mongoose. Our results indicate that (i) the correlation between early growth (prior to nutritional independence) and adult mass is positive but weak, and there are frequent changes (compensatory growth) in post-independence growth trajectories; (ii) among parameters describing growth trajectories, those describing growth rate (prior to and at nutritional independence) show undetectable heritability, while associated size parameters (mass at nutritional independence and asymptotic mass) are moderately heritable (0.09 ≤ h² < 0.3); and (iii) additive genetic effects, rather than early environmental effects, mediate the covariance between early growth and adult mass. These results reveal that meerkat growth trajectories remain plastic throughout development, rather than showing early and irreversible divergence, and that the weak effects of early growth on adult mass, an important determinant of breeding success, are partly genetic. In contrast to most cooperative invertebrates, the acquisition of breeding status in many cooperative vertebrates is often determined after sexual maturity and strongly impacted by chance; such species may therefore retain the ability to adjust their morphology to environmental changes and social opportunities arising throughout their development, rather than specializing early.
McFarlane, S Eryn; Gorrell, Jamieson C; Coltman, David W; Humphries, Murray M; Boutin, Stan; McAdam, Andrew G
2014-01-01
A trait must genetically correlate with fitness in order to evolve in response to natural selection, but theory suggests that strong directional selection should erode additive genetic variance in fitness and limit future evolutionary potential. Balancing selection has been proposed as a mechanism that could maintain genetic variance if fitness components trade off with one another and has been invoked to account for empirical observations of higher levels of additive genetic variance in fitness components than would be expected from mutation–selection balance. Here, we used a long-term study of an individually marked population of North American red squirrels (Tamiasciurus hudsonicus) to look for evidence of (1) additive genetic variance in lifetime reproductive success and (2) fitness trade-offs between fitness components, such as male and female fitness or fitness in high- and low-resource environments. “Animal model” analyses of a multigenerational pedigree revealed modest maternal effects on fitness, but very low levels of additive genetic variance in lifetime reproductive success overall as well as fitness measures within each sex and environment. It therefore appears that there are very low levels of direct genetic variance in fitness and fitness components in red squirrels to facilitate contemporary adaptation in this population. PMID:24963372
Kumar, Satish; Molloy, Claire; Muñoz, Patricio; Daetwyler, Hans; Chagné, David; Volz, Richard
2015-01-01
Nonadditive genetic effects may make an important contribution to the total genetic variation of phenotypes, so estimates of both additive and nonadditive effects are desirable for breeding and selection purposes. Our main objectives were to: estimate additive, dominance and epistatic variances of apple (Malus × domestica Borkh.) phenotypes using relationship matrices constructed from genome-wide dense single nucleotide polymorphism (SNP) markers; and compare the accuracy of genomic predictions using genomic best linear unbiased prediction models with or without including nonadditive genetic effects. A set of 247 clonally replicated individuals was assessed for six fruit quality traits at two sites, and also genotyped using an Illumina 8K SNP array. Across several fruit quality traits, the additive, dominance, and epistatic effects contributed about 30%, 16%, and 19%, respectively, to the total phenotypic variance. Models ignoring nonadditive components yielded upwardly biased estimates of additive variance (heritability) for all traits in this study. The accuracy of genomic estimated genetic values (GEGV) varied from about 0.15 to 0.35 for various traits, and these were almost identical for models with or without including nonadditive effects. However, models including nonadditive genetic effects further reduced the bias of GEGV. Between-site genotypic correlations were high (>0.85) for all traits, and genotype-site interaction accounted for <10% of the phenotypic variability. The accuracy of prediction, when the validation set was present only at one site, was generally similar for both sites, and varied from about 0.50 to 0.85. The prediction accuracies were strongly influenced by trait heritability, and genetic relatedness between the training and validation families. PMID:26497141
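One common way to extend the additive/dominance relationship matrices of the previous entries to additive-by-additive epistasis is via the Hadamard (element-wise) product of the additive matrix with itself. This is a hedged sketch of that construction; the paper may use a variant normalization:

```python
import numpy as np

def epistatic_grm(G):
    """Additive-by-additive epistatic relationship matrix as the Hadamard
    product G o G, rescaled so its mean diagonal matches that of G
    (one of several normalization conventions in use)."""
    E = G * G                                        # element-wise product
    return E * (np.mean(np.diag(G)) / np.mean(np.diag(E)))
```

In a multi-kernel GBLUP, G, the dominance matrix, and this epistatic matrix each get their own random effect, and REML then partitions the total genetic variance among them.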
Careau, Vincent; Wolak, Matthew E; Carter, Patrick A; Garland, Theodore
2015-11-22
Given the pace at which human-induced environmental changes occur, a pressing challenge is to determine the speed with which selection can drive evolutionary change. A key determinant of adaptive response to multivariate phenotypic selection is the additive genetic variance-covariance matrix (G). Yet knowledge of G in a population experiencing new or altered selection is not sufficient to predict selection response because G itself evolves in ways that are poorly understood. We experimentally evaluated changes in G when closely related behavioural traits experience continuous directional selection. We applied the genetic covariance tensor approach to a large dataset (n = 17,328 individuals) from a replicated, 31-generation artificial selection experiment that bred mice for voluntary wheel running on days 5 and 6 of a 6-day test. Selection on this subset of G induced proportional changes across the matrix for all 6 days of running behaviour within the first four generations. The changes in G induced by selection resulted in a fourfold slower-than-predicted rate of response to selection. Thus, selection exacerbated constraints within G and limited future adaptive response, a phenomenon that could have profound consequences for populations facing rapid environmental change.
Yang, Kai; Huang, Shih-Ying; Packard, Nathan J.; Boone, John M.
2010-01-01
Purpose: A simplified linear model approach was proposed to accurately model the response of a flat panel detector used for breast CT (bCT). Methods: Individual detector pixel mean and variance were measured from bCT projection images acquired both in air and with a polyethylene cylinder, with the detector operating in both fixed low gain and dynamic gain mode. Once the coefficients of the linear model are determined, the fractional additive noise can be used as a quantitative metric to evaluate the system’s efficiency in utilizing x-ray photons, including the performance of different gain modes of the detector. Results: Fractional additive noise increases as the object thickness increases or as the radiation dose to the detector decreases. For bCT scan techniques on the UC Davis prototype scanner (80 kVp, 500 views total, 30 frames∕s), in the low gain mode, additive noise contributes 21% of the total pixel noise variance for a 10 cm object and 44% for a 17 cm object. With the dynamic gain mode, additive noise only represents approximately 2.6% of the total pixel noise variance for a 10 cm object and 7.3% for a 17 cm object. Conclusions: The existence of the signal-independent additive noise is the primary cause for a quadratic relationship between bCT noise variance and the inverse of radiation dose at the detector. With the knowledge of the additive noise contribution to experimentally acquired images, system modifications can be made to reduce the impact of additive noise and improve the quantum noise efficiency of the bCT system. PMID:20831059
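The simplified linear model can be summarized in a few lines: total pixel variance is a signal-proportional quantum term plus a signal-independent additive term, and first-order propagation to the log-transformed projection yields the quadratic dependence on inverse dose noted in the conclusions. Parameter names below are illustrative, not the paper's notation:

```python
def pixel_variance(signal, k, sigma_add_sq):
    """Total pixel variance: quantum (signal-proportional) term plus
    signal-independent additive term."""
    return k * signal + sigma_add_sq

def fractional_additive_noise(signal, k, sigma_add_sq):
    """Fraction of total pixel variance contributed by additive noise;
    rises as detector signal (dose) falls or object thickness grows."""
    return sigma_add_sq / pixel_variance(signal, k, sigma_add_sq)

def log_projection_variance(signal, k, sigma_add_sq):
    """First-order (delta-method) variance of the log-transformed
    projection: k/S + sigma_add_sq/S**2, i.e. quadratic in 1/S."""
    return k / signal + sigma_add_sq / signal ** 2
```

Fitting these coefficients to measured pixel means and variances is how the additive-noise fractions (e.g., 21% vs. 2.6% for a 10 cm object in low vs. dynamic gain mode) are obtained.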
Kirkpatrick, Robert M; McGue, Matt; Iacono, William G
2015-03-01
The present study of general cognitive ability attempts to replicate and extend previous investigations of a biometric moderator, family-of-origin socioeconomic status (SES), in a sample of 2,494 pairs of adolescent twins, non-twin biological siblings, and adoptive siblings assessed with individually administered IQ tests. We hypothesized that SES would covary positively with additive-genetic variance and negatively with shared-environmental variance. Important potential confounds unaddressed in some past studies, such as twin-specific effects, assortative mating, and differential heritability by trait level, were found to be negligible. In our main analysis, we compared models by their sample-size corrected AIC, and base our statistical inference on model-averaged point estimates and standard errors. Additive-genetic variance increased with SES, an effect that was statistically significant and robust to model specification. We found no evidence that SES moderated shared-environmental influence. We attempt to explain the inconsistent replication record of these effects, and provide suggestions for future research. PMID:25539975
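The biometric moderation model tested here can be sketched with Purcell-style linear moderation of each path coefficient (an assumed form for illustration; the study's full model additionally handles twin-specific effects, assortative mating, and model averaging):

```python
def moderated_variance_components(m, a, a1, c, c1, e, e1=0.0):
    """ACE variance components with linear moderation of each path by a
    moderator m (here, family SES): e.g. additive-genetic variance at
    moderator level m is (a + a1*m)**2."""
    va = (a + a1 * m) ** 2   # additive-genetic variance
    vc = (c + c1 * m) ** 2   # shared-environmental variance
    ve = (e + e1 * m) ** 2   # nonshared-environmental variance
    return va, vc, ve
```

With a1 > 0 and c1 = 0, this reproduces the qualitative pattern reported: additive-genetic variance increases with SES while the shared-environmental component is unmoderated.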
Caine, A; Knapton, D M; Mueller, R F; Congdon, P J; Haigh, D
1989-01-01
A female with multiple dysmorphic features was found to have an unbalanced karyotype with duplication of the distal long arm of chromosome 17 and deletion of the terminal region of the short arm of chromosome 12. This was derived from a reciprocal translocation in the mother, 46,XX,t(12;17)(p13.3;q23). Clinical findings are presented and comparison with other reported cases of distal 17q duplication shows several unique features in our case. PMID:2810342
Reid, Jane M; Arcese, Peter; Keller, Lukas F; Losdat, Sylvain
2014-08-01
Ongoing evolution of polyandry, and consequent extra-pair reproduction in socially monogamous systems, is hypothesized to be facilitated by indirect selection stemming from cross-sex genetic covariances with components of male fitness. Specifically, polyandry is hypothesized to create positive genetic covariance with male paternity success due to inevitable assortative reproduction, driving ongoing coevolution. However, it remains unclear whether such covariances could or do emerge within complex polyandrous systems. First, we illustrate that genetic covariances between female extra-pair reproduction and male within-pair paternity success might be constrained in socially monogamous systems where female and male additive genetic effects can have opposing impacts on the paternity of jointly reared offspring. Second, we demonstrate nonzero additive genetic variance in female liability for extra-pair reproduction and male liability for within-pair paternity success, modeled as direct and associative genetic effects on offspring paternity, respectively, in free-living song sparrows (Melospiza melodia). The posterior mean additive genetic covariance between these liabilities was slightly positive, but the credible interval was wide and overlapped zero. Therefore, although substantial total additive genetic variance exists, the hypothesis that ongoing evolution of female extra-pair reproduction is facilitated by genetic covariance with male within-pair paternity success cannot yet be definitively supported or rejected either conceptually or empirically.
Collet, J M; Blows, M W
2014-11-01
After choosing a first mate, polyandrous females have access to a range of opportunities to bias paternity, such as repeating matings with the preferred male, facilitating fertilization from the best sperm or differentially investing in offspring according to their sire. Female ability to bias paternity after a first mating has been demonstrated in a few species, but unambiguous evidence remains limited by access to complex behaviours, sperm storage organs and fertilization processes within females. Even when found at the phenotypic level, the potential evolution of any mechanism allowing females to bias paternity other than mate choice remains little explored. Using a large population of pedigreed females, we developed a simple test to determine whether there is additive genetic variation in female ability to bias paternity after a first, chosen, mating. We applied this method in the highly polyandrous Drosophila serrata, giving females the opportunity to successively mate with two males ad libitum. We found that despite high levels of polyandry (females mated more than once per day), the first mate choice was a significant predictor of male total reproductive success. Importantly, there was no detectable genetic variance in female ability to bias paternity beyond mate choice. Therefore, whether or not females can bias paternity before or after copulation, their role in the evolution of male sexual traits is likely to be limited to their first mate choice in D. serrata.
Reiser, Oliver
2016-09-20
) the tendency for ligand exchange in Cu(I)L2 assemblies allows the efficient synthesis of heteroleptic Cu(I)LL' complexes to tune the steric and electronic properties and also might coordinate and thus activate substrates in the course of a reaction in addition to electron transfer. Moreover, new photoredox cycles have also been discovered beyond the visible-light-induced Cu(I)* → Cu(II) electron transfer that is arguably best known: examples of the Cu(II)* → Cu(I) and Cu(I)* → Cu(0) transitions have been realized, greatly broadening the potential for copper-based photoredox-catalyzed transformations. Finally, a number of organic transformations that are unique to Cu(I) photoredox catalysts have been discovered.
Recio, Sergio A; Iliescu, Adela F; Bergés, Germán D; Gil, Marta; de Brugada, Isabel
2016-04-01
It has been suggested that human perceptual learning could be explained in terms of a better memory encoding of the unique features during intermixed exposure. However, it is possible that a location bias could play a relevant role in explaining previous results of perceptual learning studies using complex visual stimuli. If this were the case, the only relevant feature would be the location, rather than the content, of the unique features. To further explore this possibility, we attempted to replicate the results of Lavis, Kadib, Mitchell, and Hall (2011, Experiment 2), which showed that additional exposure to the unique elements resulted in better discrimination than simple intermixed exposure. We manipulated the location of the unique elements during the additional exposure. In one experiment, they were located in the same position as when they were presented together with the common element. In another experiment, the unique elements were located in the center of the screen, regardless of where they were located together with the common element. Our results showed that additional exposure only improved discrimination when the unique elements were presented in the same position as when they were presented together with the common element. The results reported here do not provide support for the explanation of the effects of additional exposure of the unique elements in terms of a better memory encoding and instead suggest an explanation in terms of location bias.
NASA Technical Reports Server (NTRS)
Smalheer, C. V.
1973-01-01
The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.
Rabalakos, Constantinos; Wulff, William D
2008-10-15
A new catalyst is designed, synthesized, and evaluated for the asymmetric Michael addition of nitroalkanes to nitroalkenes. The obdurate nature of this reaction has made it a formidable challenge to subdue by asymmetric catalysis. The catalyst design includes a thiourea function to activate the nitroalkene by a double H-bond and a 4-dimethylaminopyridine unit to deprotonate the nitroalkane and to bind the resulting nitronate anion, also by a double H-bond. The chiral scaffold for the catalyst is 2,2'-diamino-1,1'-binaphthalene (BINAM), and a bis-conjugate is prepared by the attachment of the thiourea unit and the dimethylaminopyridine (DMAP) moiety via the two amino groups. The resulting catalyst effects the addition of nitroalkanes to a variety of nitrostyrenes and gives excellent asymmetric inductions (91-95% ee) over a range of 10 substrates. Remarkably, the asymmetric induction increases with decreasing catalyst loading, with the optimal compromise between rate and induction at a loading of 2 mol %.
Bobbert, F S L; Lietaert, K; Eftekhari, A A; Pouran, B; Ahmadi, S M; Weinans, H; Zadpoor, A A
2017-02-16
Porous biomaterials that simultaneously mimic the topological, mechanical, and mass transport properties of bone are in great demand but are rarely found in the literature. In this study, we rationally designed and additively manufactured (AM) porous metallic biomaterials based on four different types of triply periodic minimal surfaces (TPMS) that mimic the properties of bone to an unprecedented level of multi-physics detail. Sixteen different types of porous biomaterials were rationally designed and fabricated using selective laser melting (SLM) from a titanium alloy (Ti-6Al-4V). The topology, quasi-static mechanical properties, fatigue resistance, and permeability of the developed biomaterials were then characterized. In terms of topology, the biomaterials resembled the morphological properties of trabecular bone including mean surface curvatures close to zero. The biomaterials showed a favorable but rare combination of relatively low elastic properties in the range of those observed for trabecular bone and high yield strengths exceeding those reported for cortical bone. This combination allows for simultaneously avoiding stress shielding, while providing ample mechanical support for bone tissue regeneration and osseointegration. Furthermore, as opposed to other AM porous biomaterials developed to date for which the fatigue endurance limit has been found to be ≈20% of their yield (or plateau) stress, some of the biomaterials developed in the current study show extremely high fatigue resistance with endurance limits up to 60% of their yield stress. It was also found that the permeability values measured for the developed biomaterials were in the range of values reported for trabecular bone. In summary, the developed porous metallic biomaterials based on TPMS mimic the topological, mechanical, and physical properties of trabecular bone to a great degree. These properties make them potential candidates to be applied as parts of orthopedic implants and/or as bone
Cosmology without cosmic variance
Bernstein, Gary M.; Cai, Yan -Chuan
2011-10-01
The growth of structures in the Universe is described by a function G that is predicted by the combination of the expansion history of the Universe and the laws of gravity within it. We examine the improvements in constraints on G that are available from the combination of a large-scale galaxy redshift survey with a weak gravitational lensing survey of background sources. We describe a new combination of such observations that in principle yields a measure of the growth rate that is free of sample variance, i.e. the uncertainty in G can be reduced without bound by increasing the number of redshifts obtained within a finite survey volume. The addition of background weak lensing data to a redshift survey increases information on G by an amount equivalent to a 10-fold increase in the volume of a standard redshift-space distortion measurement, provided the lensing signal can be measured to sub-percent accuracy. This argues that a combined lensing and redshift survey over a common low-redshift volume of the Universe is a more powerful test of general relativity than an isolated redshift survey over a larger volume at high redshift, especially as surveys begin to cover most of the available sky.
Nonlinear Epigenetic Variance: Review and Simulations
ERIC Educational Resources Information Center
Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.
2010-01-01
We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…
NASA Astrophysics Data System (ADS)
Chabuda, Krzysztof; Leroux, Ian D.; Demkowicz-Dobrzański, Rafał
2016-08-01
The instability of an atomic clock is characterized by the Allan variance, a measure widely used to describe the noise of frequency standards. We provide an explicit method to find the ultimate bound on the Allan variance of an atomic clock in the most general scenario where N atoms are prepared in an arbitrarily entangled state and arbitrary measurement and feedback are allowed, including those exploiting coherences between succeeding interrogation steps. While the method is rigorous and general, it becomes numerically challenging for large N and long averaging times.
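For reference, the two-sample (Allan) variance of fractional frequency data that this record refers to can be sketched in a few lines; the non-overlapping estimator and the white-noise check below are illustrative, not from the paper:

```python
import numpy as np

def allan_variance(y, m=1):
    """Non-overlapping two-sample (Allan) variance of fractional
    frequency data y at averaging factor m."""
    y = np.asarray(y, dtype=float)
    n = len(y) // m
    yb = y[:n * m].reshape(n, m).mean(axis=1)  # averages over blocks of length m
    return 0.5 * np.mean(np.diff(yb) ** 2)     # AVAR = (1/2) <(y_{k+1} - y_k)^2>

rng = np.random.default_rng(0)
y = rng.standard_normal(100_000)   # white frequency noise with unit variance
avar1 = allan_variance(y, m=1)     # close to 1 for white FM
avar10 = allan_variance(y, m=10)   # close to 1/10: AVAR falls as 1/tau for white FM
```

For white frequency noise the Allan variance equals the frequency variance divided by the averaging factor, which the two estimates above reproduce.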
Conversations across Meaning Variance
ERIC Educational Resources Information Center
Cordero, Alberto
2013-01-01
Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…
Spectral Ambiguity of Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1996-01-01
We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
Jager, Justin; Bornstein, Marc H.; Putnick, Diane L.; Hendricks, Charlene
2012-01-01
Using the Family Assessment Device (FAD; Epstein, Baldwin, & Bishop, 1983) and incorporating the perspectives of adolescent, mother, and father, this study examined each family member's “unique perspective” or non-shared, idiosyncratic view of the family. To do so we used a modified multitrait-multimethod confirmatory factor analysis that (1) isolated for each family member's six reports of family dysfunction the non-shared variance (a combination of variance idiosyncratic to the individual and measurement error) from variance shared by one or more family members and (2) extracted common variance across each family member's set of non-shared variances. The sample included 128 families from a U.S. East Coast metropolitan area. Each family member's unique perspective generalized across his or her different reports of family dysfunction and accounted for a sizable proportion of his or her own variance in reports of family dysfunction. Additionally, after holding level of dysfunction constant across families and controlling for a family's shared variance (agreement regarding family dysfunction), each family member's unique perspective was associated with his or her own adjustment. Future applications and competing alternatives for what these “unique perspectives” reflect about the family are discussed. PMID:22545933
Nominal analysis of "variance".
Weiss, David J
2009-08-01
Nominal responses are the natural way for people to report actions or opinions. Because nominal responses do not generate numerical data, they have been underutilized in behavioral research. On those occasions in which nominal responses are elicited, the responses are customarily aggregated over people or trials so that large-sample statistics can be employed. A new analysis is proposed that directly associates differences among responses with particular sources in factorial designs. A pair of nominal responses either matches or does not; when responses do not match, they vary. That analogue to variance is incorporated in the nominal analysis of "variance" (NANOVA) procedure, wherein the proportions of matches associated with sources play the same role as do sums of squares in an ANOVA. The NANOVA table is structured like an ANOVA table. The significance levels of the N ratios formed by comparing proportions are determined by resampling. Fictitious behavioral examples featuring independent groups and repeated measures designs are presented. A Windows program for the analysis is available.
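The match/mismatch analogue of variance can be made concrete with a toy sketch; this is a simplified illustration of the idea, not Weiss's published NANOVA procedure, and the condition labels and data are invented:

```python
from itertools import combinations

def match_proportion(responses):
    """Proportion of matching pairs among a list of nominal responses."""
    pairs = list(combinations(responses, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

# hypothetical one-factor design: nominal responses grouped by condition
groups = {
    "control": ["yes", "yes", "yes", "no"],
    "treated": ["no", "no", "no", "yes"],
}
within = [match_proportion(g) for g in groups.values()]             # [0.5, 0.5]
pooled = match_proportion([r for g in groups.values() for r in g])  # 12/28
# more matching within conditions than overall suggests a condition effect
```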
Systems Engineering Programmatic Estimation Using Technology Variance
NASA Technical Reports Server (NTRS)
Mog, Robert A.
2000-01-01
Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "return" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.
Estimating Modifying Effect of Age on Genetic and Environmental Variance Components in Twin Models
He, Liang; Sillanpää, Mikko J.; Silventoinen, Karri; Kaprio, Jaakko; Pitkäniemi, Janne
2016-01-01
Twin studies have been adopted for decades to disentangle the relative genetic and environmental contributions for a wide range of traits. However, heritability estimation based on the classical twin models does not take into account dynamic behavior of the variance components over age. Varying variance of the genetic component over age can imply the existence of gene–environment (G × E) interactions that general genome-wide association studies (GWAS) fail to capture, which may lead to the inconsistency of heritability estimates between twin design and GWAS. Existing parametric G × E interaction models for twin studies are limited by assuming a linear or quadratic form of the variance curves with respect to a moderator, which can, however, be overly restrictive in reality. Here we propose spline-based approaches to explore the variance curves of the genetic and environmental components. We choose the additive genetic, common, and unique environmental variance components (ACE) model as the starting point. We treat the component variances as variance functions with respect to age modeled by B-splines or P-splines. We develop an empirical Bayes method to estimate the variance curves together with their confidence bands and provide an R package for public use. Our simulations demonstrate that the proposed methods accurately capture dynamic behavior of the component variances in terms of mean square errors with a data set of >10,000 twin pairs. Using the proposed methods as an alternative and major extension to the classical twin models, our analyses with a large-scale Finnish twin data set (19,510 MZ twins and 27,312 DZ same-sex twins) discover that the variances of the A, C, and E components for body mass index (BMI) change substantially across the life span in different patterns and the heritability of BMI drops to ∼50% after middle age. The results further indicate that the decline of heritability is due to increasing unique environmental variance, which provides
Variance analysis by use of a low cost desk top calculator.
González Revaldería, J; Villafruela, J J; Sabater, J; Lamas, S; Ortuño, J
1986-01-01
A simple program for an HP-97 desk top calculator, which can be adapted to an HP-67, is presented. This program detects the presence of an added component of variance in any series classified by a single criterion. Each series can be formed by any number of data points. The program supplies additional information about this component. A brief theoretical description and a practical example are also included.
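For the balanced case, an added variance component of this kind can be estimated from one-way expected mean squares; a minimal sketch, assuming equal-length series (unlike the HP-97 program, which allows unequal sizes), with invented simulated data:

```python
import numpy as np

def added_variance_component(data):
    """Estimate the added (between-series) variance component from a
    balanced one-way layout via expected mean squares."""
    data = np.asarray(data, dtype=float)
    k, n = data.shape
    grand = data.mean()
    msb = n * ((data.mean(axis=1) - grand) ** 2).sum() / (k - 1)
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))
    return max(0.0, (msb - msw) / n)  # E[msb] = n*sigma_added^2 + sigma^2

rng = np.random.default_rng(1)
k, n = 1_000, 5
between = rng.normal(0.0, 2.0, size=(k, 1))         # added component, variance 4
data = between + rng.normal(0.0, 1.0, size=(k, n))  # within-series noise, variance 1
est = added_variance_component(data)                # close to 4
```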
Yamazaki, Yasuomi; Ishitani, Osamu
2017-04-05
The addition of a tertiary phosphine and O2 to reaction solutions strongly affected the reactivity and selectivity of coupling reactions between transition metal complexes. The Mizoroki-Heck reaction between metal complexes with bromo and those with vinyl groups in the diimine ligand did not proceed using Pd(OAc)2 in the presence of 2-dicyclohexylphosphino-2',6'-dimethoxybiphenyl (Sphos) under Ar but proceeded selectively after injection of air into the reaction vessel. In the absence of the phosphine ligand, on the other hand, not only the Mizoroki-Heck reaction but also a homo-coupling reaction between the metal complexes with the bromo groups proceeded at the same time. Mechanistic investigation showed that nanoparticles of Pd species were produced in the absence of the phosphine ligand and worked as catalysts for both the Mizoroki-Heck and homo-coupling reactions. On the other hand, larger Pd particles, which were produced in the presence of Sphos but after addition of air for oxidising Sphos, selectively catalysed the Mizoroki-Heck reaction. 'Molecular' Pd species that were stabilised in the presence of non-oxidised Sphos could not catalyse both coupling reactions under the reaction conditions. Based on these results, reaction conditions were established for the selective progress of the Mizoroki-Heck and the homo-coupling reactions.
Understanding gender variance in children and adolescents.
Simons, Lisa K; Leibowitz, Scott F; Hidalgo, Marco A
2014-06-01
Gender variance is an umbrella term used to describe gender identity, expression, or behavior that falls outside of culturally defined norms associated with a specific gender. In recent years, growing media coverage has heightened public awareness about gender variance in childhood and adolescence, and an increasing number of referrals to clinics specializing in care for gender-variant youth have been reported in the United States. Gender-variant expression, behavior, and identity may present in childhood and adolescence in a number of ways, and youth with gender variance have unique health needs. For those experiencing gender dysphoria, or distress encountered by the discordance between biological sex and gender identity, puberty is often an exceptionally challenging time. Pediatric primary care providers may be families' first resource for education and support, and they play a critical role in supporting the health of youth with gender variance by screening for psychosocial problems and health risks, referring for gender-specific mental health and medical care, and providing ongoing advocacy and support.
Sampling Errors of Variance Components.
ERIC Educational Resources Information Center
Sanders, Piet F.
A study on sampling errors of variance components was conducted within the framework of generalizability theory by P. L. Smith (1978). The study used an intuitive approach for solving the problem of how to allocate the number of conditions to different facets in order to produce the most stable estimate of the universe score variance. Optimization…
Variance estimation for nucleotide substitution models.
Chen, Weishan; Wang, Hsiuying
2015-09-01
The current variance estimators for most evolutionary models were derived when a nucleotide substitution number estimator was approximated with a simple first order Taylor expansion. In this study, we derive three variance estimators for the F81, F84, HKY85 and TN93 nucleotide substitution models, respectively. They are obtained using the second order Taylor expansion of the substitution number estimator, the first order Taylor expansion of a squared deviation and the second order Taylor expansion of a squared deviation, respectively. These variance estimators are compared with the existing variance estimator in terms of a simulation study. It shows that the variance estimator, which is derived using the second order Taylor expansion of a squared deviation, is more accurate than the other three estimators. In addition, we also compare these estimators with an estimator derived by the bootstrap method. The simulation shows that the performance of this bootstrap estimator is similar to the estimator derived by the second order Taylor expansion of a squared deviation. Since the latter one has an explicit form, it is more efficient than the bootstrap estimator.
A proxy for variance in dense matching over homogeneous terrain
NASA Astrophysics Data System (ADS)
Altena, Bas; Cockx, Liesbet; Goedemé, Toon
2014-05-01
Automation in photogrammetry and avionics has brought highly autonomous UAV mapping solutions to the market. These systems have great potential for geophysical research, due to their mobility and simplicity of work. Flight planning can be done on site and orientation parameters are estimated automatically. However, one major drawback is still present: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though more robustness is added in the estimation through multi-view geometry, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, its epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated with respect to the intensity signal of the topography (SNR) and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two different case studies were elaborated. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, which resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimations of the topography and was compared with the data from the other UAV flight. In the second study a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. Although the beach had a very low
Integrating Variances into an Analytical Database
NASA Technical Reports Server (NTRS)
Sanchez, Carlos
2010-01-01
For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.
Some characterizations of unique extremality
NASA Astrophysics Data System (ADS)
Yao, Guowu
2008-07-01
In this paper, it is shown that some necessary characteristic conditions for unique extremality obtained by Zhu and Chen are also sufficient, and that some of their sufficient conditions actually imply that the uniquely extremal Beltrami differentials have constant modulus. In addition, some local properties of uniquely extremal Beltrami differentials are given.
VPSim: Variance propagation by simulation
Burr, T.; Coulter, C.A.; Prommel, J.
1997-12-01
One of the fundamental concepts in a materials control and accountability system for nuclear safeguards is the materials balance (MB). All transfers into and out of a material balance area are measured, as are the beginning and ending inventories. The resulting MB measures the material loss, MB = T_in + I_B − T_out − I_E. To interpret the MB, the authors must estimate its measurement error standard deviation, σ_MB. When feasible, they use a method usually known as propagation of variance (POV) to estimate σ_MB. The application of POV for estimating the measurement error variance of an MB is straightforward but tedious. By applying POV to individual measurement error standard deviations they can estimate σ_MB (or more generally, they can estimate the variance-covariance matrix, Σ, of a sequence of MBs). This report describes a new computer program (VPSim) that uses simulation to estimate the Σ matrix of a sequence of MBs. Given the proper input data, VPSim calculates the MB and σ_MB, or calculates a sequence of n MBs and the associated n-by-n covariance matrix, Σ. The covariance matrix Σ contains the variance of each MB in the diagonal entries and the covariance between pairs of MBs in the off-diagonal entries.
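The simulation approach can be sketched in a few lines; the figures and the independent-error model below are invented for illustration and are not VPSim's actual input format:

```python
import numpy as np

# Simulation-based estimate of the materials-balance measurement error,
# compared with analytic propagation of variance (POV).
rng = np.random.default_rng(42)
true = {"T_in": 100.0, "I_B": 50.0, "T_out": 95.0, "I_E": 55.0}  # true MB = 0
sigma = {"T_in": 0.5, "I_B": 0.3, "T_out": 0.5, "I_E": 0.3}      # measurement SDs

draws = {k: true[k] + sigma[k] * rng.standard_normal(200_000) for k in true}
mb = draws["T_in"] + draws["I_B"] - draws["T_out"] - draws["I_E"]

sim_sd = mb.std()                                      # simulated sigma_MB
pov_sd = np.sqrt(sum(s ** 2 for s in sigma.values()))  # analytic sigma_MB
# both are close to sqrt(0.68), about 0.825
```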
Analysis of Variance: Variably Complex
ERIC Educational Resources Information Center
Drummond, Gordon B.; Vowler, Sarah L.
2012-01-01
These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution…
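The group-comparison logic described above can be made concrete with a hand-rolled one-way F statistic; the three groups below are invented for illustration:

```python
import numpy as np

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA (between-group over within-group
    mean squares)."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_x = np.concatenate(groups)
    k, n_total = len(groups), len(all_x)
    grand = all_x.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# three invented groups; the second group's mean is clearly shifted
f = one_way_anova_f([23, 25, 27, 24, 26], [30, 32, 29, 31, 33], [24, 26, 25, 27, 23])
# f == 24.0 for these data
```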
Variance decomposition in stochastic simulators
Le Maître, O. P.; Knio, O. M.; Moraes, A.
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
Estimating the Modified Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, Charles
1995-01-01
The third-difference approach to modified Allan variance (MVAR) leads to a tractable formula for a measure of MVAR estimator confidence, the equivalent degrees of freedom (edf), in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. A simple approximation for edf is given, and its errors are tabulated. A theorem allowing conservative estimates of edf in the presence of compound noise processes is given.
Relating the Hadamard Variance to MCS Kalman Filter Clock Estimation
NASA Technical Reports Server (NTRS)
Hutsell, Steven T.
1996-01-01
The Global Positioning System (GPS) Master Control Station (MCS) currently makes significant use of the Allan Variance. This two-sample variance equation has proven excellent as a handy, understandable tool, both for time domain analysis of GPS cesium frequency standards, and for fine tuning the MCS's state estimation of these atomic clocks. The Allan Variance does not explicitly converge for noise types of alpha less than or equal to minus 3 and can be greatly affected by frequency drift. Because GPS rubidium frequency standards exhibit non-trivial aging and aging noise characteristics, the basic Allan Variance analysis must be augmented in order to (a) compensate for a dynamic frequency drift, and (b) characterize two additional noise types, specifically alpha = minus 3 and alpha = minus 4. As the GPS program progresses, we will utilize a larger percentage of rubidium frequency standards than ever before. Hence, GPS rubidium clock characterization will require more attention than ever before. The three-sample variance, commonly referred to as a renormalized Hadamard Variance, is unaffected by linear frequency drift, converges for alpha greater than minus 5, and thus has utility for modeling noise in GPS rubidium frequency standards. This paper demonstrates the potential of Hadamard Variance analysis in GPS operations, and presents an equation that relates the Hadamard Variance to the MCS's Kalman filter process noises.
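The drift immunity of the three-sample variance is easy to verify numerically; a minimal sketch, using the common (1/6)<(second difference)^2> normalization at the basic averaging time:

```python
import numpy as np

def hadamard_variance(y):
    """Three-sample (Hadamard) variance of fractional frequency data at
    the basic averaging time: (1/6) <(second difference of y)^2>."""
    d2 = np.diff(np.asarray(y, dtype=float), n=2)
    return np.mean(d2 ** 2) / 6.0

rng = np.random.default_rng(7)
y = rng.standard_normal(50_000)         # white frequency noise
drift = 1e-3 * np.arange(len(y))        # linear frequency drift
h_plain = hadamard_variance(y)
h_drift = hadamard_variance(y + drift)  # drift cancels in the second difference
```

The linear term vanishes exactly under the second difference, so the two estimates agree to floating-point precision.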
Estimating the Modified Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, Charles
1995-01-01
A paper at the 1992 FCS showed how to express the modified Allan variance (mvar) in terms of the third difference of the cumulative sum of time residuals. Although this reformulated definition was presented merely as a computational trick for simplifying the calculation of mvar estimates, it has since turned out to be a powerful theoretical tool for deriving the statistical quality of those estimates in terms of their equivalent degrees of freedom (edf), defined for an estimator V by edf V = 2(EV)^2 / var(V). Confidence intervals for mvar can then be constructed from levels of the appropriate chi-squared distribution.
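The third-difference identity described above can be sketched directly; a minimal estimator under our own variable names, for phase (time-residual) data x with sampling interval tau0:

```python
import numpy as np

def mvar(x, m, tau0=1.0):
    """Modified Allan variance of phase (time-residual) data x at
    averaging factor m, via the third difference of the cumulative sum:
    mvar = <(S_{j+3m} - 3 S_{j+2m} + 3 S_{j+m} - S_j)^2> / (2 m^4 tau0^2)."""
    x = np.asarray(x, dtype=float)
    s = np.concatenate(([0.0], np.cumsum(x)))  # S_k = x_0 + ... + x_{k-1}
    d3 = s[3 * m:] - 3 * s[2 * m:-m] + 3 * s[m:-2 * m] - s[:-3 * m]
    return np.mean(d3 ** 2) / (2.0 * m ** 4 * tau0 ** 2)

rng = np.random.default_rng(3)
x = rng.standard_normal(100_000)  # white phase noise, unit variance
v1 = mvar(x, m=1)                 # reduces to the Allan variance at m = 1 (about 3 here)
v2 = mvar(x, m=2)                 # smaller: mvar falls rapidly with m for white PM
```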
Berroya, Renato B.; Escano, Fernando B.
1972-01-01
This report deals with a rare complication of disc-valve prosthesis in the mitral area. A significant disc poppet and struts destruction of mitral Beall valve prostheses occurred 20 and 17 months after implantation. The resulting valve incompetence in the first case contributed to the death of the patient. The durability of Teflon prosthetic valves appears to be in question and this type of valve probably will be unacceptable if there is an increasing number of disc-valve variance in the future. Images PMID:5017573
A Wavelet Perspective on the Allan Variance.
Percival, Donald B
2016-04-01
The origins of the Allan variance trace back 50 years to two seminal papers, one by Allan (1966) and the other by Barnes (1966). Since then, the Allan variance has played a leading role in the characterization of high-performance time and frequency standards. Wavelets first arose in the early 1980s in the geophysical literature, and the discrete wavelet transform (DWT) became prominent in the late 1980s in the signal processing literature. Flandrin (1992) briefly documented a connection between the Allan variance and a wavelet transform based upon the Haar wavelet. Percival and Guttorp (1994) noted that one popular estimator of the Allan variance, the maximal overlap estimator, can be interpreted in terms of a version of the DWT now widely referred to as the maximal overlap DWT (MODWT). In particular, when the MODWT is based on the Haar wavelet, the variance of the resulting wavelet coefficients (the wavelet variance) is identical to the Allan variance when the latter is multiplied by one-half. The theory behind the wavelet variance can thus deepen our understanding of the Allan variance. In this paper, we review basic wavelet variance theory with an emphasis on the Haar-based wavelet variance and its connection to the Allan variance. We then note that estimation theory for the wavelet variance offers a means of constructing asymptotically correct confidence intervals (CIs) for the Allan variance without reverting to the common practice of specifying a power-law noise type a priori. We also review recent work on specialized estimators of the wavelet variance that are of interest when some observations are missing (gappy data) or in the presence of contamination (rogue observations or outliers). It is a simple matter to adapt these estimators to become estimators of the Allan variance. Finally, we note that wavelet variances based upon wavelets other than the Haar offer interesting generalizations of the Allan variance.
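The stated Haar connection is easy to check numerically: for fractional frequency data, the unit-scale Haar MODWT coefficient is (y_t - y_{t-1})/2, so its variance is exactly one-half the Allan variance. A minimal sketch with simulated data:

```python
import numpy as np

rng = np.random.default_rng(5)
y = rng.standard_normal(10_000)        # fractional frequency data

avar = 0.5 * np.mean(np.diff(y) ** 2)  # Allan variance at the basic averaging time

w = np.diff(y) / 2.0                   # unit-scale Haar MODWT coefficients
wavelet_var = np.mean(w ** 2)          # equals one-half the Allan variance
```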
Partitioning Predicted Variance into Constituent Parts: A Primer on Regression Commonality Analysis.
ERIC Educational Resources Information Center
Amado, Alfred J.
Commonality analysis is a method of decomposing the R squared in a multiple regression analysis into the proportion of explained variance of the dependent variable associated with each independent variable uniquely and the proportion of explained variance associated with the common effects of one or more independent variables in various…
Warped functional analysis of variance.
Gervini, Daniel; Carter, Patrick A
2014-09-01
This article presents an Analysis of Variance model for functional data that explicitly incorporates phase variability through a time-warping component, allowing for a unified approach to estimation and inference in the presence of amplitude and time variability. The focus is on single-random-factor models but the approach can be easily generalized to more complex ANOVA models. The behavior of the estimators is studied by simulation, and an application to the analysis of growth curves of flour beetles is presented. Although the model assumes a smooth latent process behind the observed trajectories, smoothness of the observed data is not required; the method can be applied to irregular time grids, which are common in longitudinal studies.
Speed Variance and Its Influence on Accidents.
ERIC Educational Resources Information Center
Garber, Nicholas J.; Gadirau, Ravi
A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…
Multivariate Granger causality and generalized variance
NASA Astrophysics Data System (ADS)
Barrett, Adam B.; Barnett, Lionel; Seth, Anil K.
2010-04-01
Granger causality analysis is a popular method for inference on directed interactions in complex systems of many variables. A shortcoming of the standard framework for Granger causality is that it only allows for examination of interactions between single (univariate) variables within a system, perhaps conditioned on other variables. However, interactions do not necessarily take place between single variables but may occur among groups or “ensembles” of variables. In this study we establish a principled framework for Granger causality in the context of causal interactions among two or more multivariate sets of variables. Building on Geweke’s seminal 1982 work, we offer additional justifications for one particular form of multivariate Granger causality based on the generalized variances of residual errors. Taken together, our results support a comprehensive and theoretically consistent extension of Granger causality to the multivariate case. Treated individually, they highlight several specific advantages of the generalized variance measure, which we illustrate using applications in neuroscience as an example. We further show how the measure can be used to define “partial” Granger causality in the multivariate context and we also motivate reformulations of “causal density” and “Granger autonomy.” Our results are directly applicable to experimental data and promise to reveal new types of functional relations in complex systems, neural and otherwise.
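The generalized-variance form of multivariate Granger causality discussed above can be sketched as the log-ratio of residual-covariance determinants from a restricted and an unrestricted VAR. The Python sketch below is a minimal illustration under our own simulated data and naming (it is not the authors' implementation):

```python
import numpy as np

def gc_generalized_variance(X, Y, p=1):
    """Granger causality F_{Y->X} for multivariate blocks X, Y
    (rows = time, cols = variables), as the log-ratio of generalized
    variances (determinants of residual covariance matrices) of a
    VAR(p) fit without and with the lagged-Y regressors."""
    T = X.shape[0]
    def lagged(Z):
        return np.hstack([Z[p - k - 1:T - k - 1] for k in range(p)])
    target = X[p:]
    full_reg = np.hstack([lagged(X), lagged(Y), np.ones((T - p, 1))])
    red_reg  = np.hstack([lagged(X), np.ones((T - p, 1))])
    def resid_cov(reg):
        beta, *_ = np.linalg.lstsq(reg, target, rcond=None)
        r = target - reg @ beta
        return np.cov(r, rowvar=False)
    return np.log(np.linalg.det(resid_cov(red_reg))
                  / np.linalg.det(resid_cov(full_reg)))

rng = np.random.default_rng(1)
T = 4000
Y = rng.standard_normal((T, 2))               # a 2-variable "ensemble"
X = np.zeros((T, 2))
for t in range(1, T):                         # Y drives X with a one-step lag
    X[t] = 0.3 * X[t - 1] + 0.8 * Y[t - 1] + 0.1 * rng.standard_normal(2)
f_yx = gc_generalized_variance(X, Y)          # clearly positive
f_xy = gc_generalized_variance(Y, X)          # near zero
assert f_yx > 1.0 and f_xy < 0.1
```

Because the measure uses determinants rather than traces, it accounts for correlations among residuals within each multivariate block, which is one of the advantages the paper argues for.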
Increasing selection response by Bayesian modeling of heterogeneous environmental variances
Technology Transfer Automated Retrieval System (TEKTRAN)
Heterogeneity of environmental variance among genotypes reduces selection response because genotypes with higher variance are more likely to be selected than low-variance genotypes. Modeling heterogeneous variances to obtain weighted means corrected for heterogeneous variances is difficult in likel...
Restricted sample variance reduces generalizability.
Lakes, Kimberley D
2013-06-01
One factor that affects the reliability of observed scores is restriction of range on the construct measured for a particular group of study participants. This study illustrates how researchers can use generalizability theory to evaluate the impact of restriction of range in particular sample characteristics on the generalizability of test scores and to estimate how changes in measurement design could improve the generalizability of the test scores. An observer-rated measure of child self-regulation (Response to Challenge Scale; Lakes, 2011) is used to examine scores for 198 children (Grades K through 5) within the generalizability theory (GT) framework. The generalizability of ratings within relatively developmentally homogeneous samples is examined and illustrates the effect of reduced variance among ratees on generalizability. Forecasts for g coefficients of various D study designs demonstrate how higher generalizability could be achieved by increasing the number of raters or items. In summary, the research presented illustrates the importance of and procedures for evaluating the generalizability of a set of scores in a particular research context.
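The D-study forecasts mentioned above follow a simple variance-ratio formula. The sketch below uses hypothetical variance components (our own illustrative numbers, not the study's estimates) to show how the g coefficient rises with more raters and falls when the person variance is restricted:

```python
# Hypothetical variance components from a G study (persons x raters design):
var_person = 0.50      # universe-score (true) variance among ratees
var_resid  = 0.75      # rater-by-person interaction plus error

def g_coefficient(n_raters):
    """D-study forecast: relative g coefficient for the mean of n_raters."""
    return var_person / (var_person + var_resid / n_raters)

forecast = {n: round(g_coefficient(n), 3) for n in (1, 2, 4, 8)}
# forecast == {1: 0.4, 2: 0.571, 4: 0.727, 8: 0.842}

# Restriction of range (smaller person variance) lowers g at every design size:
restricted = 0.20 / (0.20 + var_resid / 4)
assert g_coefficient(4) > g_coefficient(1)
assert restricted < g_coefficient(4)
```

The same formula with items in place of raters (or both facets crossed) gives the other D-study designs the abstract refers to.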
Generalized analysis of molecular variance.
Nievergelt, Caroline M; Libiger, Ondrej; Schork, Nicholas J
2007-04-06
Many studies in the fields of genetic epidemiology and applied population genetics are predicated on, or require, an assessment of the genetic background diversity of the individuals chosen for study. A number of strategies have been developed for assessing genetic background diversity. These strategies typically focus on genotype data collected on the individuals in the study, based on a panel of DNA markers. However, many of these strategies are either rooted in cluster analysis techniques, and hence suffer from problems inherent to the assignment of the biological and statistical meaning to resulting clusters, or have formulations that do not permit easy and intuitive extensions. We describe a very general approach to the problem of assessing genetic background diversity that extends the analysis of molecular variance (AMOVA) strategy introduced by Excoffier and colleagues some time ago. As in the original AMOVA strategy, the proposed approach, termed generalized AMOVA (GAMOVA), requires a genetic similarity matrix constructed from the allelic profiles of individuals under study and/or allele frequency summaries of the populations from which the individuals have been sampled. The proposed strategy can be used to either estimate the fraction of genetic variation explained by grouping factors such as country of origin, race, or ethnicity, or to quantify the strength of the relationship of the observed genetic background variation to quantitative measures collected on the subjects, such as blood pressure levels or anthropometric measures. Since the formulation of our test statistic is rooted in multivariate linear models, sets of variables can be related to genetic background in multiple regression-like contexts. GAMOVA can also be used to complement graphical representations of genetic diversity such as tree diagrams (dendrograms) or heatmaps. We examine features, advantages, and power of the proposed procedure and showcase its flexibility by using it to analyze a
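The core of an AMOVA-style analysis, partitioning dispersion in a genetic distance matrix by a grouping factor, can be sketched as a pseudo-F statistic. The following Python sketch is a simplified illustration in the spirit of AMOVA/PERMANOVA (the toy "populations", marker data, and function name are ours, not from GAMOVA):

```python
import numpy as np

def pseudo_f(dist, groups):
    """Pseudo-F from a pairwise distance matrix and group labels:
    between-group versus within-group dispersion."""
    n = len(groups)
    labels = np.unique(groups)
    k = len(labels)
    d2 = dist**2
    ss_total = d2[np.triu_indices(n, 1)].sum() / n
    ss_within = 0.0
    for g in labels:
        idx = np.where(groups == g)[0]
        sub = d2[np.ix_(idx, idx)]
        ss_within += sub[np.triu_indices(len(idx), 1)].sum() / len(idx)
    ss_between = ss_total - ss_within
    return (ss_between / (k - 1)) / (ss_within / (n - k))

rng = np.random.default_rng(7)
# Two toy "populations" separated in a 5-marker allele-frequency space.
pop_a = rng.normal(0.2, 0.05, size=(20, 5))
pop_b = rng.normal(0.6, 0.05, size=(20, 5))
data = np.vstack([pop_a, pop_b])
groups = np.array(["A"] * 20 + ["B"] * 20)
dist = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=-1)

f_structured = pseudo_f(dist, groups)                 # large: real structure
f_shuffled = pseudo_f(dist, rng.permutation(groups))  # near 1: no structure
assert f_structured > f_shuffled
```

GAMOVA extends this idea to quantitative covariates and multiple-regression-like designs, but the distance-matrix partition above is the common starting point.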
Code of Federal Regulations, 2011 CFR
2011-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2012 CFR
2012-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2014 CFR
2014-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2013 CFR
2013-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2010 CFR
2010-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Perspective projection for variance pose face recognition from camera calibration
NASA Astrophysics Data System (ADS)
Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.
2016-04-01
Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance-pose face features is challenging. We provide a solution to this problem using perspective projection for variance-pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face-box tracking and centre-of-eyes detection can be performed using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. The training of frontal images and the rest of the poses on the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work; our algorithm outperforms other state-of-the-art algorithms, enabling stable measurement in variance pose for each individual.
Variance Design and Air Pollution Control
ERIC Educational Resources Information Center
Ferrar, Terry A.; Brownstein, Alan B.
1975-01-01
Air pollution control authorities were forced to relax air quality standards during the winter of 1972 by granting variances. This paper examines the institutional characteristics of these variance policies from an economic incentive standpoint, sets up desirable structural criteria for institutional design and arrives at policy guidelines for…
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 4 2013-01-01 2013-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
40 CFR 142.41 - Variance request.
Code of Federal Regulations, 2010 CFR
2010-07-01
....41 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of...
40 CFR 142.41 - Variance request.
Code of Federal Regulations, 2011 CFR
2011-07-01
....41 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of...
40 CFR 142.41 - Variance request.
Code of Federal Regulations, 2012 CFR
2012-07-01
....41 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of...
Portfolio optimization with mean-variance model
NASA Astrophysics Data System (ADS)
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed in portfolio optimization. The mean-variance model is an optimization model that aims to minimize the portfolio risk, which is the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the optimal weights differ across the component stocks. Moreover, investors can obtain returns at the minimum level of risk with the constructed optimal mean-variance portfolio.
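The variance-minimizing portfolio in the mean-variance framework has a closed form. The Python sketch below computes the global minimum-variance weights w = S^-1 1 / (1' S^-1 1) on simulated weekly returns (five hypothetical stocks stand in for the study's 20 FBMKLCI components; the algebra is identical):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated weekly returns: 260 weeks, 5 hypothetical stocks with
# equicorrelated covariance (an assumption made for this sketch).
returns = rng.multivariate_normal(
    mean=[0.002] * 5,
    cov=0.001 * (0.3 * np.ones((5, 5)) + 0.7 * np.eye(5)),
    size=260)
sigma = np.cov(returns, rowvar=False)

# Global minimum-variance weights: w = S^-1 1 / (1' S^-1 1)
ones = np.ones(5)
w = np.linalg.solve(sigma, ones)
w /= w.sum()

port_var  = w @ sigma @ w                     # optimized portfolio variance
equal_var = (ones / 5) @ sigma @ (ones / 5)   # naive equal-weight variance
assert np.isclose(w.sum(), 1.0)
assert port_var <= equal_var + 1e-12          # GMV is the variance minimizer
```

Adding a target-return constraint turns this into the full Markowitz problem, solvable the same way with one extra Lagrange condition.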
Bottleneck Effects on Genetic Variance for Courtship Repertoire
Meffert, L. M.
1995-01-01
Bottleneck effects on evolutionary potential in mating behavior were addressed through assays of additive genetic variances and resulting phenotypic responses to drift in the courtship repertoires of six two-pair founder-flush lines and two control populations of the housefly. A simulation addressed the complication that an estimate of the genetic variance for a courtship trait (e.g., male performance vigor or the female requirement for copulation) must involve assays against the background behavior of the mating partners. The additive "environmental" effect of the mating partner's phenotype simply dilutes the net parent-offspring covariance for a trait. However, if there is an interaction with this "environmental" component, negative parent-offspring covariances can result under conditions of high incompatibility between the population's distributions for male performance and female choice requirements, despite high levels of genetic variance. All six bottlenecked lines exhibited significant differentiation from the controls in at least one measure of the parent-offspring covariance for male performance or female choice (estimated by 50 parent-son and 50 parent-daughter covariances for 10 courtship traits per line) which translated to significant phenotypic drift. However, the average effect across traits or across lines did not yield a significant net increase in genetic variance due to bottlenecks. Concerted phenotypic differentiation due to the founder-flush event provided indirect evidence of directional dominance in a subset of traits. Furthermore, indirect evidence of genotype-environment interactions (potentially producing genotype-genotype effects) was found in the negative parent-offspring covariances predicted by the male-female interaction simulation and by the association of the magnitude of phenotypic drift with the absolute value of the parent-offspring covariance. Hence, nonadditive genetic effects on mating behavior may be important in
This document provides assistance to those seeking to submit a variance request for LDR treatability variances and determinations of equivalent treatment regarding the hazardous waste land disposal restrictions program.
Genetic Variance for Body Size in a Natural Population of Drosophila Buzzatii
Ruiz, A.; Santos, M.; Barbadilla, A.; Quezada-Diaz, J. E.; Hasson, E.; Fontdevila, A.
1991-01-01
Previous work has shown thorax length to be under directional selection in the Drosophila buzzatii population of Carboneras. In order to predict the genetic consequences of natural selection, genetic variation for this trait was investigated in two ways. First, narrow sense heritability was estimated in the laboratory F(2) generation of a sample of wild flies by means of the offspring-parent regression. A relatively high value, 0.59, was obtained. Because the phenotypic variance of wild flies was 7-9 times that of the flies raised in the laboratory, "natural" heritability may be estimated as one-seventh to one-ninth that value. Second, the contribution of the second and fourth chromosomes, which are polymorphic for paracentric inversions, to the genetic variance of thorax length was estimated in the field and in the laboratory. This was done with the assistance of a simple genetic model which shows that the variance among chromosome arrangements and the variance among karyotypes provide minimum estimates of the chromosome's contribution to the additive and genetic variances of the trait, respectively. In males raised under optimal conditions in the laboratory, the variance among second-chromosome karyotypes accounted for 11.43% of the total phenotypic variance and most of this variance was additive; by contrast, the contribution of the fourth chromosome was nonsignificant. The variance among second-chromosome karyotypes accounted for 1.56-1.78% of the total phenotypic variance in wild males and was nonsignificant in wild females. The variance among fourth chromosome karyotypes accounted for 0.14-3.48% of the total phenotypic variance in wild flies. At both chromosomes, the proportion of additive variance was higher in mating flies than in nonmating flies. PMID:1916242
Portfolio optimization using median-variance approach
NASA Astrophysics Data System (ADS)
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, and this is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve the portfolio optimization. This approach successfully caters for both normal and non-normal data distributions. With this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach produces a lower risk for each level of return compared to the mean-variance approach.
Neural field theory with variance dynamics.
Robinson, P A
2013-06-01
Previous neural field models have mostly been concerned with prediction of mean neural activity and with second order quantities such as its variance, but without feedback of second order quantities on the dynamics. Here the effects of feedback of the variance on the steady states and adiabatic dynamics of neural systems are calculated using linear neural field theory to estimate the neural voltage variance, then including this quantity in the total variance parameter of the nonlinear firing rate-voltage response function, and thus into determination of the fixed points and the variance itself. The general results further clarify the limits of validity of approaches with and without inclusion of variance dynamics. Specific applications show that stability against a saddle-node bifurcation is reduced in a purely cortical system, but can be either increased or decreased in the corticothalamic case, depending on the initial state. Estimates of critical variance scalings near saddle-node bifurcation are also found, including physiologically based normalizations and new scalings for mean firing rate and the position of the bifurcation.
Variance estimation for stratified propensity score estimators.
Williamson, E J; Morley, R; Lucas, A; Carpenter, J R
2012-07-10
Propensity score methods are increasingly used to estimate the effect of a treatment or exposure on an outcome in non-randomised studies. We focus on one such method, stratification on the propensity score, comparing it with the method of inverse-probability weighting by the propensity score. The propensity score, the conditional probability of receiving the treatment given observed covariates, is usually an unknown probability estimated from the data. Estimators for the variance of treatment effect estimates typically used in practice, however, do not take into account that the propensity score itself has been estimated from the data. By deriving the asymptotic marginal variance of the stratified estimate of treatment effect, correctly taking into account the estimation of the propensity score, we show that routinely used variance estimators are likely to produce confidence intervals that are too conservative when the propensity score model includes variables that predict (cause) the outcome, but only weakly predict the treatment. In contrast, a comparison with the analogous marginal variance for the inverse probability weighted (IPW) estimator shows that routinely used variance estimators for the IPW estimator are likely to produce confidence intervals that are almost always too conservative. Because exact calculation of the asymptotic marginal variance is likely to be complex, particularly for the stratified estimator, we suggest that bootstrap estimates of variance should be used in practice.
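Stratification on the propensity score, and the bootstrap variance the authors recommend, can be sketched in a few lines. The simulation below is our own (it uses the known, true propensity score for simplicity; in practice the score is estimated, which is precisely the point of the paper, and the propensity model would be re-fit inside each bootstrap replicate):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.standard_normal(n)                       # confounder
ps = 1 / (1 + np.exp(-0.5 * x))                  # true propensity score
z = rng.binomial(1, ps)                          # treatment indicator
y = 1.0 * z + 2.0 * x + rng.standard_normal(n)   # true effect = 1.0

def stratified_effect(ps, z, y, n_strata=5):
    """Average within-stratum treated-minus-control difference,
    with strata cut at propensity-score quintiles."""
    edges = np.quantile(ps, np.linspace(0, 1, n_strata + 1))
    strata = np.clip(np.searchsorted(edges, ps, side="right") - 1,
                     0, n_strata - 1)
    diffs = [y[(strata == s) & (z == 1)].mean()
             - y[(strata == s) & (z == 0)].mean()
             for s in range(n_strata)]
    return np.mean(diffs)

est = stratified_effect(ps, z, y)

# Bootstrap standard error, resampling subjects:
boots = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    boots.append(stratified_effect(ps[idx], z[idx], y[idx]))
se = np.std(boots, ddof=1)
assert abs(est - 1.0) < 0.25 and se < 0.2
```

With five strata most, but not all, of the confounding by x is removed, which is why the estimate sits near, not exactly at, the true effect of 1.0.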
Measurement of Allan variance and phase noise at fractions of a millihertz
NASA Technical Reports Server (NTRS)
Conroy, Bruce L.; Le, Duc
1990-01-01
Although the measurement of Allan variance of oscillators is well documented, there is a need for a simplified system for finding the degradation of phase noise and Allan variance step-by-step through a system. This article describes an instrumentation system for simultaneous measurement of additive phase noise and degradation in Allan variance through a transmitter system. Also included are measurements of a 20-kW X-band transmitter showing the effect of adding a pass tube regulator.
Commonality Analysis: A Method of Analyzing Unique and Common Variance Proportions.
ERIC Educational Resources Information Center
Kroff, Michael W.
This paper considers the use of commonality analysis as an effective tool for analyzing relationships between variables in multiple regression or canonical correlational analysis (CCA). The merits of commonality analysis are discussed and the procedure for running commonality analysis is summarized as a four-step process. A heuristic example is…
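For the two-predictor case, the commonality partition reduces to three subtractions on R-squared values. The Python sketch below (our own simulated data, not the paper's heuristic example) decomposes the explained variance into two unique components and one common component:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(5)
n = 500
f = rng.standard_normal(n)                 # shared factor
x1 = f + rng.standard_normal(n)            # two correlated predictors
x2 = f + rng.standard_normal(n)
y = x1 + x2 + rng.standard_normal(n)

r2_full = r_squared(np.column_stack([x1, x2]), y)
r2_1 = r_squared(x1[:, None], y)
r2_2 = r_squared(x2[:, None], y)

unique_1 = r2_full - r2_2          # variance only x1 explains
unique_2 = r2_full - r2_1          # variance only x2 explains
common   = r2_1 + r2_2 - r2_full   # variance the two jointly explain
assert np.isclose(unique_1 + unique_2 + common, r2_full)
assert common > 0                  # correlated predictors share variance
```

With k predictors the partition has 2^k - 1 components, which is why commonality analysis is usually run with software rather than by hand beyond two or three variables.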
Reducing variance in batch partitioning measurements
Mariner, Paul E.
2010-08-11
The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure Kd values (e.g., ASTM D 4646 and EPA 402-R-99-004A) do not explain how to evaluate measurement uncertainty or how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjustment of the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
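The Monte Carlo argument above is easy to reproduce. The sketch below (our own error model, with assumed relative and detection-limit errors on the concentration measurements, not the paper's simulation) shows why the Kd coefficient of variation is far smaller when roughly half the sorbate partitions to the sorbent:

```python
import numpy as np

def kd_cv(frac_sorbed, rel_err=0.02, abs_err=0.004, n_sim=20000, seed=11):
    """Monte Carlo coefficient of variation of a batch Kd measurement
    when a fraction `frac_sorbed` of the sorbate leaves solution.
    Both concentration measurements carry `rel_err` relative error,
    and the aqueous measurement also carries an `abs_err`
    detection-limit floor (arbitrary units; assumptions are ours)."""
    rng = np.random.default_rng(seed)
    c0_true = 1.0
    cw_true = c0_true * (1 - frac_sorbed)
    c0 = c0_true * (1 + rel_err * rng.standard_normal(n_sim))
    cw = (cw_true * (1 + rel_err * rng.standard_normal(n_sim))
          + abs_err * rng.standard_normal(n_sim))
    kd = (c0 - cw) / cw          # times V/m, taken as 1 here
    return kd.std() / kd.mean()

# Precision is far better near 50% sorbed than at either extreme:
assert kd_cv(0.5) < kd_cv(0.05)   # little sorbed: small-difference error
assert kd_cv(0.5) < kd_cv(0.98)   # nearly all sorbed: Cw near detection limit
```

At low uptake, Kd is a small difference between two noisy concentrations; at very high uptake, the aqueous concentration sits near the detection limit. Both extremes inflate the variance, which is the paper's case for tuning the solution:sorbent ratio.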
78 FR 14122 - Revocation of Permanent Variances
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-04
... Occupational Safety and Health Administration Revocation of Permanent Variances AGENCY: Occupational Safety and Health Administration (OSHA), Labor. ACTION: Notice of revocation. SUMMARY: With this notice, OSHA is... into consideration these newly corrected cross references. DATES: The effective date of the...
Code of Federal Regulations, 2014 CFR
2014-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2013 CFR
2013-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2012 CFR
2012-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2010 CFR
2010-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2011 CFR
2011-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
The Genetic Architecture of Quantitative Traits Cannot Be Inferred from Variance Component Analysis
Huang, Wen; Mackay, Trudy F. C.
2016-01-01
Classical quantitative genetic analyses estimate additive and non-additive genetic and environmental components of variance from phenotypes of related individuals without knowing the identities of quantitative trait loci (QTLs). Many studies have found a large proportion of quantitative trait variation can be attributed to the additive genetic variance (VA), providing the basis for claims that non-additive gene actions are unimportant. In this study, we show that arbitrarily defined parameterizations of genetic effects seemingly consistent with non-additive gene actions can also capture the majority of genetic variation. This reveals a logical flaw in using the relative magnitudes of variance components to indicate the relative importance of additive and non-additive gene actions. We discuss the implications and propose that variance component analyses should not be used to infer the genetic architecture of quantitative traits. PMID:27812106
Phonocardiographic diagnosis of aortic ball variance.
Hylen, J C; Kloster, F E; Herr, R H; Hull, P Q; Ames, A W; Starr, A; Griswold, H E
1968-07-01
Fatty infiltration causing changes in the silastic poppet of the Model 1000 series Starr-Edwards aortic valve prostheses (ball variance) has been detected with increasing frequency and can result in sudden death. Phonocardiograms were recorded from 12 patients with ball variance confirmed at operation and from 31 controls. Ten of the 12 patients with ball variance were distinguished from the controls by an aortic opening sound (AO) less than half as intense as the aortic closure sound (AC) at the second right intercostal space (AO/AC ratio less than 0.5). Both AO and AC were decreased in two patients with ball variance, with the loss of the characteristic high frequency and amplitude of these sounds. The only patient having a diminished AO/AC ratio (0.42) without ball variance at reoperation had a clot extending over the aortic valve struts. The phonocardiographic findings have been the most reliable objective evidence of ball variance in patients with Starr-Edwards aortic prostheses of the Model 1000 series.
Doppler variance imaging for three-dimensional retina and choroid angiography
NASA Astrophysics Data System (ADS)
Yu, Lingfeng; Chen, Zhongping
2010-01-01
We demonstrate the use of Doppler variance (standard deviation) imaging for 3-D in vivo angiography in the human eye. In addition to the regular optical Doppler tomography velocity and structural images, we use the variance of blood flow velocity to map the retina and choroid vessels. Variance imaging is subject to bulk motion artifacts as in phase-resolved Doppler imaging, and a histogram-based method is proposed for bulk-motion correction in variance imaging. Experiments were performed to demonstrate the effectiveness of the proposed method for 3-D vasculature imaging of human retina and choroid.
Methods to Estimate the Between-Study Variance and Its Uncertainty in Meta-Analysis
ERIC Educational Resources Information Center
Veroniki, Areti Angeliki; Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian P. T.; Langan, Dean; Salanti, Georgia
2016-01-01
Meta-analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance,…
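The DerSimonian and Laird estimator named in this abstract is a simple moment estimator. The Python sketch below applies it to a toy meta-analysis (the study effects and within-study variances are invented for illustration):

```python
import numpy as np

def dersimonian_laird_tau2(y, v):
    """DerSimonian-Laird moment estimator of the between-study
    variance tau^2, from study effects y and within-study variances v:
    tau^2 = max(0, (Q - (k - 1)) / (sum(w) - sum(w^2)/sum(w)))."""
    y = np.asarray(y, dtype=float)
    w = 1.0 / np.asarray(v, dtype=float)     # fixed-effect weights
    ybar = np.sum(w * y) / np.sum(w)         # weighted mean effect
    q = np.sum(w * (y - ybar) ** 2)          # Cochran's Q statistic
    k = len(y)
    denom = np.sum(w) - np.sum(w**2) / np.sum(w)
    return max(0.0, (q - (k - 1)) / denom)

# Toy meta-analysis: five study effects with known within-study variances.
y = np.array([0.10, 0.30, 0.35, 0.65, 0.45])
v = np.array([0.01, 0.02, 0.01, 0.015, 0.01])
tau2 = dersimonian_laird_tau2(y, v)
assert tau2 > 0       # heterogeneity beyond sampling error is detected
```

The truncation at zero is part of the definition: when Q falls below its degrees of freedom, the estimator returns no between-study variance.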
On Studying Common Factor Variance in Multiple-Component Measuring Instruments
ERIC Educational Resources Information Center
Raykov, Tenko; Pohl, Steffi
2013-01-01
A method for examining common factor variance in multiple-component measuring instruments is outlined. The procedure is based on an application of the latent variable modeling methodology and is concerned with evaluating observed variance explained by a global factor and by one or more additional component-specific factors. The approach furnishes…
Functional analysis of variance for association studies.
Vsevolozhskaya, Olga A; Zaykin, Dmitri V; Greenwood, Mark C; Wei, Changshuai; Lu, Qing
2014-01-01
While progress has been made in identifying common genetic variants associated with human diseases, for most common complex diseases the identified genetic variants account for only a small proportion of heritability. Challenges remain in finding additional unknown genetic variants predisposing to complex diseases. With the advances in next-generation sequencing technologies, sequencing studies have become commonplace in genetic research. The ongoing exome-sequencing and whole-genome-sequencing studies generate a massive amount of sequencing variants and allow researchers to comprehensively investigate their role in human diseases. The discovery of new disease-associated variants can be enhanced by utilizing powerful and computationally efficient statistical methods. In this paper, we propose a functional analysis of variance (FANOVA) method for testing the association of sequence variants in a genomic region with a qualitative trait. The FANOVA has a number of advantages: (1) it tests for a joint effect of gene variants, including both common and rare; (2) it fully utilizes linkage disequilibrium and genetic position information; and (3) it allows for either protective or risk-increasing causal variants. Through simulations, we show that FANOVA outperforms two widely used methods, SKAT and a previously proposed method based on functional linear models (FLM), especially when the sample size of a study is small and/or the sequence variants have low to moderate effects. We conduct an empirical study by applying the three methods (FANOVA, SKAT and FLM) to sequencing data from the Dallas Heart Study. While SKAT and FLM detected ANGPTL4 and ANGPTL3, respectively, as associated with obesity, FANOVA was able to identify both genes as associated with obesity.
Discrimination of frequency variance for tonal sequences.
Byrne, Andrew J; Viemeister, Neal F; Stellmack, Mark A
2014-12-01
Real-world auditory stimuli are highly variable across occurrences and sources. The present study examined the sensitivity of human listeners to differences in global stimulus variability. In a two-interval, forced-choice task, variance discrimination was measured using sequences of five 100-ms tone pulses. The frequency of each pulse was sampled randomly from a distribution that was Gaussian in logarithmic frequency. In the non-signal interval, the sampled distribution had a variance of σ²STAN, while in the signal interval, the variance of the sequence was σ²SIG (with σ²SIG > σ²STAN). The listener's task was to choose the interval with the larger variance. To constrain possible decision strategies, the mean frequency of the sampling distribution of each interval was randomly chosen for each presentation. Psychometric functions were measured for various values of σ²STAN. Although performance was remarkably similar across listeners, overall performance was poorer than that of an ideal observer (IO) which perfectly compares interval variances. However, like the IO, Weber's Law behavior was observed, with a constant ratio of (σ²SIG − σ²STAN) to σ²STAN yielding similar performance. A model which degraded the IO with a frequency-resolution noise and a computational noise provided a reasonable fit to the data.
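The ideal-observer benchmark described above simply picks the interval whose sequence has the larger sample variance. A minimal Monte Carlo sketch of that benchmark, with illustrative variance values rather than the study's stimulus parameters:

```python
import numpy as np

def simulate_io_percent_correct(var_stan, var_sig, n_pulses=5,
                                n_trials=20000, seed=0):
    """Monte Carlo percent correct for an ideal observer that chooses
    the interval whose n-tone sequence has the larger sample variance.
    Values are drawn in log-frequency units; because the sample
    variance subtracts the interval mean, the randomized mean
    frequency is irrelevant to this observer and is omitted.
    """
    rng = np.random.default_rng(seed)
    stan = rng.normal(0.0, np.sqrt(var_stan), (n_trials, n_pulses))
    sig = rng.normal(0.0, np.sqrt(var_sig), (n_trials, n_pulses))
    # Correct whenever the signal interval's sample variance is larger
    correct = sig.var(axis=1, ddof=1) > stan.var(axis=1, ddof=1)
    return correct.mean()

pc = simulate_io_percent_correct(var_stan=1.0, var_sig=2.0)
```

The Weber's-Law pattern reported above shows up directly in this observer: scaling both variances by the same factor leaves the distribution of the decision statistic, and hence percent correct, unchanged.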
Variance Decomposition Using an IRT Measurement Model
Glas, Cees A. W.; Boomsma, Dorret I.
2007-01-01
Large scale research projects in behaviour genetics and genetic epidemiology are often based on questionnaire or interview data. Typically, a number of items is presented to a number of subjects, the subjects’ sum scores on the items are computed, and the variance of sum scores is decomposed into a number of variance components. This paper discusses several disadvantages of the approach of analysing sum scores, such as the attenuation of correlations amongst sum scores due to their unreliability. It is shown that the framework of Item Response Theory (IRT) offers a solution to most of these problems. We argue that an IRT approach in combination with Markov chain Monte Carlo (MCMC) estimation provides a flexible and efficient framework for modelling behavioural phenotypes. Next, we use data simulation to illustrate the potentially huge bias in estimating variance components on the basis of sum scores. We then apply the IRT approach with an analysis of attention problems in young adult twins where the variance decomposition model is extended with an IRT measurement model. We show that when estimating an IRT measurement model and a variance decomposition model simultaneously, the estimate for the heritability of attention problems increases from 40% (based on sum scores) to 73%. PMID:17534709
Variance in binary stellar population synthesis
NASA Astrophysics Data System (ADS)
Breivik, Katelyn; Larson, Shane L.
2016-03-01
In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.
A Simple Algorithm for Approximating Confidence on the Modified Allan Variance and the Time Variance
NASA Technical Reports Server (NTRS)
Weiss, Marc A.; Greenhall, Charles A.
1996-01-01
An approximating algorithm for computing equivalent degrees of freedom of the Modified Allan Variance and its square root, the Modified Allan Deviation (MVAR and MDEV), and the Time Variance and Time Deviation (TVAR and TDEV) is presented, along with an algorithm for approximating the inverse chi-square distribution.
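For context, the Modified Allan Variance itself can be computed from phase samples with a standard estimator; the sketch below implements that estimator and the TVAR = (τ²/3)·MVAR relation, not the paper's degrees-of-freedom algorithm:

```python
import numpy as np

def mod_allan_variance(x, m, tau0=1.0):
    """Modified Allan variance of phase data x at averaging factor m.

    Averages m consecutive second differences of the phase before
    squaring (tau = m * tau0), which is what distinguishes MVAR from
    the ordinary Allan variance.
    """
    x = np.asarray(x, float)
    n = len(x)
    if n < 3 * m + 1:
        raise ValueError("need at least 3*m + 1 phase samples")
    d2 = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]    # second differences
    # moving sum of m consecutive second differences via cumsum
    c = np.cumsum(np.concatenate(([0.0], d2)))
    inner = c[m:] - c[:-m]
    tau = m * tau0
    return np.mean(inner ** 2) / (2.0 * m ** 2 * tau ** 2)

def time_variance(x, m, tau0=1.0):
    """TVAR = (tau^2 / 3) * MVAR."""
    tau = m * tau0
    return tau ** 2 / 3.0 * mod_allan_variance(x, m, tau0)
```

A quick sanity check: a perfectly linear phase ramp (constant frequency offset) has zero second differences, so both MVAR and TVAR vanish for it.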
Variance of Dispersion Coefficients in Heterogeneous Porous Media
NASA Astrophysics Data System (ADS)
Dentz, Marco; De Barros, Felipe P. J.
2013-04-01
We study the dispersion of a passive solute in heterogeneous porous media using a stochastic modeling approach. Heterogeneity on one hand leads to an increase of solute spreading, which is described by the well-known macrodispersion phenomenon. On the other hand, it induces uncertainty about the dispersion behavior, which is quantified by ensemble averages over suitably defined dispersion coefficients in single medium realizations. We focus here on the sample-to-sample fluctuations of dispersion coefficients about their ensemble mean values for solutes evolving from point-like and extended source distributions in d = 2 and d = 3 spatial dimensions. The definition of dispersion coefficients in single medium realizations for finite source sizes is not unique, unlike for point-like sources. Thus, we first discuss a series of dispersion measures, which describe the extension of the solute plume, as well as dispersion measures that quantify the solute dispersion relative to the injection point. The sample-to-sample fluctuations of these observables are quantified in terms of the variance with respect to their ensemble averages. We find that while the ensemble averages of these dispersion measures may be identical, their fluctuation behavior may be very different. This is quantified using perturbation expansions in the fluctuations of the random flow field. We derive explicit expressions for the time evolution of the variance of the dispersion coefficients. The characteristic time scale for the variance evolution is given by the typical dispersion time over the characteristic heterogeneity scale and the dimensions of the source. We find that the dispersion variances asymptotically decrease to zero in d = 3 dimensions, which means the dispersion coefficients are self-averaging observables, at least for moderate heterogeneity. In d = 2 dimensions, the variance converges towards a finite asymptotic value that is independent of the source distribution. Dispersion is not
Testing Interaction Effects without Discarding Variance.
ERIC Educational Resources Information Center
Lopez, Kay A.
Analysis of variance (ANOVA) and multiple regression are two of the most commonly used methods of data analysis in behavioral science research. Although ANOVA was intended for use with experimental designs, educational researchers have used ANOVA extensively in aptitude-treatment interaction (ATI) research. This practice tends to make researchers…
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 7 2010-07-01 2010-07-01 false Variances. 1920.2 Section 1920.2 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR (CONTINUED) PROCEDURE FOR VARIATIONS FROM SAFETY AND HEALTH REGULATIONS UNDER THE LONGSHOREMEN'S AND HARBOR...
Code of Federal Regulations, 2011 CFR
2011-04-01
... Dockets Management, except for information regarded as confidential under section 537(e) of the act. (d... Management (HFA-305), Food and Drug Administration, 5630 Fishers Lane, rm. 1061, Rockville, MD 20852. (1) The application for variance shall include the following information: (i) A description of the product and...
Formative Use of Intuitive Analysis of Variance
ERIC Educational Resources Information Center
Trumpower, David L.
2013-01-01
Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, students' IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In both…
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Variances. 1021.343 Section 1021.343 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) NATIONAL ENVIRONMENTAL POLICY ACT IMPLEMENTING PROCEDURES Implementing... arrangements for emergency actions having significant environmental impacts. DOE shall document,...
Code of Federal Regulations, 2010 CFR
2010-04-01
... the study was conducted in compliance with the good laboratory practice regulations set forth in part... application for variance shall include the following information: (i) A description of the product and its... equipment, the proposed location of each unit. (viii) Such other information required by regulation or...
Parameterization of Incident and Infragravity Swash Variance
NASA Astrophysics Data System (ADS)
Stockdon, H. F.; Holman, R. A.; Sallenger, A. H.
2002-12-01
By clearly defining the forcing and morphologic controls of swash variance in both the incident and infragravity frequency bands, we are able to derive a more complete parameterization for extreme runup that may be applicable to a wide range of beach and wave conditions. It is expected that the dynamics of the incident and infragravity bands will have different dependencies on offshore wave conditions and local beach slopes. For example, previous studies have shown that swash variance in the incident band depends on foreshore beach slope while the infragravity variance depends more on a weighted mean slope across the surf zone. Because the physics of each band is parameterized differently, the amount that each frequency band contributes to the total swash variance will vary from site to site and, often, at a single site as the profile configuration changes over time. Using water level time series (measured at the shoreline) collected during nine dynamically different field experiments, we test the expected behavior of both incident and infragravity swash and the contribution each makes to total variance. At the dissipative sites (Iribarren number ξ0 < 0.3) located in Oregon and the Netherlands, the incident band swash is saturated with respect to offshore wave height. Conversely, on the intermediate and reflective beaches, the amplitudes of both incident and infragravity swash variance grow with increasing offshore wave height. While infragravity band swash at all sites appears to increase linearly with offshore wave height, the magnitudes of the response are somewhat greater on reflective beaches than on dissipative beaches. This means that for the same offshore wave conditions the swash on a steeper foreshore will be larger than that on a more gently sloping foreshore. The potential control of the surf zone slope on infragravity band swash is examined at Duck, North Carolina (0.3 < ξ0 < 4.0), where significant differences in the relationship between swash
42 CFR 456.525 - Request for renewal of variance.
Code of Federal Regulations, 2010 CFR
2010-10-01
... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from...
42 CFR 456.525 - Request for renewal of variance.
Code of Federal Regulations, 2011 CFR
2011-10-01
... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from...
42 CFR 456.521 - Conditions for granting variance requests.
Code of Federal Regulations, 2010 CFR
2010-10-01
... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from...
42 CFR 456.521 - Conditions for granting variance requests.
Code of Federal Regulations, 2011 CFR
2011-10-01
... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from...
GR uniqueness and deformations
NASA Astrophysics Data System (ADS)
Krasnov, Kirill
2015-10-01
In the metric formulation gravitons are described with the parity-symmetric S²₊ ⊗ S²₋ representation of the Lorentz group. General Relativity is then the unique theory of interacting gravitons with second-order field equations. We show that if a chiral S³₊ ⊗ S₋ representation is used instead, the uniqueness is lost, and there is an infinite-parametric family of theories of interacting gravitons with second-order field equations. We use the language of graviton scattering amplitudes, and show how the uniqueness of GR is avoided using simple dimensional analysis. The resulting gravity theories, all distinct from GR, are parity asymmetric, but share the GR MHV amplitudes. They have new all-same-helicity graviton scattering amplitudes at every graviton order. The amplitudes with at least one graviton of opposite helicity continue to be determined by the BCFW recursion.
Modality-Driven Classification and Visualization of Ensemble Variance
Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.
2016-10-01
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
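A rough illustration of modality-based classification, assuming a simple kernel-density peak count; this is a generic stand-in for the idea, not the authors' scheme or their confidence metrics:

```python
import numpy as np

def count_modes(samples, grid_size=256):
    """Rough modality estimate for one location's ensemble of values:
    count local maxima of a Gaussian kernel density estimate evaluated
    on a regular grid. Bandwidth follows Scott's rule of thumb."""
    s = np.asarray(samples, float)
    bw = s.std() * len(s) ** (-1.0 / 5.0)        # Scott's rule
    pad = 3.0 * bw
    grid = np.linspace(s.min() - pad, s.max() + pad, grid_size)
    # Gaussian KDE evaluated on the grid (unnormalized is fine for peaks)
    density = np.exp(
        -0.5 * ((grid[:, None] - s[None, :]) / bw) ** 2
    ).sum(axis=1)
    interior = density[1:-1]
    peaks = (interior > density[:-2]) & (interior > density[2:])
    return int(peaks.sum())

rng = np.random.default_rng(0)
# Two hypothetical high-variance locations: one unimodal, one with
# divergent ensemble trends (bimodal)
unimodal = rng.normal(0.0, 1.0, 2000)
bimodal = np.concatenate([rng.normal(-4.0, 1.0, 1000),
                          rng.normal(4.0, 1.0, 1000)])
```

Both distributions can have similar summary variance, yet the peak count separates the single-tendency case from the divergent one, which is the point the abstract makes against variance-only visualization.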
Selection and genetic (co)variance in bighorn sheep.
Coltman, David W; O'Donoghue, Paul; Hogg, John T; Festa-Bianchet, Marco
2005-06-01
Genetic theory predicts that directional selection should deplete additive genetic variance for traits closely related to fitness, and may favor the maintenance of alleles with antagonistically pleiotropic effects on fitness-related traits. Trait heritability is therefore expected to decline with the degree of association with fitness, and some genetic correlations between selected traits are expected to be negative. Here we demonstrate a negative relationship between trait heritability and association with lifetime reproductive success in a wild population of bighorn sheep (Ovis canadensis) at Ram Mountain, Alberta, Canada. Lower heritability for fitness-related traits, however, was not wholly a consequence of declining genetic variance, because those traits showed high levels of residual variance. Genetic correlations estimated between pairs of traits with significant heritability were positive. Principal component analyses suggest that positive relationships between morphometric traits constitute the main axis of genetic variation. Trade-offs in the form of negative genetic or phenotypic correlations among the traits we have measured do not appear to constrain the potential for evolution in this population.
Dynamic Programming Using Polar Variance for Image Segmentation.
Rosado-Toro, Jose A; Altbach, Maria I; Rodriguez, Jeffrey J
2016-10-06
When using polar dynamic programming (PDP) for image segmentation, the object size is one of the main features used. This is because if size is left unconstrained the final segmentation may include high-gradient regions that are not associated with the object. In this paper, we propose a new feature, polar variance, which allows the algorithm to segment objects of different sizes without the need for training data. The polar variance is the variance in a polar region between a user-selected origin and a pixel we want to analyze. We also incorporate a new technique that allows PDP to segment complex shapes by finding low-gradient regions and growing them. The experimental analysis compared our technique with several active contour segmentation techniques in a series of tests: robustness to additive Gaussian noise, segmentation accuracy on different grayscale images, and robustness to algorithm-specific parameters. Experimental results show that our technique performs favorably when compared to other segmentation techniques.
Replica approach to mean-variance portfolio optimization
NASA Astrophysics Data System (ADS)
Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre
2016-12-01
We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. At the critical point r = 1 a phase transition takes place. The out-of-sample estimation error blows up at this point as 1/(1 − r), independently of the covariance matrix or the expected return, displaying the universality not only of the critical exponent but also of the critical point. As a conspicuous illustration of the dangers of in-sample estimates, the optimal in-sample variance is found to vanish at the critical point, inversely proportionally to the divergent estimation error.
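Setting the replica machinery aside, the underlying optimization problem is the classical one: minimize portfolio variance subject to the budget and expected-return constraints. A minimal sketch that solves the equality-constrained KKT system directly (the covariance matrix and returns are illustrative):

```python
import numpy as np

def min_variance_portfolio(cov, mu, target_return):
    """Minimize w' C w subject to sum(w) = 1 and mu' w = target_return,
    via the KKT linear system for the two equality constraints."""
    n = len(mu)
    ones = np.ones(n)
    # KKT matrix: [[2C, 1, mu], [1', 0, 0], [mu', 0, 0]]
    kkt = np.zeros((n + 2, n + 2))
    kkt[:n, :n] = 2.0 * cov
    kkt[:n, n] = ones
    kkt[:n, n + 1] = mu
    kkt[n, :n] = ones
    kkt[n + 1, :n] = mu
    rhs = np.zeros(n + 2)
    rhs[n] = 1.0
    rhs[n + 1] = target_return
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n]            # optimal weights (multipliers discarded)

# Hypothetical three-asset example
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.06]])
mu = np.array([0.05, 0.10, 0.07])
w = min_variance_portfolio(cov, mu, 0.07)
```

The paper's point concerns what happens when `cov` is not known but estimated from T observations of N assets: as r = N/T approaches 1 the estimated matrix becomes singular and the out-of-sample error of these weights diverges.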
Creativity and technical innovation: spatial ability's unique role.
Kell, Harrison J; Lubinski, David; Benbow, Camilla P; Steiger, James H
2013-09-01
In the late 1970s, 563 intellectually talented 13-year-olds (identified by the SAT as in the top 0.5% of ability) were assessed on spatial ability. More than 30 years later, the present study evaluated whether spatial ability provided incremental validity (beyond the SAT's mathematical and verbal reasoning subtests) for differentially predicting which of these individuals had patents and three classes of refereed publications. A two-step discriminant-function analysis revealed that the SAT subtests jointly accounted for 10.8% of the variance among these outcomes (p < .01); when spatial ability was added, an additional 7.6% was accounted for--a statistically significant increase (p < .01). The findings indicate that spatial ability has a unique role in the development of creativity, beyond the roles played by the abilities traditionally measured in educational selection, counseling, and industrial-organizational psychology. Spatial ability plays a key and unique role in structuring many important psychological phenomena and should be examined more broadly across the applied and basic psychological sciences.
Analysis of variance of microarray data.
Ayroles, Julien F; Gibson, Greg
2006-01-01
Analysis of variance (ANOVA) is an approach used to identify differentially expressed genes in complex experimental designs. It is based on testing for the significance of the magnitude of effect of two or more treatments taking into account the variance within and between treatment classes. ANOVA is a highly flexible analytical approach that allows investigators to simultaneously assess the contributions of multiple factors to gene expression variation, including technical (dye, batch) effects and biological (sex, genotype, drug, time) ones, as well as interactions between factors. This chapter provides an overview of the theory of linear mixed modeling and the sequence of steps involved in fitting gene-specific models and discusses essential features of experimental design. Commercial and open-source software for performing ANOVA is widely available.
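As a concrete illustration, the per-gene test in the simplest case reduces to the classical one-way ANOVA F statistic; a minimal sketch for a single gene and a single factor, without the mixed and multi-factor effects the chapter covers:

```python
import numpy as np

def anova_f(groups):
    """One-way ANOVA F statistic for one gene's expression values,
    grouped by treatment class: ratio of between-class to within-class
    mean squares."""
    groups = [np.asarray(g, float) for g in groups]
    all_vals = np.concatenate(groups)
    grand = all_vals.mean()
    k = len(groups)
    n = len(all_vals)
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w)
```

Identical class means give F = 0, while a large mean shift relative to within-class scatter gives a large F; in practice this statistic is computed gene by gene and compared against an F (or permutation) null distribution.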
Analysis of Variance of Multiply Imputed Data.
van Ginkel, Joost R; Kroonenberg, Pieter M
2014-01-01
As a procedure for handling missing data, multiple imputation consists of estimating the missing data multiple times to create several complete versions of an incomplete data set. All these data sets are analyzed by the same statistical procedure, and the results are pooled for interpretation. So far, no explicit rules for pooling F-tests of (repeated-measures) analysis of variance have been defined. In this paper we outline the appropriate procedure for pooling the results of analysis of variance for multiply imputed data sets. It involves both reformulation of the ANOVA model as a regression model using effect coding of the predictors and applying already existing combination rules for regression models. The proposed procedure is illustrated using three example data sets. The pooled results of these three examples provide plausible F- and p-values.
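The existing combination rules for regression models mentioned above are essentially Rubin's rules; a minimal sketch for pooling one parameter across m imputed-data analyses (the per-imputation numbers are illustrative):

```python
import numpy as np

def pool_estimates(estimates, variances):
    """Rubin's rules for m imputed-data analyses of a single parameter.

    Returns the pooled estimate, its total variance, and the relative
    increase in variance due to nonresponse.
    """
    q = np.asarray(estimates, float)      # per-imputation estimates
    u = np.asarray(variances, float)      # per-imputation variances
    m = len(q)
    q_bar = q.mean()                      # pooled point estimate
    u_bar = u.mean()                      # within-imputation variance
    b = q.var(ddof=1)                     # between-imputation variance
    t = u_bar + (1 + 1 / m) * b           # total variance
    r = (1 + 1 / m) * b / u_bar           # relative variance increase
    return q_bar, t, r

# Hypothetical coefficient estimated on m = 5 imputed data sets
q_bar, t, r = pool_estimates([1.9, 2.1, 2.0, 2.2, 1.8],
                             [0.25, 0.30, 0.28, 0.26, 0.31])
```

Pooling F-tests is harder than this scalar case precisely because an F statistic is not a parameter estimate with a variance, which is why the paper routes ANOVA through effect-coded regression first.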
ERIC Educational Resources Information Center
Goble, Don
2009-01-01
This article describes the many learning opportunities that broadcast technology students at Ladue Horton Watkins High School in St. Louis, Missouri, experience because of their unique access to technology and methods of learning. Through scaffolding, stepladder techniques, and trial by fire, students learn to produce multiple television programs,…
Karmakar, Bibha; Malkin, Ida; Kobyliansky, Eugene
2012-01-01
Dermatoglyphic traits in a sample of twins were analyzed to estimate the resemblance between MZ and DZ twins and to evaluate the mode of inheritance by using maximum likelihood-based variance decomposition analysis. The additive genetic variance component was significant in both sexes for four traits: PII, AB_RC, RC_HB, and ATD_L. AB_RC and RC_HB had significant sex differences in means, whereas PII and ATD_L did not. The results of the bivariate variance decomposition analysis revealed that PII and RC_HB have a significant correlation in both the genetic and residual components. A significant correlation in the additive genetic variance between AB_RC and ATD_L was also observed. The same analysis, applied to the female subsample only for the three traits RBL, RBR, and AB_DIS, showed that the RBR additive genetic component was significant, the AB_DIS sibling component was not, and the remaining components could not be constrained to zero. The additive, sibling, and residual components were all significantly correlated between each pair of traits in the bivariate variance decomposition analysis.
Directional variance analysis of annual rings
NASA Astrophysics Data System (ADS)
Kumpulainen, P.; Marjanen, K.
2010-07-01
Wood quality measurement methods are of increasing importance in the wood industry, where the goal is to produce more high-quality products with higher market value than today. One of the key factors in increasing that value is to provide better measurements, and thus more information, to support the decisions made later in the product chain. Strength and stiffness are important properties of wood. They are related to the mean annual ring width and its deviation. These indicators can be estimated from images of the log ends by two-dimensional power spectrum analysis. Spectrum analysis has been used successfully for images of pine. However, the annual rings in birch, for example, are less distinguishable, and the basic spectrum analysis method does not give reliable results. A novel method for local log-end variance analysis based on the Radon transform is proposed. The directions and positions of the annual rings can be estimated from local minimum and maximum variance estimates. Applying the spectrum analysis to the maximum local variance estimate instead of the original image produces a more reliable estimate of the annual ring width. The proposed method is not limited to log-end analysis only; it is usable in other two-dimensional random signal and texture analysis tasks.
Variance and skewness in the FIRST survey
NASA Astrophysics Data System (ADS)
Magliocchetti, M.; Maddox, S. J.; Lahav, O.; Wall, J. V.
1998-10-01
We investigate the large-scale clustering of radio sources in the FIRST 1.4-GHz survey by analysing the distribution function (counts in cells). We select a reliable sample from the FIRST catalogue, paying particular attention to the problem of how to define single radio sources from the multiple components listed. We also consider the incompleteness of the catalogue. We estimate the angular two-point correlation function w(θ), the variance Ψ₂, and skewness Ψ₃ of the distribution for the various subsamples chosen on different criteria. Both w(θ) and Ψ₂ show power-law behaviour with an amplitude corresponding to a spatial correlation length of r₀ ≈ 10 h⁻¹ Mpc. We detect significant skewness in the distribution, the first such detection in radio surveys. This skewness is found to be related to the variance through Ψ₃ = S₃(Ψ₂)^α, with α = 1.9 ± 0.1, consistent with the non-linear gravitational growth of perturbations from primordial Gaussian initial conditions. We show that the amplitudes of the variance and skewness are consistent with realistic models of galaxy clustering.
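Counts-in-cells moments are usually shot-noise corrected before comparison with clustering models; a minimal sketch of one common estimator, assuming Poisson sampling of the underlying density field (a generic estimator, not necessarily the exact one used in the paper):

```python
import numpy as np

def cell_moments(counts):
    """Shot-noise-corrected variance and skewness of the density
    fluctuations, estimated from source counts in equal-area cells."""
    n = np.asarray(counts, float)
    nbar = n.mean()
    m2 = np.mean((n - nbar) ** 2)     # raw second central moment
    m3 = np.mean((n - nbar) ** 3)     # raw third central moment
    # Subtract the Poisson (shot-noise) contributions
    psi2 = m2 / nbar ** 2 - 1.0 / nbar
    psi3 = m3 / nbar ** 3 - 3.0 * psi2 / nbar - 1.0 / nbar ** 2
    return psi2, psi3

rng = np.random.default_rng(42)
# Pure Poisson counts: the clustering moments should be near zero
psi2, psi3 = cell_moments(rng.poisson(50.0, size=200000))
```

For an unclustered catalogue both corrected moments vanish up to sampling noise, so any significant residual Ψ₂ or Ψ₃, as reported for FIRST, reflects genuine clustering rather than discreteness.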
Hypothesis exploration with visualization of variance
2014-01-01
Background The Consortium for Neuropsychiatric Phenomics (CNP) at UCLA was an investigation into the biological bases of traits such as memory and response inhibition phenotypes—to explore whether they are linked to syndromes including ADHD, Bipolar disorder, and Schizophrenia. An aim of the consortium was to move from traditional categorical approaches for psychiatric syndromes towards more quantitative approaches based on large-scale analysis of the space of human variation. It represented an application of phenomics—wide-scale, systematic study of phenotypes—to neuropsychiatry research. Results This paper reports on a system for exploration of hypotheses in data obtained from the LA2K, LA3C, and LA5C studies in CNP. ViVA is a system for exploratory data analysis using novel mathematical models and methods for visualization of variance. An example of these methods is called VISOVA, a combination of visualization and analysis of variance, with the flavor of exploration associated with ANOVA in biomedical hypothesis generation. It permits visual identification of phenotype profiles—patterns of values across phenotypes—that characterize groups. Visualization enables screening and refinement of hypotheses about variance structure of sets of phenotypes. Conclusions The ViVA system was designed for exploration of neuropsychiatric hypotheses by interdisciplinary teams. Automated visualization in ViVA supports ‘natural selection’ on a pool of hypotheses, and permits deeper understanding of the statistical architecture of the data. Large-scale perspective of this kind could lead to better neuropsychiatric diagnostics. PMID:25097666
Genetic and environmental heterogeneity of residual variance of weight traits in Nellore beef cattle
2012-01-01
Background Many studies have provided evidence of the existence of genetic heterogeneity of environmental variance, suggesting that it could be exploited to improve robustness and uniformity of livestock by selection. However, little is known about the perspectives of such a selection strategy in beef cattle. Methods A two-step approach was applied to study the genetic heterogeneity of residual variance of weight gain from birth to weaning and long-yearling weight in a Nellore beef cattle population. First, an animal model was fitted to the data and second, the influence of additive and environmental effects on the residual variance of these traits was investigated with different models, in which the log squared estimated residuals for each phenotypic record were analyzed using the restricted maximum likelihood method. Monte Carlo simulation was performed to assess the reliability of variance component estimates from the second step and the accuracy of estimated breeding values for residual variation. Results The results suggest that both genetic and environmental factors have an effect on the residual variance of weight gain from birth to weaning and long-yearling in Nellore beef cattle and that uniformity of these traits could be improved by selecting for lower residual variance, when considering a large amount of information to predict genetic merit for this criterion. Simulations suggested that using the two-step approach would lead to biased estimates of variance components, such that more adequate methods are needed to study the genetic heterogeneity of residual variance in beef cattle. PMID:22672564
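The two-step approach described above can be illustrated schematically: fit a mean model, then regress the log squared residuals on covariates assumed to affect the variance. A minimal OLS sketch with simulated data (the actual analyses used animal models and REML, and the covariate here is a made-up group code):

```python
import numpy as np

def two_step_residual_variance(x_mean, x_var, y):
    """Two-step sketch of genetic heterogeneity of residual variance.

    Step 1: fit the mean model by ordinary least squares.
    Step 2: regress log squared residuals on covariates thought to
            affect the residual variance.
    x_mean : design matrix for the mean (intercept column included)
    x_var  : design matrix for the log residual variance
    y      : phenotypic records
    Returns the coefficients of the log-variance regression.
    """
    beta, *_ = np.linalg.lstsq(x_mean, y, rcond=None)
    resid = y - x_mean @ beta
    z = np.log(resid ** 2 + 1e-12)    # small offset guards against log(0)
    gamma, *_ = np.linalg.lstsq(x_var, z, rcond=None)
    return gamma

# Records whose residual standard deviation depends on a group code g:
# sd = 1 for g = 0 and sd = e^0.5 for g = 1, so the true difference in
# log variance between groups is 1.0.
rng = np.random.default_rng(1)
n = 5000
g = rng.integers(0, 2, n).astype(float)
x = np.column_stack([np.ones(n), g])
y = 10.0 + 2.0 * g + rng.normal(0.0, np.exp(0.5 * g), n)
gamma = two_step_residual_variance(x, x, y)
```

The slope `gamma[1]` recovers the simulated log-variance difference of about 1.0; the simulation study in the paper makes the point that this simple two-step scheme, unlike a joint model, yields biased variance component estimates in more realistic settings.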
Variance Reduction Factor of Nuclear Data for Integral Neutronics Parameters
Chiba, G.; Tsuji, M.; Narabayashi, T.
2015-01-15
We propose a new quantity, a variance reduction factor, to identify nuclear data for which further improvements are required to reduce uncertainties of target integral neutronics parameters. Important energy ranges can also be identified with this variance reduction factor. Variance reduction factors are calculated for several integral neutronics parameters. The usefulness of the variance reduction factors is demonstrated.
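The abstract does not give the paper's precise definition, so the following is only a plausible sandwich-rule sketch: the sensitivities, the covariance matrix, and the "halve one datum's uncertainty" reduction rule are all assumed for illustration:

```python
import numpy as np

# Sandwich rule: var(R) = S @ C @ S.T for an integral parameter R with
# sensitivity (row) vector S to nuclear data with covariance matrix C.
S = np.array([[0.8, 0.3, 0.1]])        # hypothetical sensitivities
C = np.diag([0.04, 0.09, 0.01])        # hypothetical data variances

var_total = float(S @ C @ S.T)

# Illustrative reduction rule: halve one datum's standard deviation and
# take the ratio of the resulting parameter variance to the original.
factors = []
for i in range(C.shape[0]):
    C_red = C.copy()
    C_red[i, i] *= 0.25                # (sigma / 2)**2
    factors.append(float(S @ C_red @ S.T) / var_total)
print(factors)
```

Data whose factor is far below 1 dominate the target uncertainty and are the candidates for further evaluation effort.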
Applications of non-parametric statistics and analysis of variance on sample variances
NASA Technical Reports Server (NTRS)
Myers, R. H.
1981-01-01
Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to offer recommendations as to when each method is applicable, and to compare the methods, where possible, with the usual normal-theory procedures available for the Gaussian analog. It is important here to point out the hypotheses being tested, the assumptions being made, and the limitations of the nonparametric procedures. The appropriateness of performing analysis of variance on sample variances is also discussed and studied; this procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.
Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans
NASA Astrophysics Data System (ADS)
Raju, C.; Vidya, R.
2016-06-01
In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1,c2) are studied. It is assumed that the inspection is rejection rectification. Procedures for designing MDS-1(c1,c2) sampling plans with minimum variance of OQ and TI are developed. A procedure for obtaining a plan for a designated upper limit for the variance of the OQ (VOQL) is outlined.
Event segmentation ability uniquely predicts event memory.
Sargent, Jesse Q; Zacks, Jeffrey M; Hambrick, David Z; Zacks, Rose T; Kurby, Christopher A; Bailey, Heather R; Eisenberg, Michelle L; Beck, Taylor M
2013-11-01
Memory for everyday events plays a central role in tasks of daily living, autobiographical memory, and planning. Event memory depends in part on segmenting ongoing activity into meaningful units. This study examined the relationship between event segmentation and memory in a lifespan sample to answer the following question: Is the ability to segment activity into meaningful events a unique predictor of subsequent memory, or is the relationship between event perception and memory accounted for by general cognitive abilities? Two hundred and eight adults ranging from 20 to 79 years old segmented movies of everyday events and attempted to remember the events afterwards. They also completed psychometric ability tests and tests measuring script knowledge for everyday events. Event segmentation and script knowledge both explained unique variance in event memory above and beyond the psychometric measures, and did so as strongly in older as in younger adults. These results suggest that event segmentation is a basic cognitive mechanism, important for memory across the lifespan.
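"Explained unique variance above and beyond the psychometric measures" is typically assessed as an incremental R² in hierarchical regression. A minimal sketch on simulated data (the variable names and coefficients are invented, not the study's):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(42)
n = 208
ability = rng.normal(size=n)                       # psychometric composite
segmentation = 0.5 * ability + rng.normal(size=n)  # correlated predictor
memory = 0.6 * ability + 0.4 * segmentation + rng.normal(size=n)

r2_base = r_squared(ability[:, None], memory)
r2_full = r_squared(np.column_stack([ability, segmentation]), memory)
print(f"unique variance explained by segmentation: {r2_full - r2_base:.3f}")
```

The difference r2_full − r2_base is the variance in memory that segmentation ability explains after the ability covariate has taken its share.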
Uniquely human social cognition.
Saxe, Rebecca
2006-04-01
Recent data identify distinct components of social cognition associated with five brain regions. In posterior temporal cortex, the extrastriate body area is associated with perceiving the form of other human bodies. A nearby region in the posterior superior temporal sulcus is involved in interpreting the motions of a human body in terms of goals. A distinct region at the temporo-parietal junction supports the uniquely human ability to reason about the contents of mental states. Medial prefrontal cortex is divided into at least two subregions. Ventral medial prefrontal cortex is implicated in emotional empathy, whereas dorsal medial prefrontal cortex is implicated in the uniquely human representation of triadic relations between two minds and an object, supporting shared attention and collaborative goals.
Age-dependent genetic variance in a life-history trait in the mute swan.
Charmantier, Anne; Perrins, Christopher; McCleery, Robin H; Sheldon, Ben C
2006-01-22
Genetic variance in characters under natural selection in natural populations determines the way those populations respond to that selection. Whether populations show temporal and/or spatial constancy in patterns of genetic variance and covariance is regularly considered, as this will determine whether selection responses are constant over space and time. Much less often considered is whether characters show differing amounts of genetic variance over the life-history of individuals. Such age-specific variation, if present, has important potential consequences for the force of natural selection and for understanding the causes of variation in quantitative characters. Using data from a long-term study of the mute swan Cygnus olor, we report the partitioning of phenotypic variance in timing of breeding (subject to strong natural selection) into component parts over 12 different age classes. We show that the additive genetic variance and heritability of this trait are strongly age-dependent, with higher additive genetic variance present in young and, particularly, old birds, but little evidence of any genetic variance for birds of intermediate ages. These results demonstrate that age can have a very important influence on the components of variation of characters in natural populations, and consequently that separate age classes cannot be assumed to be equivalent, either with respect to their evolutionary potential or response.
FMRI group analysis combining effect estimates and their variances
Chen, Gang; Saad, Ziad S.; Nath, Audrey R.; Beauchamp, Michael S.; Cox, Robert W.
2012-01-01
Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an approach that does not make such strong assumptions, and present a computationally efficient frequentist approach to FMRI group analysis, which we term mixed-effects multilevel analysis (MEMA), that incorporates both the variability across subjects and the precision estimate of each effect of interest from individual subject analyses. On average, the more accurate tests result in higher statistical power, especially when conventional variance assumptions do not hold, or in the presence of outliers. In addition, various heterogeneity measures are available with MEMA that may assist the investigator in further improving the modeling. Our method allows group effect t-tests and comparisons among conditions and among groups. In addition, it has the capability to incorporate subject-specific covariates such as age, IQ, or behavioral data. Simulations were performed to illustrate power comparisons and the capability of controlling type I errors among various significance testing methods, and the results indicated that the testing statistic we adopted struck a good balance between power gain and type I error control. Our approach is instantiated in an open-source, freely distributed program that may be used on any dataset stored in the Neuroimaging Informatics Technology Initiative (NIfTI) format. To date, the main impediment for more accurate testing that incorporates both within- and cross-subject variability has been the high computational cost. Our efficient implementation makes this approach
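The core precision-weighting idea can be sketched as a fixed-effects inverse-variance combination of subject-level estimates. MEMA itself additionally estimates the cross-subject variance component and handles outliers, which this toy (with invented numbers) omits:

```python
import numpy as np

rng = np.random.default_rng(7)
n_subj = 20
# Per-subject effect estimates with heterogeneous within-subject variances
var_within = rng.uniform(0.05, 0.5, n_subj)
beta = 0.5 + rng.normal(scale=np.sqrt(var_within + 0.04))  # 0.04: cross-subject var

# Fixed-effects (precision-weighted) combination of the subject-level effects:
# precise subjects count for more, instead of every subject counting equally.
w = 1.0 / var_within
beta_group = np.sum(w * beta) / np.sum(w)
se_group = np.sqrt(1.0 / np.sum(w))
print(f"group effect = {beta_group:.3f} (SE {se_group:.3f})")
```

Conventional group analysis corresponds to setting all weights equal, which is only justified when the within-subject variances are homogeneous or negligible.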
Visual SLAM Using Variance Grid Maps
NASA Technical Reports Server (NTRS)
Howard, Andrew B.; Marks, Tim K.
2011-01-01
An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
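A per-cell elevation variance can be maintained online with Welford's algorithm as stereo elevation samples arrive. The class and its interface below are illustrative, not the Gamma-SLAM code:

```python
import numpy as np

class VarianceGrid:
    """Per-cell running mean/variance of elevation samples (Welford's algorithm)."""
    def __init__(self, shape):
        self.n = np.zeros(shape)
        self.mean = np.zeros(shape)
        self.m2 = np.zeros(shape)      # sum of squared deviations

    def update(self, row, col, elevation):
        self.n[row, col] += 1
        delta = elevation - self.mean[row, col]
        self.mean[row, col] += delta / self.n[row, col]
        self.m2[row, col] += delta * (elevation - self.mean[row, col])

    def variance(self, row, col):
        n = self.n[row, col]
        return self.m2[row, col] / (n - 1) if n > 1 else 0.0

grid = VarianceGrid((10, 10))
for z in [1.0, 1.2, 0.8, 1.1]:   # stereo elevation samples falling in one cell
    grid.update(3, 4, z)
print(grid.variance(3, 4))
```

Rough terrain (vegetation, rocks) yields high per-cell variance while flat ground yields low variance, which is the texture the particle filter can match observations against.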
The defect variance of random spherical harmonics
NASA Astrophysics Data System (ADS)
Marinucci, Domenico; Wigman, Igor
2011-09-01
The defect of a function f : M → ℝ is defined as the difference between the measure of its positive and negative regions. In this paper, we begin the analysis of the distribution of the defect of random Gaussian spherical harmonics. By an easy argument, the defect is non-trivial only for even degree, and the expected value always vanishes. Our principal result evaluates the defect variance, asymptotically in the high-frequency limit. As with other geometric functionals of random eigenfunctions, the defect may be used as a tool to probe the statistical properties of spherical random fields, a topic of great interest for modern cosmological data analysis.
NASA's unique networking environment
NASA Technical Reports Server (NTRS)
Johnson, Marjory J.
1988-01-01
Networking is an infrastructure technology; it is a tool for NASA to support its space and aeronautics missions. Some of NASA's networking problems are shared by the commercial and/or military communities, and can be solved by working with these communities. However, some of NASA's networking problems are unique and will not be addressed by these other communities. Individual characteristics of NASA's space-mission networking environment are examined, the combination of all these characteristics that distinguishes NASA's networking systems from either commercial or military systems is explained, and some research areas that are important for NASA to pursue are outlined.
River meanders - Theory of minimum variance
Langbein, Walter Basil; Leopold, Luna Bergere
1966-01-01
Meanders are the result of erosion-deposition processes tending toward the most stable form in which the variability of certain essential properties is minimized. This minimization involves the adjustment of the planimetric geometry and the hydraulic factors of depth, velocity, and local slope. The planimetric geometry of a meander is that of a random walk whose most frequent form minimizes the sum of the squares of the changes in direction in each successive unit length. The direction angles are then sine functions of channel distance. This yields a meander shape typically present in meandering rivers and has the characteristic that the ratio of meander length to average radius of curvature in the bend is 4.7. Depth, velocity, and slope are shown by field observations to be adjusted so as to decrease the variance of shear and the friction factor in a meander curve relative to an otherwise comparable straight reach of the same river. Since theory and observation indicate meanders achieve the minimum variance postulated, it follows that for channels in which alternating pools and riffles occur, meandering is the most probable form of channel geometry and is thus a more stable geometry than a straight or nonmeandering alignment.
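The "random walk whose most frequent form minimizes the sum of squared direction changes" is the sine-generated curve. A sketch of generating one meander wavelength (the parameter values are illustrative):

```python
import numpy as np

# Sine-generated curve: the direction angle theta is a sine function of
# distance s along the channel, theta(s) = omega * sin(2*pi*s / M).
omega = np.deg2rad(110)   # maximum angle between channel and valley axis
M = 100.0                 # meander wavelength measured along the channel
s = np.linspace(0.0, M, 1001)
theta = omega * np.sin(2 * np.pi * s / M)

ds = s[1] - s[0]
x = np.cumsum(np.cos(theta)) * ds   # planform coordinates of the channel
y = np.cumsum(np.sin(theta)) * ds
# (x, y) traces one meander wavelength; net lateral displacement is ~0
```

Varying omega changes the sinuosity of the loop; the paper reports a length-to-average-radius-of-curvature ratio of 4.7 for typical meanders.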
Variance and Skewness in the FIRST Survey
NASA Astrophysics Data System (ADS)
Magliocchetti, M.; Maddox, S. J.; Lahav, O.; Wall, J. V.
We investigate the large-scale clustering of radio sources by analysing the distribution function of the FIRST 1.4 GHz survey. We select a reliable galaxy sample from the FIRST catalogue, paying particular attention to the definition of single radio sources from the multiple components listed in the FIRST catalogue. We estimate the variance, Ψ2, and skewness, Ψ3, of the distribution function for the best galaxy subsample. Ψ2 shows power-law behaviour as a function of cell size, with an amplitude corresponding to a spatial correlation length of r0 ~ 10 h^-1 Mpc. We detect significant skewness in the distribution, and find that it is related to the variance through the relation Ψ3 = S3(Ψ2)^α with α = 1.9 ± 0.1, consistent with the non-linear growth of perturbations from primordial Gaussian initial conditions. We show that this amplitude of clustering and skewness are consistent with realistic models of galaxy clustering.
Hybrid biasing approaches for global variance reduction.
Wu, Zeyun; Abdel-Khalik, Hany S
2013-02-01
A new variant of the Monte Carlo-deterministic (DT) hybrid variance reduction approach based on Gaussian process theory is presented for accelerating convergence of Monte Carlo simulation and compared with the Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) approach implemented in the SCALE package from Oak Ridge National Laboratory. The new approach, denoted the Gaussian process approach, treats the responses of interest as normally distributed random processes. The Gaussian process approach improves the selection of the weight windows of simulated particles by identifying a subspace that captures the dominant sources of statistical response variations. Like the FW-CADIS approach, the Gaussian process approach utilizes particle importance maps obtained from deterministic adjoint models to derive weight window biasing. In contrast to the FW-CADIS approach, the Gaussian process approach identifies the response correlations (via a covariance matrix) and employs them to reduce the computational overhead required for global variance reduction (GVR) purposes. The effective rank of the covariance matrix identifies the minimum number of uncorrelated pseudo responses, which are employed to bias simulated particles. Numerical experiments, serving as a proof of principle, are presented to compare the Gaussian process and FW-CADIS approaches in terms of the global reduction in standard deviation of the estimated responses.
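Weight-window machinery is beyond a short example, but the underlying principle, biasing sampling toward the region that matters and compensating with weights, can be shown with plain importance sampling (a generic illustration, not FW-CADIS or the Gaussian process approach):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Estimate P(X > 4) for X ~ N(0, 1): analog Monte Carlo rarely scores a tally.
analog = rng.standard_normal(n)
p_analog = np.mean(analog > 4)

# Bias sampling toward the tail (draw from N(4, 1)) and compensate with
# likelihood-ratio weights: the same principle that weight windows exploit.
biased = rng.normal(4.0, 1.0, n)
weights = np.exp(-biased**2 / 2) / np.exp(-(biased - 4.0)**2 / 2)
p_is = np.mean((biased > 4) * weights)

print(p_analog, p_is)        # true value is about 3.17e-5
```

The biased estimator lands within a fraction of a percent of the true tail probability, while the analog estimator typically scores only a handful of events for the same budget.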
Abel, David L.
2011-01-01
Is life physicochemically unique? No. Is life unique? Yes. Life manifests innumerable formalisms that cannot be generated or explained by physicodynamics alone. Life pursues thousands of biofunctional goals, not the least of which is staying alive. Neither physicodynamics, nor evolution, pursue goals. Life is largely directed by linear digital programming and by the Prescriptive Information (PI) instantiated particularly into physicodynamically indeterminate nucleotide sequencing. Epigenomic controls only compound the sophistication of these formalisms. Life employs representationalism through the use of symbol systems. Life manifests autonomy, homeostasis far from equilibrium in the harshest of environments, positive and negative feedback mechanisms, prevention and correction of its own errors, and organization of its components into Sustained Functional Systems (SFS). Chance and necessity—heat agitation and the cause-and-effect determinism of nature’s orderliness—cannot spawn formalisms such as mathematics, language, symbol systems, coding, decoding, logic, organization (not to be confused with mere self-ordering), integration of circuits, computational success, and the pursuit of functionality. All of these characteristics of life are formal, not physical. PMID:25382119
Clarke, Peter; Varghese, Philip; Goldstein, David
2014-12-09
We extend a variance reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. The method is verified through comparisons to Direct Simulation Monte Carlo computations and the computational workload per time step is investigated for the variance reduced method.
An Empirical Temperature Variance Source Model in Heated Jets
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Bridges, James
2012-01-01
An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2012 CFR
2012-07-01
... its application is complete. (d) The Administrator will issue a variance if the criteria specified in... entity will achieve compliance with this subpart. (f) A variance will cease to be effective upon...
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2014 CFR
2014-07-01
... its application is complete. (d) The Administrator will issue a variance if the criteria specified in... entity will achieve compliance with this subpart. (f) A variance will cease to be effective upon...
Applications of Variance Fractal Dimension: a Survey
NASA Astrophysics Data System (ADS)
Phinyomark, Angkoon; Phukpattaranont, Pornchai; Limsakul, Chusak
2012-04-01
Chaotic dynamical systems are pervasive in nature and can be shown to be deterministic through fractal analysis. There are numerous methods that can be used to estimate the fractal dimension. Among the usual fractal estimation methods, variance fractal dimension (VFD) is one of the most significant fractal analysis methods that can be implemented for real-time systems. The basic concept and theory of VFD are presented. Recent research and the development of several applications based on VFD are reviewed and explained in detail, such as biomedical signal processing and pattern recognition, speech communication, geophysical signal analysis, power systems and communication systems. The important parameters that need to be considered in computing the VFD are discussed, including the window size and the window increment of the feature, and the step size of the VFD. Directions for future research of VFD are also briefly outlined.
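A minimal VFD estimator for a 1-D signal, assuming the usual power-law scaling var(x(t+Δt) − x(t)) ∝ Δt^(2H) and D = E + 1 − H with embedding dimension E = 1; the lag set is a simplification of the window-size/increment/step parameters discussed above:

```python
import numpy as np

def variance_fractal_dimension(x, lags=(1, 2, 4, 8, 16)):
    """Estimate the VFD of a 1-D signal: the variance of increments scales as
    var(x[t+k] - x[t]) ~ k**(2H); with embedding dimension E = 1, D = 2 - H."""
    log_k = [np.log(k) for k in lags]
    log_v = [np.log(np.var(x[k:] - x[:-k])) for k in lags]
    H = 0.5 * np.polyfit(log_k, log_v, 1)[0]   # slope / 2 gives H
    return 2.0 - H

rng = np.random.default_rng(3)
brownian = np.cumsum(rng.standard_normal(100_000))
print(variance_fractal_dimension(brownian))   # ~1.5 for Brownian motion
```

Because it needs only increment variances over a handful of lags, the computation is cheap enough for the sliding-window, real-time use cases the survey describes.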
Revell, L J; Mahler, D L; Sweeney, J R; Sobotka, M; Fancher, V E; Losos, J B
2010-02-01
The pattern of genetic variances and covariances among characters, summarized in the additive genetic variance-covariance matrix, G, determines how a population will respond to linear natural selection. However, G itself also evolves in response to selection. In particular, we expect that, over time, G will evolve to correspond with the pattern of multivariate nonlinear natural selection. In this study, we substitute the phenotypic variance-covariance matrix (P) for G to determine if the pattern of multivariate nonlinear selection in a natural population of Anolis cristatellus, an arboreal lizard from Puerto Rico, has influenced the evolution of genetic variances and covariances in this species. Although results varied among our estimates of P and fitness, and among our analytic techniques, we find significant evidence for congruence between nonlinear selection and P, suggesting that natural selection may have influenced the evolution of genetic constraint in this species.
Considering Oil Production Variance as an Indicator of Peak Production
2010-06-07
[Figure: Acquisition Cost (IRAC) oil prices; data from the EIA (http://tonto.eia.doe.gov/country/timeline/oil_chronology.cfm).] Production vs. Price – Variance Comparison: oil production variance and oil price variance have never been so far
A New Nonparametric Levene Test for Equal Variances
ERIC Educational Resources Information Center
Nordstokke, David W.; Zumbo, Bruno D.
2010-01-01
Tests of the equality of variances are sometimes used on their own to compare variability across groups of experimental or non-experimental conditions but they are most often used alongside other methods to support assumptions made about variances. A new nonparametric test of equality of variances is described and compared to current "gold…
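For comparison, the median-centered (Brown-Forsythe) variant of Levene's test, a common robust baseline, is available in SciPy; the article's rank-based nonparametric test is a different procedure, and the data below are simulated for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
group_a = rng.normal(0.0, 1.0, 100)
group_b = rng.normal(0.0, 3.0, 100)   # same mean, three times the spread

# Median-centered Levene test (Brown-Forsythe): robust to non-normality
w_stat, p_value = stats.levene(group_a, group_b, center='median')
print(f"W = {w_stat:.2f}, p = {p_value:.2g}")
```

A small p-value here flags unequal variances, either as a finding in its own right or as a warning before running variance-assuming procedures such as the classical ANOVA.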
Argentine Population Genetic Structure: Large Variance in Amerindian Contribution
Seldin, Michael F.; Tian, Chao; Shigeta, Russell; Scherbarth, Hugo R.; Silva, Gabriel; Belmont, John W.; Kittles, Rick; Gamron, Susana; Allevi, Alberto; Palatnik, Simon A.; Alvarellos, Alejandro; Paira, Sergio; Caprarulo, Cesar; Guillerón, Carolina; Catoggio, Luis J.; Prigione, Cristina; Berbotto, Guillermo A.; García, Mercedes A.; Perandones, Carlos E.; Pons-Estel, Bernardo A.; Alarcon-Riquelme, Marta E.
2011-01-01
Argentine population genetic structure was examined using a set of 78 ancestry informative markers (AIMs) to assess the contributions of European, Amerindian, and African ancestry in 94 individual members of this population. Using the Bayesian clustering algorithm STRUCTURE, the mean European contribution was 78%, the Amerindian contribution was 19.4%, and the African contribution was 2.5%. Similar results were found using the weighted least mean square method: European, 80.2%; Amerindian, 18.1%; and African, 1.7%. Consistent with previous studies, the current results showed very few individuals (four of 94) with greater than 10% African admixture. Notably, when individual admixture was examined, the Amerindian and European admixture showed a very large variance and individual Amerindian contribution ranged from 1.5 to 84.5% in the 94 individual Argentine subjects. These results indicate that admixture must be considered when clinical epidemiology or case control genetic analyses are studied in this population. Moreover, the current study provides a set of informative SNPs that can be used to ascertain or control for this potentially hidden stratification. In addition, the large variance in admixture proportions in individual Argentine subjects shown by this study suggests that this population is appropriate for future admixture mapping studies. PMID:17177183
Understanding the Unique Equatorial Density Irregularities
2015-04-01
monitoring devices. In addition, the Low Earth Orbiting (LEO) satellites' ion density observations show unique features for the African sector [Hei et al. 2005...installed in Africa [Amory-Mazaudier, et al. 2009] since 2007. Alongside this activity, universities in Africa (e.g. Bahir Dar University, Ethiopia...African sector, show unique equatorial ionospheric structure [Hei et al. 2005]. For example, this region equatorial plasma bubbles, which produce
Cyclostationary analysis with logarithmic variance stabilisation
NASA Astrophysics Data System (ADS)
Borghesani, Pietro; Shahriar, Md Rifat
2016-03-01
Second order cyclostationary (CS2) components in vibration or acoustic emission signals are typical symptoms of a wide variety of faults in rotating and alternating mechanical systems. The square envelope spectrum (SES), obtained via Hilbert transform of the original signal, is at the basis of the most common indicators used for detection of CS2 components. It has been shown that the SES is equivalent to an autocorrelation of the signal's discrete Fourier transform, and that CS2 components are a cause of high correlations in the frequency domain of the signal, thus resulting in peaks in the SES. Statistical tests have been proposed to determine if peaks in the SES are likely to belong to a normal variability in the signal or if they are proper symptoms of CS2 components. Despite the need for automated fault recognition and the theoretical soundness of these tests, this approach to machine diagnostics has been mostly neglected in industrial applications. In fact, in a series of experimental applications, even with proper pre-whitening steps, it has been found that healthy machines might produce high spectral correlations and therefore result in a highly biased SES distribution which might cause a series of false positives. In this paper a new envelope spectrum is defined, with the theoretical intent of rendering the hypothesis test variance-free. This newly proposed indicator will prove unbiased in case of multiple CS2 sources of spectral correlation, thus reducing the risk of false alarms.
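The SES itself is straightforward to compute via the Hilbert transform. The sketch below recovers a 5 Hz cyclic frequency from a synthetic amplitude-modulated carrier; the paper's actual contribution, a logarithmic variance stabilisation of this spectrum, is not reproduced here:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000
t = np.arange(fs) / fs                     # 1 s of signal, 1 Hz FFT bins
# 50 Hz carrier amplitude-modulated at 5 Hz: a CS2-like fault signature
x = (1 + 0.5 * np.cos(2 * np.pi * 5 * t)) * np.cos(2 * np.pi * 50 * t)

env_sq = np.abs(hilbert(x)) ** 2           # squared envelope
ses = np.abs(np.fft.rfft(env_sq - env_sq.mean()))
peak_hz = np.argmax(ses[1:]) + 1           # skip the residual DC bin
print(peak_hz)                             # → 5, the modulation (cyclic) frequency
```

On real vibration data the carrier is broadband noise rather than a pure tone, which is exactly where the statistical distribution of SES peaks, and hence variance stabilisation, matters.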
Correcting an analysis of variance for clustering.
Hedges, Larry V; Rhoads, Christopher H
2011-02-01
A great deal of educational and social data arises from cluster sampling designs where clusters involve schools, classrooms, or communities. A mistake that is sometimes encountered in the analysis of such data is to ignore the effect of clustering and analyse the data as if it were based on a simple random sample. This typically leads to an overstatement of the precision of results and too liberal conclusions about precision and statistical significance of mean differences. This paper gives simple corrections to the test statistics that would be computed in an analysis of variance if clustering were (incorrectly) ignored. The corrections are multiplicative factors depending on the total sample size, the cluster size, and the intraclass correlation structure. For example, the corrected F statistic has Fisher's F distribution with reduced degrees of freedom. The corrected statistic reduces to the F statistic computed by ignoring clustering when the intraclass correlations are zero. It reduces to the F statistic computed using cluster means when the intraclass correlations are unity, and it is in between otherwise. A similar adjustment to the usual statistic for testing a linear contrast among group means is described.
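A simplified version of the multiplicative correction, deflating the naive F statistic by the design effect 1 + (m − 1)ρ for balanced clusters, can be sketched as follows; the paper's corrections also adjust the degrees of freedom, which this sketch omits, and the numbers are invented:

```python
def deff_adjusted_F(F, cluster_size, icc):
    """Deflate a naive one-way ANOVA F statistic by the design effect
    1 + (m - 1) * rho. A simplified stand-in for the full multiplicative
    corrections (which also reduce the degrees of freedom)."""
    return F / (1.0 + (cluster_size - 1) * icc)

F_naive = 6.2        # computed while (incorrectly) ignoring clustering
m, rho = 25, 0.15    # cluster size and intraclass correlation
print(deff_adjusted_F(F_naive, m, rho))   # 6.2 / 4.6 ≈ 1.348
```

Note how a seemingly modest intraclass correlation of 0.15 with 25 students per classroom shrinks a "significant" F of 6.2 below conventional critical values, which is why ignoring clustering yields too-liberal conclusions.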
Estimating discharge measurement uncertainty using the interpolated variance estimator
Cohn, T.; Kiang, J.; Mason, R.
2012-01-01
Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series
Thompson, William Hedley; Fransson, Peter
2016-01-01
Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adhere to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box–Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series, where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed. PMID:27784176
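A minimal sketch of the combined pipeline on simulated data: sliding-window correlations, the Fisher transformation, then a Box-Cox step. The window length and the positivity shift are arbitrary choices here, not the paper's settings:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
# Sliding-window correlations between two simulated BOLD-like time series
a, b = rng.standard_normal((2, 600))
win = 60
r = np.array([np.corrcoef(a[i:i + win], b[i:i + win])[0, 1]
              for i in range(len(a) - win)])

z = np.arctanh(r)                        # Fisher transformation
# Follow-up Box-Cox step; Box-Cox requires positive input, hence the shift
z_bc, lam = stats.boxcox(z - z.min() + 0.01)
print(f"Box-Cox lambda = {lam:.2f}")
```

The fitted lambda adapts the second transformation to whatever skew the Fisher-transformed series still carries, which is the combination the paper finds superior to the Fisher transformation alone.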
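As a rough sketch of the combined strategy the article investigates, the following applies a Fisher transformation and then a Box-Cox transformation to a series of sliding-window correlations. The positive shift before Box-Cox and the clipping constant are assumptions of this sketch, not choices taken from the paper:

```python
import numpy as np
from scipy import stats

def stabilize(r_series):
    """Fisher-transform a sliding-window correlation series, then apply a
    Box-Cox transform to pull the result toward Gaussianity. A small shift
    is applied first because Box-Cox requires strictly positive input."""
    z = np.arctanh(np.clip(r_series, -0.999, 0.999))  # Fisher z
    shifted = z - z.min() + 1e-3                       # make positive
    bc, lam = stats.boxcox(shifted)                    # lambda fitted by MLE
    return bc, lam
```

In practice one would inspect the fitted lambda and the resulting distribution (e.g., with a normality test) before feeding the series to clustering.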
Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†
Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia
2015-01-01
Meta‐analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between‐study variability, which is typically modelled using a between‐study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between‐study variance, has long been challenged. Our aim is to identify known methods for estimation of the between‐study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between‐study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that the estimator proposed by Paule and Mandel (for both dichotomous and continuous data) and the restricted maximum likelihood estimator (for continuous data) are better alternatives for estimating the between‐study variance. Based on the scenarios and results presented in the published studies, we recommend the Q‐profile method and the alternative approach based on a ‘generalised Cochran between‐study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence‐based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
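The DerSimonian and Laird moment estimator discussed above can be written compactly. This is a minimal sketch of the standard formula only, not of the fifteen alternative estimators the review surveys:

```python
import numpy as np

def dersimonian_laird_tau2(y, v):
    """DerSimonian-Laird moment estimator of the between-study variance
    tau^2, from study effect estimates y and within-study variances v."""
    y = np.asarray(y, float)
    v = np.asarray(v, float)
    w = 1.0 / v                                # fixed-effect weights
    ybar = np.sum(w * y) / np.sum(w)           # fixed-effect pooled mean
    Q = np.sum(w * (y - ybar) ** 2)            # Cochran's Q statistic
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (k - 1)) / c)         # truncated at zero
```

The truncation at zero is one of the estimator's known weaknesses: heterogeneity estimates pile up at the boundary when Q is small.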
Milnacipran: a unique antidepressant?
Kasper, Siegfried; Pail, Gerald
2010-09-07
Tricyclic antidepressants (TCAs) are among the most effective antidepressants available, although their poor tolerance at usual recommended doses and toxicity in overdose make them difficult to use. While selective serotonin reuptake inhibitors (SSRIs) are better tolerated than TCAs, they have their own specific problems, such as the aggravation of sexual dysfunction, interaction with coadministered drugs, and for many, a discontinuation syndrome. In addition, some of them appear to be less effective than TCAs in more severely depressed patients. Increasing evidence of the importance of norepinephrine in the etiology of depression has led to the development of a new generation of antidepressants, the serotonin and norepinephrine reuptake inhibitors (SNRIs). Milnacipran, one of the pioneer SNRIs, was designed from theoretic considerations to be more effective than SSRIs and better tolerated than TCAs, and with a simple pharmacokinetic profile. Milnacipran has the most balanced potency ratio for reuptake inhibition of the two neurotransmitters compared with other SNRIs (1:1.6 for milnacipran, 1:10 for duloxetine, and 1:30 for venlafaxine), and in some studies milnacipran has been shown to inhibit norepinephrine uptake with greater potency than serotonin (2.2:1). Clinical studies have shown that milnacipran has efficacy comparable with the TCAs and is superior to SSRIs in severe depression. In addition, milnacipran is well tolerated, with a low potential for pharmacokinetic drug-drug interactions. Milnacipran is a first-line therapy suitable for most depressed patients. It is frequently successful when other treatments fail for reasons of efficacy or tolerability.
Hu, Pingsha; Maiti, Tapabrata
2011-01-01
Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, often mean and variance have certain relationships. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means assuming variances are known. Different methods have been applied to simulated datasets, in which a variety of mean and variance relationships were imposed. The simulation study showed that NPMVS outperformed the other two popular shrinkage estimation methods in some mean-variance relationships; and NPMVS was competitive with the two methods in other relationships. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, has also been analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship between mean and variance. The source code written in R is available from the authors on request.
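A minimal, numpy-only illustration of the mean-variance smoothing idea follows. The paper's actual NPMVS method fits a nonlinear curve and shrinks both means and variances; the running-median smoother and window size here are assumptions of this sketch:

```python
import numpy as np

def smooth_variance_by_mean(means, variances, window=51):
    """Non-parametric mean-variance smoothing (illustrative sketch):
    sort genes by mean expression and replace each gene's variance with a
    running median of the variances of genes with similar means."""
    order = np.argsort(means)
    v_sorted = np.asarray(variances, float)[order]
    half = window // 2
    smoothed = np.empty_like(v_sorted)
    for i in range(len(v_sorted)):
        lo, hi = max(0, i - half), min(len(v_sorted), i + half + 1)
        smoothed[i] = np.median(v_sorted[lo:hi])
    out = np.empty_like(smoothed)
    out[order] = smoothed                  # restore original gene order
    return out
```

The point of borrowing strength across genes with similar means is that per-gene variance estimates from few replicates are very noisy on their own.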
What motivates nonconformity? Uniqueness seeking blocks majority influence.
Imhoff, Roland; Erb, Hans-Peter
2009-03-01
A high need for uniqueness undermines majority influence. Need for uniqueness (a) is aroused when individuals feel indistinguishable from others and (b) motivates compensatory acts to reestablish a sense of uniqueness. Three studies demonstrate that a striving for uniqueness motivates individuals to resist majority influence. In Study 1, the need for uniqueness was measured, and it was found that individuals high in need for uniqueness yielded less to majority influence than those low in need for uniqueness. In Study 2, participants who received personality feedback undermining their feeling of uniqueness agreed less with a majority (vs. minority) position. Study 3 replicated this effect and additionally demonstrated the motivational nature of the assumed mechanism: An alternative means that allowed participants to regain a feeling of uniqueness canceled out the effect of high need for uniqueness on majority influence.
Cahyadi, Muhammad; Park, Hee-Bok; Seo, Dong-Won; Jin, Shil; Choi, Nuri; Heo, Kang-Nyeong; Kang, Bo-Seok; Jo, Cheorun; Lee, Jun-Heon
2016-01-01
Quantitative trait locus (QTL) is a particular region of the genome containing one or more genes associated with economically important quantitative traits. This study was conducted to identify QTL regions for body weight and growth traits in purebred Korean native chicken (KNC). F1 samples (n = 595) were genotyped using 127 microsatellite markers and 8 single nucleotide polymorphisms that covered 2,616.1 centimorgans (cM) of map length for 26 autosomal linkage groups. Body weight traits were measured every 2 weeks from hatch to 20 weeks of age. Weight of half carcass was also collected together with growth rate. A multipoint variance component linkage approach was used to identify QTLs for the body weight traits. Two significant QTLs for growth were identified on chicken chromosome 3 (GGA3) for growth from 16 to 18 weeks (logarithm of the odds [LOD] = 3.24, nominal p-value = 0.0001) and on GGA4 for growth from 6 to 8 weeks (LOD = 2.88, nominal p-value = 0.0003). Additionally, one significant QTL and three suggestive QTLs were detected for body weight traits in KNC; a significant QTL for body weight at 4 weeks (LOD = 2.52, nominal p-value = 0.0007) and a suggestive QTL for body weight at 8 weeks (LOD = 1.96, nominal p-value = 0.0027) were detected on GGA4; QTLs were also detected for two further body weight traits: body weight at 16 weeks on GGA3 and body weight at 18 weeks on GGA19. Additionally, two suggestive QTLs for carcass weight were detected at 0 and 70 cM on GGA19. In conclusion, the current study identified several significant and suggestive QTLs that affect growth-related traits in a unique resource pedigree in purebred KNC. This information will contribute to improving the body weight traits in native chicken breeds, especially for the Asian native chicken breeds. PMID:26732327
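The LOD scores and nominal p-values quoted above are related through the likelihood-ratio statistic. Assuming a chi-square null distribution with 1 degree of freedom (the appropriate df depends on the linkage model), the conversion is:

```python
import math
from scipy.stats import chi2

def lod_to_pvalue(lod, df=1):
    """Convert a LOD score to a nominal p-value.
    The likelihood-ratio test statistic is 2*ln(10)*LOD, taken here to be
    chi-square distributed with df degrees of freedom under the null
    (df = 1 is an assumption; it varies with the linkage model)."""
    return chi2.sf(2.0 * math.log(10.0) * lod, df)
```

With df = 1 this approximately reproduces the pairs reported in the abstract (LOD = 3.24 gives roughly p = 0.0001, LOD = 2.88 roughly p = 0.0003).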
Genetic interactions affecting human gene expression identified by variance association mapping
Brown, Andrew Anand; Buil, Alfonso; Viñuela, Ana; Lappalainen, Tuuli; Zheng, Hou-Feng; Richards, J Brent; Small, Kerrin S; Spector, Timothy D; Dermitzakis, Emmanouil T; Durbin, Richard
2014-01-01
Non-additive interaction between genetic variants, or epistasis, is a possible explanation for the gap between heritability of complex traits and the variation explained by identified genetic loci. Interactions give rise to genotype dependent variance, and therefore the identification of variance quantitative trait loci can be an intermediate step to discover both epistasis and gene by environment effects (GxE). Using RNA-sequence data from lymphoblastoid cell lines (LCLs) from the TwinsUK cohort, we identify a candidate set of 508 variance associated SNPs. Exploiting the twin design we show that GxE plays a role in ∼70% of these associations. Further investigation of these loci reveals 57 epistatic interactions that replicated in a smaller dataset, explaining on average 4.3% of phenotypic variance. In 24 cases, more variance is explained by the interaction than their additive contributions. Using molecular phenotypes in this way may provide a route to uncovering genetic interactions underlying more complex traits. DOI: http://dx.doi.org/10.7554/eLife.01381.001 PMID:24771767
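One common screen for the genotype-dependent variance described above is a test for unequal spread across genotype groups, such as the median-centred Levene (Brown-Forsythe) test. This is an illustrative stand-in, not the paper's exact variance association mapping procedure:

```python
import numpy as np
from scipy.stats import levene

def variance_qtl_test(expression, genotypes):
    """Brown-Forsythe (median-centred Levene) test for genotype-dependent
    variance in an expression trait: a simple screen for candidate
    variance-associated SNPs."""
    groups = [expression[genotypes == g] for g in np.unique(genotypes)]
    stat, p = levene(*groups, center='median')
    return stat, p
```

A SNP flagged by such a screen could reflect either epistasis or GxE, which is why the paper exploits the twin design to separate the two.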
Estimating the encounter rate variance in distance sampling
Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.
2009-01-01
The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
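One design-based form of the encounter-rate variance estimator, treating transect lines as a random sample, can be sketched as follows. The exact weighting below is an assumption of this sketch; the paper compares several variants:

```python
import numpy as np

def encounter_rate_var(counts, lengths):
    """Design-based estimator of the variance of the encounter rate n/L
    for line transects, weighting each transect's squared deviation from
    the overall encounter rate by its effort."""
    n_i = np.asarray(counts, float)    # detections per transect
    l_i = np.asarray(lengths, float)   # transect lengths (effort)
    K, L, n = len(n_i), l_i.sum(), n_i.sum()
    er = n / L                          # overall encounter rate
    return K / (L ** 2 * (K - 1)) * np.sum(l_i ** 2 * (n_i / l_i - er) ** 2)
```

The paper's warning applies directly to such formulas: under a systematic design with spatial trends, the among-transect deviations overstate the true design variance.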
Event Segmentation Ability Uniquely Predicts Event Memory
Sargent, Jesse Q.; Zacks, Jeffrey M.; Hambrick, David Z.; Zacks, Rose T.; Kurby, Christopher A.; Bailey, Heather R.; Eisenberg, Michelle L.; Beck, Taylor M.
2013-01-01
Memory for everyday events plays a central role in tasks of daily living, autobiographical memory, and planning. Event memory depends in part on segmenting ongoing activity into meaningful units. This study examined the relationship between event segmentation and memory in a lifespan sample to answer the following question: Is the ability to segment activity into meaningful events a unique predictor of subsequent memory, or is the relationship between event perception and memory accounted for by general cognitive abilities? Two hundred and eight adults ranging from 20 to 79 years old segmented movies of everyday events and attempted to remember the events afterwards. They also completed psychometric ability tests and tests measuring script knowledge for everyday events. Event segmentation and script knowledge both explained unique variance in event memory above and beyond the psychometric measures, and did so as strongly in older as in younger adults. These results suggest that event segmentation is a basic cognitive mechanism, important for memory across the lifespan. PMID:23942350
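The "unique variance" logic in this abstract is hierarchical regression: compare the R-squared of a model with and without the predictor of interest. A minimal sketch, with simulated data standing in for the segmentation and ability measures:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def unique_variance(covariates, predictor, y):
    """Variance in y explained by `predictor` beyond `covariates`:
    the Delta-R^2 behind 'unique variance' claims."""
    base = r_squared(covariates, y)
    full = r_squared(np.column_stack([covariates, predictor]), y)
    return full - base
```

A predictor that merely duplicates a covariate contributes a Delta-R^2 of zero, which is exactly the null hypothesis such analyses test against.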
Multiperiod Mean-Variance Portfolio Optimization via Market Cloning
Ankirchner, Stefan; Dermoune, Azzouz
2011-08-15
The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity, we are able to solve the original mean-variance problem.
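For intuition, the one-period building block that the cloning construction averages over has a closed-form solution from sample moments. This sketches only that single-period problem, not the multiperiod cloning argument itself:

```python
import numpy as np

def mean_variance_weights(returns, gamma):
    """Single-period weights maximizing w'mu - (gamma/2) w'S w, with mu and
    S the empirical mean and covariance of the return sample (the empirical
    counterparts the cloning construction works with).
    Closed form: w = S^{-1} mu / gamma."""
    mu = returns.mean(axis=0)
    S = np.cov(returns, rowvar=False)
    return np.linalg.solve(S, mu) / gamma
```

The first-order condition mu = gamma * S w characterizes the optimum, which gives a direct numerical check.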
Network Structure and Biased Variance Estimation in Respondent Driven Sampling.
Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J
2015-01-01
This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.
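The underestimation mechanism can be illustrated with a toy chain sample: when recruits resemble their recruiters, the sampling variance of the mean exceeds the naive iid value sigma^2/n. A two-state Markov chain stands in for the referral process here; this illustrates the mechanism only and is not an RDS estimator:

```python
import numpy as np

def chain_sample_mean_var(p_stay, n, reps, rng):
    """Monte Carlo variance of the sample mean of a binary trait collected
    along a two-state Markov chain (a toy stand-in for an RDS referral
    chain). p_stay is the probability that the next recruit shares the
    current recruit's trait value (homophily)."""
    means = np.empty(reps)
    for rep in range(reps):
        x = np.empty(n)
        x[0] = rng.integers(0, 2)
        for t in range(1, n):
            x[t] = x[t - 1] if rng.random() < p_stay else 1.0 - x[t - 1]
        means[rep] = x.mean()
    return means.var()

rng = np.random.default_rng(1)
naive = 0.25 / 50   # iid variance of a mean of n=50 draws with p = 0.5
clustered = chain_sample_mean_var(0.9, 50, 400, rng)
```

With strong homophily the true sampling variance is several times the iid value; a variance estimator built on an assumption the chain violates inherits exactly this gap.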
RR-Interval variance of electrocardiogram for atrial fibrillation detection
NASA Astrophysics Data System (ADS)
Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.
2016-11-01
Atrial fibrillation is a serious heart problem originating from the upper chamber of the heart. The common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, which is shortly called the RR interval. The irregularity can be represented using the variance or spread of the RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data of patients with atrial fibrillation attacks, it is shown that the variances of the electrocardiographic RR intervals are higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique and variances of RR intervals, we find a good performance of atrial fibrillation detection.
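A minimal sliding-window variance detector in the spirit of the system described can be sketched as follows. The window length and threshold below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def af_flags(rr_intervals, window=8, threshold=0.01):
    """Flag windows of an RR-interval series (in seconds) whose variance
    exceeds a threshold, as a simple irregularity-based AF screen.
    Window length and threshold are illustrative, not clinical values."""
    rr = np.asarray(rr_intervals, float)
    flags = []
    for i in range(0, len(rr) - window + 1):
        flags.append(rr[i:i + window].var() > threshold)
    return np.array(flags)
```

A regular rhythm produces near-zero window variances, while AF episodes with scattered RR intervals push the variance above the threshold.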
Simulations of the Hadamard Variance: Probability Distributions and Confidence Intervals.
Ashby, Neil; Patla, Bijunath
2016-04-01
Power-law noise in clocks and oscillators can be simulated by Fourier transforming a modified spectrum of white phase noise. This approach has been applied successfully to simulation of the Allan variance and the modified Allan variance in both overlapping and nonoverlapping forms. When significant frequency drift is present in an oscillator, at large sampling times the Allan variance overestimates the intrinsic noise, while the Hadamard variance is insensitive to frequency drift. The simulation method is extended in this paper to predict the Hadamard variance for the common types of power-law noise. Symmetric real matrices are introduced whose traces (the sums of their eigenvalues) are equal to the Hadamard variances, in overlapping or nonoverlapping forms, as well as for the corresponding forms of the modified Hadamard variance. We show that the standard relations between spectral densities and Hadamard variance are obtained with this method. The matrix eigenvalues determine probability distributions for observing a variance at an arbitrary value of the sampling interval τ, and hence for estimating confidence in the measurements.
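The drift insensitivity of the Hadamard variance follows from its use of second differences. A sketch at the basic sampling interval (m = 1, an assumption of this sketch), with the Allan variance for comparison:

```python
import numpy as np

def hadamard_variance(y):
    """Hadamard variance of fractional-frequency data y at m = 1.
    Built from second differences, so a linear frequency drift
    contributes nothing."""
    d2 = y[2:] - 2 * y[1:-1] + y[:-2]
    return np.sum(d2 ** 2) / (6.0 * (len(y) - 2))

def allan_variance(y):
    """Allan variance at m = 1, for comparison: built from first
    differences, so it is inflated by frequency drift."""
    d1 = np.diff(y)
    return np.sum(d1 ** 2) / (2.0 * (len(y) - 1))
```

Feeding both a pure linear drift shows the contrast the abstract describes: the Allan variance reports the drift as noise, the Hadamard variance does not.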
Lebigre, Christophe; Arcese, Peter; Reid, Jane M
2013-07-01
Age-specific variances and covariances in reproductive success shape the total variance in lifetime reproductive success (LRS), age-specific opportunities for selection, and population demographic variance and effective size. Age-specific (co)variances in reproductive success achieved through different reproductive routes must therefore be quantified to predict population, phenotypic and evolutionary dynamics in age-structured populations. While numerous studies have quantified age-specific variation in mean reproductive success, age-specific variances and covariances in reproductive success, and the contributions of different reproductive routes to these (co)variances, have not been comprehensively quantified in natural populations. We applied 'additive' and 'independent' methods of variance decomposition to complete data describing apparent (social) and realised (genetic) age-specific reproductive success across 11 cohorts of socially monogamous but genetically polygynandrous song sparrows (Melospiza melodia). We thereby quantified age-specific (co)variances in male within-pair and extra-pair reproductive success (WPRS and EPRS) and the contributions of these (co)variances to the total variances in age-specific reproductive success and LRS. 'Additive' decomposition showed that within-age and among-age (co)variances in WPRS across males aged 2-4 years contributed most to the total variance in LRS. Age-specific (co)variances in EPRS contributed relatively little. However, extra-pair reproduction altered age-specific variances in reproductive success relative to the social mating system, and hence altered the relative contributions of age-specific reproductive success to the total variance in LRS. 'Independent' decomposition showed that the (co)variances in age-specific WPRS, EPRS and total reproductive success, and the resulting opportunities for selection, varied substantially across males that survived to each age. Furthermore, extra-pair reproduction increased
NASA Astrophysics Data System (ADS)
McCaffrey, Katherine; Bianco, Laura; Johnston, Paul; Wilczak, James M.
2017-03-01
Observations of turbulence in the planetary boundary layer are critical for developing and evaluating boundary layer parameterizations in mesoscale numerical weather prediction models. These observations, however, are expensive and rarely profile the entire boundary layer. Using optimized configurations for 449 and 915 MHz wind profiling radars during the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA), improvements have been made to the historical methods of measuring vertical velocity variance through the time series of vertical velocity, as well as the Doppler spectral width. Using six heights of sonic anemometers mounted on a 300 m tower, correlations of up to R2 = 0.74 are seen in measurements of the large-scale variances from the radar time series and R2 = 0.79 in measurements of small-scale variance from radar spectral widths. The total variance, measured as the sum of the small and large scales, agrees well with sonic anemometers, with R2 = 0.79. Correlation is higher in daytime convective boundary layers than in nighttime stable conditions when turbulence levels are smaller. With the good agreement with the in situ measurements, highly resolved profiles up to 2 km can be accurately observed from the 449 MHz radar and 1 km from the 915 MHz radar. This optimized configuration will provide unique observations for the verification and improvement of boundary layer parameterizations in mesoscale models.
A NEW VARIANCE ESTIMATOR FOR PARAMETERS OF SEMI-PARAMETRIC GENERALIZED ADDITIVE MODELS. (R829213)
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
VARIANCES MAY BE UNDERESTIMATED USING AVAILABLE SOFTWARE FOR GENERALIZED ADDITIVE MODELS. (R829213)
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-17
..., channels, or shore-line or river-bank protection systems such as revetments, sand dunes, and barrier islands. b. New federally authorized cost-shared levee projects shall be designed to meet the...
A Note on Noncentrality Parameters for Contrast Tests in a One-Way Analysis of Variance
ERIC Educational Resources Information Center
Liu, Xiaofeng Steven
2010-01-01
The noncentrality parameter for a contrast test in a one-way analysis of variance is based on the dot product of 2 vectors whose geometric meaning in a Euclidian space offers mnemonic hints about its constituents. Additionally, the noncentrality parameters for a set of orthogonal contrasts sum up to the noncentrality parameter for the omnibus…
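The dot-product structure of the noncentrality parameter, and the fact that the noncentrality parameters of a full set of orthogonal contrasts sum to the omnibus value, can be checked numerically. Equal group sizes n are assumed, and the group means and contrasts below are made up for illustration:

```python
import numpy as np

def contrast_ncp(c, mu, n, sigma2):
    """Noncentrality parameter of a contrast test with equal group size n:
    a scaled squared dot product of the contrast vector and the vector of
    group means."""
    c, mu = np.asarray(c, float), np.asarray(mu, float)
    return n * np.dot(c, mu) ** 2 / (sigma2 * np.dot(c, c))

mu = np.array([1.0, 2.0, 4.0])       # hypothetical group means
n, sigma2 = 10, 2.0
c1 = np.array([1.0, -1.0, 0.0])      # pairwise contrast
c2 = np.array([1.0, 1.0, -2.0])      # orthogonal to c1
omnibus = n * np.sum((mu - mu.mean()) ** 2) / sigma2
total = contrast_ncp(c1, mu, n, sigma2) + contrast_ncp(c2, mu, n, sigma2)
```

For these numbers the two orthogonal contrasts carry noncentrality 2.5 and about 20.83, and their sum recovers the omnibus value of 70/3.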
Uniqueness Theorem for Black Objects
Rogatko, Marek
2010-06-23
We review the current status of the uniqueness theorem for black objects in higher-dimensional spacetime. We first consider a static charged asymptotically flat spacelike hypersurface with compact interior, with both degenerate and non-degenerate components of the event horizon, in n-dimensional spacetime. We then give some remarks concerning partial results in proving uniqueness of stationary axisymmetric multidimensional solutions, and on winding numbers, which can uniquely characterize the topology and symmetry structure of black objects.
Marini, Federico; de Beer, Dalene; Joubert, Elizabeth; Walczak, Beata
2015-07-31
Direct application of popular approaches, e.g., Principal Component Analysis (PCA) or Partial Least Squares (PLS) to chromatographic data originating from a well-designed experimental study including more than one factor is not recommended. In the case of a well-designed experiment involving two or more factors (crossed or nested), data are usually decomposed into the contributions associated with the studied factors (and with their interactions), and the individual effect matrices are then analyzed using, e.g., PCA, as in the case of ASCA (analysis of variance combined with simultaneous component analysis). As an alternative to the ASCA method, we propose the application of PLS followed by target projection (TP), which allows a one-factor representation of the model for each column in the design dummy matrix. PLS application follows after proper deflation of the experimental matrix, i.e., to what are called the residuals under the reduced ANOVA model. The proposed approach (ANOVA-TP) is well suited for the study of designed chromatographic data of complex samples. It allows testing of statistical significance of the studied effects, 'biomarker' identification, and enables straightforward visualization and accurate estimation of between- and within-class variance. The proposed approach has been successfully applied to a case study aimed at evaluating the effect of pasteurization on the concentrations of various phenolic constituents of rooibos tea of different quality grades and its outcomes have been compared to those of ASCA.
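The first step shared by ASCA and the proposed ANOVA-TP approach is the ANOVA-style decomposition of the data matrix into effect matrices. A minimal sketch for a single factor:

```python
import numpy as np

def effect_matrix(X, labels):
    """ANOVA-style decomposition of a data matrix (rows = samples,
    columns = e.g. chromatographic channels). The effect matrix for a
    factor holds, in every row, the mean profile of that row's factor
    level after removing the grand mean; what is left are the residuals
    under the reduced model."""
    X = np.asarray(X, float)
    labels = np.asarray(labels)
    Xc = X - X.mean(axis=0)            # remove the grand mean
    E = np.zeros_like(Xc)
    for g in np.unique(labels):
        mask = labels == g
        E[mask] = Xc[mask].mean(axis=0)
    return E, Xc - E                   # effect matrix and residuals
```

ASCA would then run PCA on E, while the ANOVA-TP variant fits PLS (followed by target projection) after the corresponding deflation.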
Decomposing genomic variance using information from GWA, GWE and eQTL analysis.
Ehsani, A; Janss, L; Pomp, D; Sørensen, P
2016-04-01
A commonly used procedure in genome-wide association (GWA), genome-wide expression (GWE) and expression quantitative trait locus (eQTL) analyses is based on a bottom-up experimental approach that attempts to individually associate molecular variants with complex traits. Top-down modeling of the entire set of genomic data and partitioning of the overall variance into subcomponents may provide further insight into the genetic basis of complex traits. To test this approach, we performed a whole-genome variance components analysis and partitioned the genomic variance using information from GWA, GWE and eQTL analyses of growth-related traits in a mouse F2 population. We characterized the mouse trait genetic architecture by ordering single nucleotide polymorphisms (SNPs) based on their P-values and studying the areas under the curve (AUCs). The observed traits were found to have a genomic variance profile that differed significantly from that expected of a trait under an infinitesimal model. This situation was particularly true for both body weight and body fat, for which the AUCs were much higher compared with that of glucose. In addition, SNPs with a high degree of trait-specific regulatory potential (SNPs associated with subset of transcripts that significantly associated with a specific trait) explained a larger proportion of the genomic variance than did SNPs with high overall regulatory potential (SNPs associated with transcripts using traditional eQTL analysis). We introduced AUC measures of genomic variance profiles that can be used to quantify relative importance of SNPs as well as degree of deviation of a trait's inheritance from an infinitesimal model. The shape of the curve aids global understanding of traits: The steeper the left-hand side of the curve, the fewer the number of SNPs controlling most of the phenotypic variance.
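A sketch of the AUC reading of a genomic variance profile: order SNPs by p-value, accumulate the variance they explain, and integrate the resulting curve. The exact estimator used in the study may differ; the qualitative behaviour is that an infinitesimal-like trait gives an AUC near 0.5, while concentration of variance in a few top SNPs pushes it toward 1:

```python
import numpy as np

def variance_profile_auc(var_explained, pvalues):
    """Area under the cumulative variance-explained curve with SNPs
    ordered by p-value (illustrative reading of the AUC measure, not the
    paper's exact estimator)."""
    order = np.argsort(pvalues)
    cum = np.cumsum(np.asarray(var_explained, float)[order])
    cum = cum / cum[-1]                          # normalize to 1
    x = np.arange(1, len(cum) + 1) / len(cum)
    # trapezoidal integration, written out for portability
    return float(np.sum((cum[1:] + cum[:-1]) / 2.0 * np.diff(x)))
```

This matches the abstract's interpretation: the steeper the left-hand side of the curve, the fewer SNPs control most of the phenotypic variance.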
Lande, Russell; Porcher, Emmanuelle
2015-01-01
We analyze two models of the maintenance of quantitative genetic variance in a mixed-mating system of self-fertilization and outcrossing. In both models purely additive genetic variance is maintained by mutation and recombination under stabilizing selection on the phenotype of one or more quantitative characters. The Gaussian allele model (GAM) involves a finite number of unlinked loci in an infinitely large population, with a normal distribution of allelic effects at each locus within lineages selfed for τ consecutive generations since their last outcross. The infinitesimal model for partial selfing (IMS) involves an infinite number of loci in a large but finite population, with a normal distribution of breeding values in lineages of selfing age τ. In both models a stable equilibrium genetic variance exists, the outcrossed equilibrium, nearly equal to that under random mating, for all selfing rates, r, up to a critical value, r̂, the purging threshold, which approximately equals the mean fitness under random mating relative to that under complete selfing. In the GAM a second stable equilibrium, the purged equilibrium, exists for any positive selfing rate, with genetic variance less than or equal to that under pure selfing; as r increases above r̂ the outcrossed equilibrium collapses sharply to the purged equilibrium genetic variance. In the IMS a single stable equilibrium genetic variance exists at each selfing rate; as r increases above r̂ the equilibrium genetic variance drops sharply and then declines gradually to that maintained under complete selfing. The implications for evolution of selfing rates, and for adaptive evolution and persistence of predominantly selfing species, provide a theoretical basis for the classical view of Stebbins that predominant selfing constitutes an “evolutionary dead end.” PMID:25969460
NASA Astrophysics Data System (ADS)
Lee, Changho; Cheon, Gyeongwoo; Kim, Do-Hyun; Kang, Jin U.
2016-12-01
We performed a feasibility study using speckle variance optical coherence tomography (SvOCT) to monitor the thermally induced protein denaturation and coagulation process as a function of temperature and depth. SvOCT provided depth-resolved images of protein denaturation and coagulation with microscale resolution. This study was conducted using egg white. During the heating process, as the temperature increased, an increase in the speckle variance signal was observed as the egg white proteins coagulated. Additionally, by calculating the cross-correlation coefficient in specific areas, denatured egg white conditions were successfully estimated. These results indicate that SvOCT could be used to monitor the denaturation process of various proteins.
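The core computation behind speckle variance imaging — the per-pixel variance across co-registered intensity frames — can be sketched as follows. This is a minimal illustration, not the authors' processing pipeline; the array shapes and toy data are assumptions.

```python
import numpy as np

def speckle_variance(frames):
    """Per-pixel variance across N co-registered OCT intensity frames.

    frames: array of shape (N, depth, width); regions whose scatterers
    move or change (e.g. coagulating protein) show higher variance.
    """
    return frames.var(axis=0)

# toy demo: a static region vs. a fluctuating ("coagulating") region
rng = np.random.default_rng(0)
static = np.ones((8, 4, 4))                          # stable reflectivity
moving = 1.0 + 0.5 * rng.standard_normal((8, 4, 4))  # fluctuating speckle
sv_static = speckle_variance(static)
sv_moving = speckle_variance(moving)
```

Thresholding such a variance map is one plausible way to segment coagulated from intact regions.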
Fractional Brownian Motion with Stochastic Variance:. Modeling Absolute Returns in STOCK Markets
NASA Astrophysics Data System (ADS)
Roman, H. E.; Porto, M.
We discuss a model for simulating a long-time memory in time series characterized in addition by a stochastic variance. The model is based on a combination of fractional Brownian motion (FBM) concepts, for dealing with the long-time memory, with an autoregressive scheme with conditional heteroskedasticity (ARCH), responsible for the stochastic variance of the series, and is denoted as FBMARCH. Unlike well-known fractionally integrated autoregressive models, FBMARCH admits finite second moments. The resulting probability distribution functions have power-law tails with exponents similar to ARCH models. This idea is applied to the description of long-time autocorrelations of absolute returns ubiquitously observed in stock markets.
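The conditional-heteroskedasticity building block of such a model can be illustrated with a plain ARCH(1) recursion. This is a sketch of the ARCH component only — the fractional-Brownian-motion long-memory part of FBMARCH is omitted, and the parameter values are assumptions.

```python
import numpy as np

def simulate_arch1(n, a0=0.2, a1=0.5, seed=0):
    """ARCH(1): sigma_t^2 = a0 + a1 * x_{t-1}^2,  x_t = sigma_t * eps_t.

    For a1 < 1 the unconditional variance is finite, a0 / (1 - a1),
    yet the returns have heavier-than-Gaussian tails (excess kurtosis),
    as claimed for FBMARCH in the abstract.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        sigma2 = a0 + a1 * x[t - 1] ** 2
        x[t] = np.sqrt(sigma2) * rng.standard_normal()
    return x

x = simulate_arch1(100_000)
sample_var = x.var()                       # should be near 0.2 / (1 - 0.5) = 0.4
kurt = np.mean(x**4) / sample_var**2       # > 3 indicates power-law-like tails
```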
NASA Technical Reports Server (NTRS)
Longman, Richard W.; Bergmann, Martin; Juang, Jer-Nan
1988-01-01
For the ERA system identification algorithm, perturbation methods are used to develop expressions for the variance and bias of the identified modal parameters. Based on the statistics of the measurement noise, the variance results serve as confidence criteria, indicating how likely the true parameters are to lie within any chosen interval about their identified values. This replaces the use of expensive and time-consuming Monte Carlo computer runs to obtain similar information. The bias estimates help guide the ERA user in choosing which data points to use, and how much data to use, to obtain the best results, balancing the trade-off between bias and scatter. Also, when the uncertainty in the bias is sufficiently small, the bias information can be used to correct the ERA results. In addition, expressions for the variance and bias of the singular values serve as tools to help the ERA user decide the proper modal order.
On discrete stochastic processes with long-lasting time dependence in the variance
NASA Astrophysics Data System (ADS)
Queirós, S. M. D.
2008-11-01
In this manuscript, we analytically and numerically study statistical properties of a heteroskedastic process based on the celebrated ARCH generator of random variables, whose variance is defined by a memory of q_m-exponential form (the q_m-exponential reduces to the ordinary exponential, e^x, when q_m = 1). Specifically, we inspect the autocorrelation function of the squared random variables as well as the kurtosis. In addition, by numerical procedures, we infer the stationary probability density function of both the heteroskedastic random variables and the variance, the multiscaling properties, the first-passage time distribution, and the dependence degree. Finally, we introduce an asymmetric-variance version of the model that enables us to reproduce the so-called leverage effect in financial markets.
Bogaerts, Louisa; Siegelman, Noam; Frost, Ram
2016-08-01
What determines individuals' efficacy in detecting regularities in visual statistical learning? Our theoretical starting point assumes that the variance in performance of statistical learning (SL) can be split into the variance related to efficiency in encoding representations within a modality and the variance related to the relative computational efficiency of detecting the distributional properties of the encoded representations. Using a novel methodology, we dissociated encoding from higher-order learning factors, by independently manipulating exposure duration and transitional probabilities in a stream of visual shapes. Our results show that the encoding of shapes and the retrieving of their transitional probabilities are not independent and additive processes, but interact to jointly determine SL performance. The theoretical implications of these findings for a mechanistic explanation of SL are discussed.
29 CFR 1905.5 - Effect of variances.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 5 2010-07-01 2010-07-01 false Effect of variances. 1905.5 Section 1905.5 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RULES OF PRACTICE FOR VARIANCES, LIMITATIONS, VARIATIONS, TOLERANCES, AND EXEMPTIONS UNDER THE WILLIAMS-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 36 Parks, Forests, and Public Property 1 2013-07-01 2013-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 36 Parks, Forests, and Public Property 1 2012-07-01 2012-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 36 Parks, Forests, and Public Property 1 2014-07-01 2014-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 24 2013-07-01 2013-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....
Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances
ERIC Educational Resources Information Center
Jan, Show-Li; Shieh, Gwowen
2014-01-01
The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…
76 FR 78698 - Proposed Revocation of Permanent Variances
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-19
... Occupational Safety and Health Administration Proposed Revocation of Permanent Variances AGENCY: Occupational... short and plain statement detailing (1) how the proposed revocation would affect the requesting party..., subpart L. The following table provides information about the variances proposed for revocation by...
Gender Variance and Educational Psychology: Implications for Practice
ERIC Educational Resources Information Center
Yavuz, Carrie
2016-01-01
The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…
42 CFR 456.522 - Content of request for variance.
Code of Federal Regulations, 2011 CFR
2011-10-01
... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time..., mental hospital, and ICF located within a 50-mile radius of the facility; (e) The distance and...
A Study of Variance Estimation Methods. Working Paper Series.
ERIC Educational Resources Information Center
Zhang, Fan; Weng, Stanley; Salvucci, Sameena; Hu, Ming-xiu
This working paper contains reports of five studies of variance estimation methods. The first, An Empirical Study of Poststratified Estimator, by Fan Zhang uses data from the National Household Education Survey to illustrate use of poststratified estimation. The second paper, BRR Variance Estimation Using BPLX Hadamard Procedure, by Stanley Weng…
Genotypic-specific variance in Caenorhabditis elegans lifetime fecundity
Diaz, S Anaid; Viney, Mark
2014-01-01
Organisms live in heterogeneous environments, so strategies that maximize fitness in such environments will evolve. Variation in traits is important because it is the raw material on which natural selection acts during evolution. Phenotypic variation is usually thought to be due to genetic variation and/or environmentally induced effects. Therefore, genetically identical individuals in a constant environment should have invariant traits. Clearly, genetically identical individuals do differ phenotypically, usually thought to be due to stochastic processes. It is now becoming clear, especially from studies of unicellular species, that phenotypic variance among genetically identical individuals in a constant environment can be genetically controlled and that therefore, in principle, this can be subject to selection. However, there has been little investigation of these phenomena in multicellular species. Here, we have studied the mean lifetime fecundity (thus a trait likely to be relevant to reproductive success), and variance in lifetime fecundity, in recent wild isolates of the model nematode Caenorhabditis elegans. We found that these genotypes differed in their variance in lifetime fecundity: some had high variance in fecundity, others very low variance. We find that this variance in lifetime fecundity was negatively related to the mean lifetime fecundity of the lines, and that the variance of the lines was positively correlated between environments. We suggest that the variance in lifetime fecundity may be a bet-hedging strategy used by this species. PMID:25360248
Conceptual Complexity and the Bias/Variance Tradeoff
ERIC Educational Resources Information Center
Briscoe, Erica; Feldman, Jacob
2011-01-01
In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the "bias/variance tradeoff". The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any…
Variances and Covariances of Kendall's Tau and Their Estimation.
ERIC Educational Resources Information Center
Cliff, Norman; Charlin, Ventura
1991-01-01
Variance formulas of H. E. Daniels and M. G. Kendall (1947) are generalized to allow for the presence of ties and variance of the sample tau correlation. Applications of these generalized formulas are discussed and illustrated using data from a 1965 study of contraceptive use in 15 developing countries. (SLD)
2012-09-05
This final rule adopts the standard for a national unique health plan identifier (HPID) and establishes requirements for the implementation of the HPID. In addition, it adopts a data element that will serve as an other entity identifier (OEID), or an identifier for entities that are not health plans, health care providers, or individuals, but that need to be identified in standard transactions. This final rule also specifies the circumstances under which an organization covered health care provider must require certain noncovered individual health care providers who are prescribers to obtain and disclose a National Provider Identifier (NPI). Lastly, this final rule changes the compliance date for the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) for diagnosis coding, including the Official ICD-10-CM Guidelines for Coding and Reporting, and the International Classification of Diseases, 10th Revision, Procedure Coding System (ICD-10-PCS) for inpatient hospital procedure coding, including the Official ICD-10-PCS Guidelines for Coding and Reporting, from October 1, 2013 to October 1, 2014.
Spencer, Michael
1974-01-01
Food additives are discussed from the food technology point of view. The reasons for their use are summarized: (1) to protect food from chemical and microbiological attack; (2) to even out seasonal supplies; (3) to improve their eating quality; (4) to improve their nutritional value. The various types of food additives are considered, e.g. colours, flavours, emulsifiers, bread and flour additives, preservatives, and nutritional additives. The paper concludes with consideration of those circumstances in which the use of additives is (a) justified and (b) unjustified. PMID:4467857
Exploring Unique Roles for Psychologists
ERIC Educational Resources Information Center
Ahmed, Mohiuddin; Boisvert, Charles M.
2005-01-01
This paper presents comments on "Psychological Treatments" by D. H. Barlow. Barlow highlighted unique roles that psychologists can play in mental health service delivery by providing psychological treatments--treatments that psychologists would be uniquely qualified to design and deliver. In support of Barlow's position, the authors draw from…
ERIC Educational Resources Information Center
Shipman, Barbara A.
2013-01-01
This article analyzes four questions on the meaning of uniqueness that have contrasting answers in common language versus mathematical language. The investigations stem from a scenario in which students interpreted uniqueness according to a definition from standard English, that is, different from the mathematical meaning, in defining an injective…
Confabulators mistake multiplicity for uniqueness.
Serra, Mara; La Corte, Valentina; Migliaccio, Raffaella; Brazzarola, Marta; Zannoni, Ilaria; Pradat-Diehl, Pascale; Dalla Barba, Gianfranco
2014-09-01
Some patients with organic amnesia show confabulation, the production of statements and actions unintentionally incongruous to the subject's history, present and future situation. It has been shown that confabulators tend to report as unique and specific personal memories, events or actions that belong to their habits and routines (Habits Confabulations). We consider that habits and routines can be characterized by multiplicity, as opposed to uniqueness. This paper examines this phenomenon whereby confabulators mistake multiplicity, i.e., repeated events, for uniqueness, i.e., events that occurred in a unique and specific temporo-spatial context. In order to measure the ability to discriminate unique from repeated events we used four runs of a recognition memory task, in which some items were seen only once at study, whereas others were seen four times. Confabulators, but not non-confabulating amnesiacs (NCA), considered repeated items as unique, thus mistaking multiplicity for uniqueness. This phenomenon has been observed clinically but our study is the first to demonstrate it experimentally. We suggest that a crucial mechanism involved in the production of confabulations is thus the confusion between unique and repeated events.
ERIC Educational Resources Information Center
Castellanos-Ryan, Natalie; Conrod, Patricia J.
2011-01-01
Externalising behaviours such as substance misuse (SM) and conduct disorder (CD) symptoms highly co-occur in adolescence. While disinhibited personality traits have been consistently linked to externalising behaviours, there is evidence that these traits may relate differentially to SM and CD. The current study aimed to assess whether this was the…
CYP1B1: a unique gene with unique characteristics.
Faiq, Muneeb A; Dada, Rima; Sharma, Reetika; Saluja, Daman; Dada, Tanuj
2014-01-01
CYP1B1, a recently described dioxin-inducible oxidoreductase, is a member of the cytochrome P450 superfamily involved in the metabolism of estradiol, retinol, benzo[a]pyrene, tamoxifen, melatonin, sterols, etc. It plays important roles in numerous physiological processes and is expressed at the mRNA level in many tissues and anatomical compartments. CYP1B1 has been implicated in scores of disorders. Analyses of recent studies suggest that CYP1B1 can serve as a universal/ideal cancer marker and a candidate gene for predictive diagnosis. There is a plethora of literature available about certain aspects of CYP1B1 that have not been interpreted, discussed and philosophized upon. The present analysis examines CYP1B1 as a peculiar gene with certain distinctive characteristics: the uniqueness of its chromosomal location, gene structure and organization; its involvement in developmentally important disorders; its tissue-specific not only expression but also splicing; its potential as a universal cancer marker due to its involvement in key aspects of cellular metabolism; its use in diagnosis and predictive diagnosis of various diseases; and the importance and function of CYP1B1 mRNA in addition to its regular translation. Also, CYP1B1 is very difficult to express in heterologous expression systems, which has halted its functional studies. Here we review and analyze these exceptional and startling characteristics of CYP1B1, with inputs from our own experience, in order to gain better insight into its molecular biology in health and disease. This may help to further understand the etiopathomechanistic aspects of CYP1B1-mediated diseases, paving the way for better research strategies and improved clinical management.
Unique associations between anxiety, depression and motives for approach and avoidance goal pursuit.
Winch, Alison; Moberly, Nicholas J; Dickson, Joanne M
2015-01-01
This study investigated the shared and distinct associations between depressive and anxious symptoms and motives for pursuing personal goals. One hundred and thirty-six undergraduates generated approach and avoidance goals and rated each on intrinsic, identified, introjected and external motives. Anxious and depressive symptoms showed significant unique associations with distinct motives. Specifically, depressive symptoms predicted significant unique variance in intrinsic motivation for approach goals (but not avoidance goals), whereas anxious symptoms predicted significant unique variance in introjected regulation for approach and avoidance goals. Some of these findings were moderated by gender. The findings broadly support the notion that depression is uniquely characterised by reduced enjoyment of approach goal pursuit whereas anxiety is uniquely characterised by pursuit of goals in order to avoid negative outcomes. We suggest that these findings are compatible with regulatory focus theory and suggest that motives for goal pursuit are important in understanding the relation between goals and specific mood disorder symptoms.
Code of Federal Regulations, 2011 CFR
2011-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...
Code of Federal Regulations, 2013 CFR
2013-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...
Code of Federal Regulations, 2014 CFR
2014-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...
Code of Federal Regulations, 2010 CFR
2010-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...
Code of Federal Regulations, 2012 CFR
2012-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...
Comparing estimates of genetic variance across different relationship models.
Legarra, Andres
2016-02-01
Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities".
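The Dk statistic described here — the average self-relationship minus the average relationship — is straightforward to compute from any relationship matrix. A minimal sketch follows; the 3 × 3 matrix in the demo is a hypothetical example, not data from the paper.

```python
import numpy as np

def d_k(K):
    """Average self-relationship minus the average (self- and across-)
    relationship. Multiplying the estimated variance component by d_k(K)
    refers the genetic variance to the reference population, making
    estimates from different relationship models comparable.
    """
    K = np.asarray(K, dtype=float)
    return np.mean(np.diag(K)) - np.mean(K)

# hypothetical matrix: 3 unrelated, non-inbred individuals
K = np.eye(3)
dk = d_k(K)          # 1 - 1/3 = 2/3; close to 1 for larger populations
```

For identity-by-state or kernel relationships with a larger average off-diagonal, d_k shrinks, which is exactly the overestimation-correction effect the abstract describes.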
Filtered kriging for spatial data with heterogeneous measurement error variances.
Christensen, William F
2011-09-01
When predicting values for the measurement-error-free component of an observed spatial process, it is generally assumed that the process has a common measurement error variance. However, it is often the case that each measurement in a spatial data set has a known, site-specific measurement error variance, rendering the observed process nonstationary. We present a simple approach for estimating the semivariogram of the unobservable measurement-error-free process using a bias adjustment of the classical semivariogram formula. We then develop a new kriging predictor that filters the measurement errors. For scenarios where each site's measurement error variance is a function of the process of interest, we recommend an approach that also uses a variance-stabilizing transformation. The properties of the heterogeneous variance measurement-error-filtered kriging (HFK) predictor and variance-stabilized HFK predictor, and the improvement of these approaches over standard measurement-error-filtered kriging are demonstrated using simulation. The approach is illustrated with climate model output from the Hudson Strait area in northern Canada. In the illustration, locations with high or low measurement error variances are appropriately down- or upweighted in the prediction of the underlying process, yielding a realistically smooth picture of the phenomenon of interest.
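The bias adjustment described — subtracting the pairs' average measurement-error contribution from the classical semivariogram — can be sketched for 1-D data as follows. This is an illustrative reconstruction of the idea, not the paper's exact estimator; variable names and the toy demo are assumptions.

```python
import numpy as np

def adjusted_semivariance(z, err_var, coords, lag, tol=0.5):
    """Classical semivariance at one lag, minus the average measurement
    error contribution 0.5 * (sigma_i^2 + sigma_j^2) over the pairs,
    estimating the semivariance of the error-free process."""
    raw_sum, bias_sum, count = 0.0, 0.0, 0
    n = len(z)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(abs(coords[i] - coords[j]) - lag) <= tol:
                raw_sum += 0.5 * (z[i] - z[j]) ** 2
                bias_sum += 0.5 * (err_var[i] + err_var[j])
                count += 1
    return raw_sum / count - bias_sum / count

# demo: constant true field, so the error-free semivariance should be ~0
rng = np.random.default_rng(1)
n = 800
coords = np.arange(n, dtype=float)
err_var = rng.uniform(0.5, 1.5, n)               # known site-specific variances
z = 5.0 + np.sqrt(err_var) * rng.standard_normal(n)
gamma_adj = adjusted_semivariance(z, err_var, coords, lag=1.0)
gamma_raw = adjusted_semivariance(z, np.zeros(n), coords, lag=1.0)
```

Without the adjustment, the raw semivariance at short lags is dominated by the average error variance (about 1.0 here); the adjusted estimate recovers the near-zero semivariance of the underlying constant field.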
Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation
NASA Technical Reports Server (NTRS)
Wu, Dong L.; Eckermann, Stephen D.
2008-01-01
The gravity wave (GW)-resolving capabilities of 118-GHz saturated thermal radiances acquired throughout the stratosphere by the Microwave Limb Sounder (MLS) on the Aura satellite are investigated and initial results presented. Because the saturated (optically thick) radiances resolve GW perturbations from a given altitude at different horizontal locations, variances are evaluated at 12 pressure altitudes between 21 and 51 km using the 40 saturated radiances found at the bottom of each limb scan. Forward modeling simulations show that these variances are controlled mostly by GWs with vertical wavelengths λz ≳ 5 km and horizontal along-track wavelengths λy ≈ 100-200 km. The tilted cigar-shaped three-dimensional weighting functions yield highly selective responses to GWs of high intrinsic frequency that propagate toward the instrument. The latter property is used to infer the net meridional component of GW propagation by differencing the variances acquired from ascending (A) and descending (D) orbits. Because of improved vertical resolution and sensitivity, Aura MLS GW variances are 5-8 times larger than those from the Upper Atmosphere Research Satellite (UARS) MLS. Like UARS MLS variances, monthly-mean Aura MLS variances in January and July 2005 are enhanced when local background wind speeds are large, due largely to GW visibility effects. Zonal asymmetries in variance maps reveal enhanced GW activity at high latitudes due to forcing by flow over major mountain ranges and at tropical and subtropical latitudes due to enhanced deep convective generation as inferred from contemporaneous MLS cloud-ice data. At 21-28-km altitude (heights not measured by the UARS MLS), GW variance in the tropics is systematically enhanced and shows clear variations with the phase of the quasi-biennial oscillation, in general agreement with GW temperature variances derived from radiosonde, rocketsonde, and limb-scan vertical profiles.
2013-01-01
Background Through social interactions, individuals affect one another’s phenotype. In such cases, an individual’s phenotype is affected by the direct (genetic) effect of the individual itself and the indirect (genetic) effects of the group mates. Using data on individual phenotypes, direct and indirect genetic (co)variances can be estimated. Together, they compose the total genetic variance that determines a population’s potential to respond to selection. However, it can be difficult or expensive to obtain individual phenotypes. Phenotypes on traits such as egg production and feed intake are, therefore, often collected on group level. In this study, we investigated whether direct, indirect and total genetic variances, and breeding values can be estimated from pooled data (pooled by group). In addition, we determined the optimal group composition, i.e. the optimal number of families represented in a group to minimise the standard error of the estimates. Methods This study was performed in three steps. First, all research questions were answered by theoretical derivations. Second, a simulation study was conducted to investigate the estimation of variance components and optimal group composition. Third, individual and pooled survival records on 12 944 purebred laying hens were analysed to investigate the estimation of breeding values and response to selection. Results Through theoretical derivations and simulations, we showed that the total genetic variance can be estimated from pooled data, but the underlying direct and indirect genetic (co)variances cannot. Moreover, we showed that the most accurate estimates are obtained when group members belong to the same family. Additional theoretical derivations and data analyses on survival records showed that the total genetic variance and breeding values can be estimated from pooled data. Moreover, the correlation between the estimated total breeding values obtained from individual and pooled data was surprisingly
Chromatic visualization of reflectivity variance within hybridized directional OCT images
NASA Astrophysics Data System (ADS)
Makhijani, Vikram S.; Roorda, Austin; Bayabo, Jan Kristine; Tong, Kevin K.; Rivera-Carpio, Carlos A.; Lujan, Brandon J.
2013-03-01
This study presents a new method of visualizing hybridized images of retinal spectral domain optical coherence tomography (SDOCT) data comprised of varied directional reflectivity. Due to the varying reflectivity of certain retinal structures relative to angle of incident light, SDOCT images obtained with differing entry positions result in nonequivalent images of corresponding cellular and extracellular structures, especially within layers containing photoreceptor components. Harnessing this property, cross-sectional pathologic and non-pathologic macular images were obtained from multiple pupil entry positions using commercially-available OCT systems, and custom segmentation, alignment, and hybridization algorithms were developed to chromatically visualize the composite variance of reflectivity effects. In these images, strong relative reflectivity from any given direction visualizes as relative intensity of its corresponding color channel. Evident in non-pathologic images was marked enhancement of Henle's fiber layer (HFL) visualization and varying reflectivity patterns of the inner limiting membrane (ILM) and photoreceptor inner/outer segment junctions (IS/OS). Pathologic images displayed similar and additional patterns. Such visualization may allow a more intuitive understanding of structural and physiologic processes in retinal pathologies.
Hickey, John M; Veerkamp, Roel F; Calus, Mario P L; Mulder, Han A; Thompson, Robin
2009-02-09
Calculation of the exact prediction error variance-covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values, and the control of variance of response to selection. Alternatively, Monte Carlo sampling can be used to calculate approximations of the prediction error variance, which converge to the true values if enough samples are used. However, in practical situations the number of samples that is computationally feasible is limited. The objective of this study was to compare the convergence rates of different formulations of the prediction error variance calculated using Monte Carlo sampling. Four of these formulations were published, four were corresponding alternative versions, and two were derived as part of this study. The different formulations had different convergence rates, and these were shown to depend on the number of samples and on the level of prediction error variance. Four formulations were competitive; these made use of information either on the variances of the estimated breeding value and of the true breeding value minus the estimated breeding value, or on the covariance between the true and estimated breeding values.
Friede, Tim; Kieser, Meinhard
2013-01-01
The internal pilot study design allows the sample size to be modified during an ongoing study based on a blinded estimate of the variance, thus maintaining the trial integrity. Various blinded sample size re-estimation procedures have been proposed in the literature. We compare the blinded sample size re-estimation procedures based on the one-sample variance of the pooled data with a blinded procedure using the randomization block information, with respect to bias and variance of the variance estimators, and the distribution of the resulting sample sizes, power, and actual type I error rate. For reference, sample size re-estimation based on the unblinded variance is also included in the comparison. It is shown that using an unbiased variance estimator (such as the one using the randomization block information) for sample size re-estimation does not guarantee that the desired power is achieved. Moreover, in situations that are common in clinical trials, the variance estimator that employs the randomization block length shows higher variability than the simple one-sample estimator, and so, in turn, does the sample size resulting from the related re-estimation procedure. This higher variability can lead to lower power, as was demonstrated in the setting of noninferiority trials. In summary, the one-sample estimator obtained from the pooled data is extremely simple to apply, shows good performance, and is therefore recommended for application.
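The bias of the blinded one-sample estimator can be illustrated with a short simulation (a hedged sketch with arbitrary values sigma = 1 and delta = 1, not taken from the paper): under 1:1 randomization, pooling the two arms inflates the one-sample variance by roughly delta^2/4 relative to the within-group variance.

```python
import random
import statistics

# Hypothetical sketch (not the authors' procedure): in a 1:1 randomized trial,
# the blinded one-sample variance of the pooled data estimates sigma^2 + delta^2/4,
# where delta is the true treatment difference, while the unblinded pooled
# within-group variance estimates sigma^2 itself.
random.seed(1)
sigma, delta, n = 1.0, 1.0, 100_000
control = [random.gauss(0.0, sigma) for _ in range(n)]
treated = [random.gauss(delta, sigma) for _ in range(n)]

blinded = statistics.variance(control + treated)          # ignores group labels
unblinded = 0.5 * (statistics.variance(control)
                   + statistics.variance(treated))        # uses group labels

print(round(blinded, 2))    # close to sigma^2 + delta^2/4 = 1.25
print(round(unblinded, 2))  # close to sigma^2 = 1.0
```

The blinded estimator is biased upward whenever a real treatment effect exists, which is one reason unbiasedness alone does not settle which estimator yields better power.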
The Placenta Harbors a Unique Microbiome
Aagaard, Kjersti; Ma, Jun; Antony, Kathleen M.; Ganu, Radhika; Petrosino, Joseph; Versalovic, James
2016-01-01
Humans and their microbiomes have coevolved as a physiologic community composed of distinct body site niches with metabolic and antigenic diversity. The placental microbiome has not been robustly interrogated, despite recent demonstrations of intracellular bacteria with diverse metabolic and immune regulatory functions. A population-based cohort of placental specimens collected under sterile conditions from 320 subjects with extensive clinical data was established for comparative 16S ribosomal DNA–based and whole-genome shotgun (WGS) metagenomic studies. Identified taxa and their gene carriage patterns were compared to other human body site niches, including the oral, skin, airway (nasal), vaginal, and gut microbiomes from nonpregnant controls. We characterized a unique placental microbiome niche, composed of nonpathogenic commensal microbiota from the Firmicutes, Tenericutes, Proteobacteria, Bacteroidetes, and Fusobacteria phyla. In aggregate, the placental microbiome profiles were most akin (Bray-Curtis dissimilarity <0.3) to the human oral microbiome. 16S-based operational taxonomic unit analyses revealed associations of the placental microbiome with a remote history of antenatal infection (permutational multivariate analysis of variance, P = 0.006), such as urinary tract infection in the first trimester, as well as with preterm birth <37 weeks (P = 0.001). PMID:24848255
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2011 CFR
2011-07-01
...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...
RISK ANALYSIS, ANALYSIS OF VARIANCE: GETTING MORE FROM OUR DATA
Technology Transfer Automated Retrieval System (TEKTRAN)
Analysis of variance (ANOVA) and regression are common statistical techniques used to analyze agronomic experimental data and determine significant differences among yields due to treatments or other experimental factors. Risk analysis provides an alternate and complementary examination of the same...
Some variance reduction methods for numerical stochastic homogenization.
Blanc, X; Le Bris, C; Legoll, F
2016-04-28
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here.
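The flavour of such variance reduction techniques can be conveyed outside the homogenization setting; the sketch below uses antithetic variates on a toy one-dimensional integral (the integrand and sample sizes are illustrative assumptions, not the corrector problems of the paper).

```python
import math
import random
import statistics

# Minimal antithetic-variates sketch: estimate E[f(U)], U ~ Uniform(0, 1),
# either with independent draws or with the pairs (u, 1 - u).  For a monotone
# integrand the two halves of each pair are negatively correlated, which
# lowers the variance of the estimator at the same sampling cost.
def plain_mc(f, n, rng):
    return sum(f(rng.random()) for _ in range(n)) / n

def antithetic_mc(f, n, rng):
    half = n // 2
    total = 0.0
    for _ in range(half):
        u = rng.random()
        total += 0.5 * (f(u) + f(1.0 - u))
    return total / half

rng = random.Random(0)
f = math.exp  # true value of the integral over [0, 1] is e - 1
plain_reps = [plain_mc(f, 1000, rng) for _ in range(200)]
anti_reps = [antithetic_mc(f, 1000, rng) for _ in range(200)]
print(statistics.variance(anti_reps) < statistics.variance(plain_reps))
```

The same principle, paired with structure-specific constructions, underlies the homogenization-oriented techniques surveyed in the paper.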
Houde, Aimee Lee S; Pitcher, Trevor E
2016-03-01
Full factorial breeding designs are useful for quantifying the amount of additive genetic, nonadditive genetic, and maternal variance that explain phenotypic traits. Such variance estimates are important for examining evolutionary potential. Traditionally, full factorial mating designs have been analyzed using a two-way analysis of variance, which may produce negative variance values and is not suited for unbalanced designs. Mixed-effects models do not produce negative variance values and are suited for unbalanced designs. However, extracting the variance components, calculating significance values, and estimating confidence intervals and/or power values for the components are not straightforward using traditional analytic methods. We introduce fullfact, an R package that addresses these issues and facilitates the analysis of full factorial mating designs with mixed-effects models. Here, we summarize the functions of the fullfact package. The observed data functions extract the variance explained by random and fixed effects and provide their significance. We then calculate the additive genetic, nonadditive genetic, and maternal variance components explaining the phenotype. In particular, we integrate nonnormal error structures for estimating these components for nonnormal data types. The resampled data functions are used to produce bootstrap-t confidence intervals, which can then be plotted using a simple function. We explore the fullfact package through a worked example. This package will facilitate the analyses of full factorial mating designs in R, especially for the analysis of binary, proportion, and/or count data types and for the ability to incorporate additional random and fixed effects and power analyses.
Hidden item variance in multiple mini-interview scores.
Zaidi, Nikki L Bibler; Swoboda, Christopher M; Kelcey, Benjamin M; Manuel, R Stephen
2017-05-01
The extant literature has largely ignored a potentially significant source of variance in multiple mini-interview (MMI) scores by "hiding" the variance attributable to the sample of attributes used on an evaluation form. This potential source of hidden variance can be defined as rating items, which typically comprise an MMI evaluation form. Due to its multi-faceted, repeated measures format, reliability for the MMI has been primarily evaluated using generalizability (G) theory. A key assumption of G theory is that G studies model the most important sources of variance to which a researcher plans to generalize. Because G studies can only attribute variance to the facets that are modeled in a G study, failure to model potentially substantial sources of variation in MMI scores can result in biased estimates of variance components. This study demonstrates the implications of hiding the item facet in MMI studies when true item-level effects exist. An extensive Monte Carlo simulation study was conducted to examine whether a commonly used hidden-item person-by-station (p × s|i) G study design results in biased estimated variance components. Estimates from this hidden item model were compared with estimates from a more complete person-by-station-by-item (p × s × i) model. Results suggest that when true item-level effects exist, the hidden item model (p × s|i) will result in biased variance components which can bias reliability estimates; therefore, researchers should consider using the more complete person-by-station-by-item model (p × s × i) when evaluating generalizability of MMI scores.
Allan variance of time series models for measurement data
NASA Astrophysics Data System (ADS)
Zhang, Nien Fan
2008-10-01
The uncertainty of the mean of autocorrelated measurements from a stationary process has been discussed in the literature. However, when the measurements are from a non-stationary process, how to assess their uncertainty remains unresolved. The Allan variance, or two-sample variance, has been used in time and frequency metrology for more than three decades as a substitute for the classical variance to characterize the stability of clocks or frequency standards when the underlying process is a 1/f noise process. However, its applications are related only to the noise models characterized by the power law of the spectral density. In this paper, from the viewpoint of the time domain, we provide a statistical underpinning of the Allan variance for discrete stationary processes, random walk, and long-memory processes such as the fractional difference processes, including the noise models usually considered in time and frequency metrology. Results show that the Allan variance is a better measure of the process variation than the classical variance for the random walk and for the non-stationary fractional difference processes, including 1/f noise.
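A minimal implementation of the (non-overlapping) Allan variance illustrates the contrast: for white noise it agrees with the classical variance, while for a random walk it stays bounded even though the classical variance grows with record length. The function below is a generic sketch, not the authors' code.

```python
import random
import statistics

def allan_variance(y, tau=1):
    """Non-overlapping two-sample (Allan) variance at averaging time tau."""
    m = len(y) // tau
    block_means = [sum(y[k * tau:(k + 1) * tau]) / tau for k in range(m)]
    diffs = [b - a for a, b in zip(block_means, block_means[1:])]
    return sum(d * d for d in diffs) / (2 * len(diffs))

random.seed(0)
white = [random.gauss(0.0, 1.0) for _ in range(100_000)]
# Random walk: cumulative sum of the white noise (non-stationary).
walk, s = [], 0.0
for w in white:
    s += w
    walk.append(s)

print(round(allan_variance(white), 1))  # near 1.0: matches the classical variance
# For the walk, the Allan variance at tau = 1 stays near 0.5 (half the increment
# variance), while the classical variance of the path keeps growing with length.
print(allan_variance(walk) < statistics.variance(walk))
```

Because the Allan variance differences adjacent block means, slow non-stationary drift largely cancels, which is exactly why it remains a meaningful measure for random walk and 1/f-type processes.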
Variance estimation in the analysis of microarray data.
Wang, Yuedong; Ma, Yanyuan; Carroll, Raymond J
2009-04-01
Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
The evolution and consequences of sex-specific reproductive variance.
Mullon, Charles; Reuter, Max; Lehmann, Laurent
2014-01-01
Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. If previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction.
Variance estimation for systematic designs in spatial surveys.
Fewster, R M
2011-12-01
In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation.
A Bias and Variance Analysis for Multistep-Ahead Time Series Forecasting.
Ben Taieb, Souhaib; Atiya, Amir F
2016-01-01
Multistep-ahead forecasts can either be produced recursively by iterating a one-step-ahead time series model or directly by estimating a separate model for each forecast horizon. In addition, there are other strategies; some of them combine aspects of both aforementioned concepts. In this paper, we present a comprehensive investigation into the bias and variance behavior of multistep-ahead forecasting strategies. We provide a detailed review of the different multistep-ahead strategies. Subsequently, we perform a theoretical study that derives the bias and variance for a number of forecasting strategies. Finally, we conduct a Monte Carlo experimental study that compares and evaluates the bias and variance performance of the different strategies. From the theoretical and the simulation studies, we analyze the effect of different factors, such as the forecast horizon and the time series length, on the bias and variance components, and on the different multistep-ahead strategies. Several lessons are learned, and recommendations are given concerning the advantages, disadvantages, and best conditions of use of each strategy.
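The two basic strategies are easy to contrast on a toy AR(1) model (the coefficients here are arbitrary illustrations, not from the paper): the recursive strategy iterates the one-step model h times, while the direct strategy uses a separate model per horizon, which for a correctly specified AR(1) has coefficient a^h.

```python
# Toy AR(1) sketch, y_t = a * y_{t-1}: recursive iteration of the one-step
# model versus a direct per-horizon model.  For a correctly specified linear
# model the two coincide; their bias/variance trade-off differs once the
# one-step model is misspecified or estimated from data.
def recursive_forecast(a, y_last, h):
    y = y_last
    for _ in range(h):
        y = a * y
    return y

def direct_forecast(a_h, y_last):
    return a_h * y_last

a, y0, h = 0.9, 2.0, 3
r = recursive_forecast(a, y0, h)        # 2 * 0.9^3 = 1.458
d = direct_forecast(a ** h, y0)
print(abs(r - d) < 1e-9)
```

The paper's analysis concerns exactly the case where the two no longer coincide: recursion propagates one-step model error across horizons (bias), while direct models are estimated with fewer effective constraints (variance).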
Wonnapinij, Passorn; Chinnery, Patrick F; Samuels, David C
2010-04-09
In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference.
Rudolf Keller
2004-08-10
In this project, a concept to improve the performance of aluminum production cells by introducing potlining additives was examined and tested. Boron oxide was added to cathode blocks, and titanium was dissolved in the metal pool; this resulted in the formation of titanium diboride and caused the molten aluminum to wet the carbonaceous cathode surface. Such wetting reportedly leads to operational improvements and extended cell life. In addition, boron oxide suppresses cyanide formation. This final report presents and discusses the results of this project. Substantial economic benefits for the practical implementation of the technology are projected, especially for modern cells with graphitized blocks. For example, with an energy savings of about 5% and an increase in pot life from 1500 to 2500 days, a cost savings of $0.023 per pound of aluminum produced is projected for a 200 kA pot.
Harrup, Mason K; Rollins, Harry W
2013-11-26
An additive comprising a phosphazene compound that has at least two reactive functional groups and at least one capping functional group bonded to phosphorus atoms of the phosphazene compound. One of the at least two reactive functional groups is configured to react with cellulose and the other of the at least two reactive functional groups is configured to react with a resin, such as an amine resin or a polycarboxylic acid resin. The at least one capping functional group is selected from the group consisting of a short chain ether group, an alkoxy group, or an aryloxy group. Also disclosed are an additive-resin admixture, a method of treating a wood product, and a wood product.
Estimation of Model Error Variances During Data Assimilation
NASA Technical Reports Server (NTRS)
Dee, Dick
2003-01-01
Data assimilation is all about understanding the error characteristics of the data and models that are used in the assimilation process. Reliable error estimates are needed to implement observational quality control, bias correction of observations and model fields, and intelligent data selection. Meaningful covariance specifications are obviously required for the analysis as well, since the impact of any single observation strongly depends on the assumed structure of the background errors. Operational atmospheric data assimilation systems still rely primarily on climatological background error covariances. To obtain error estimates that reflect both the character of the flow and the current state of the observing system, it is necessary to solve three problems: (1) how to account for the short-term evolution of errors in the initial conditions; (2) how to estimate the additional component of error caused by model defects; and (3) how to compute the error reduction in the analysis due to observational information. Various approaches are now available that provide approximate solutions to the first and third of these problems. However, the useful accuracy of these solutions very much depends on the size and character of the model errors and the ability to account for them. Model errors represent the real-world forcing of the error evolution in a data assimilation system. Clearly, meaningful model error estimates and/or statistics must be based on information external to the model itself. The most obvious information source is observational, and since the volume of available geophysical data is growing rapidly, there is some hope that a purely statistical approach to model error estimation can be viable. This requires that the observation errors themselves are well understood and quantifiable. We will discuss some of these challenges and present a new sequential scheme for estimating model error variances from observations in the context of an atmospheric data assimilation system.
Dekker, Marielle C.; Ziermans, Tim B.; Spruijt, Andrea M.; Swaab, Hanna
2017-01-01
Very little is known about the relative influence of cognitive performance-based executive functioning (EF) measures and behavioral EF ratings in explaining differences in children's school achievement. This study examined the shared and unique influence of these different EF measures on math and spelling outcome for a sample of 84 first and second graders. Parents and teachers completed the Behavior Rating Inventory of Executive Function (BRIEF), and children were tested with computer-based performance tests from the Amsterdam Neuropsychological Tasks (ANT). Mixed-model hierarchical regression analyses, including intelligence level and age, showed that cognitive performance and teacher's ratings of working memory and shifting concurrently explained differences in spelling. However, teacher's behavioral EF ratings did not explain any additional variance in math outcome above cognitive EF performance. Parents' behavioral EF ratings did not add any unique information for either outcome measure. This study provides support for the ecological validity of performance- and teacher rating-based EF measures, and shows that both measures could have a complementary role in identifying EF processes underlying spelling achievement problems. The early identification of strengths and weaknesses of a child's working memory and shifting capabilities might help teachers to broaden their range of remedial intervention options to optimize school achievement. PMID:28194121
Uniqueness theorems in bioluminescence tomography.
Wang, Ge; Li, Yi; Jiang, Ming
2004-08-01
Motivated by bioluminescent imaging needs for studies on gene therapy and other applications in mouse models, a bioluminescence tomography (BLT) system is being developed at the University of Iowa. While the forward imaging model is described by the well-known diffusion equation, the inverse problem is to recover an internal bioluminescent source distribution subject to Cauchy data. Our primary goal in this paper is to establish the solution uniqueness for BLT under practical constraints despite the ill-posedness of the inverse problem in the general case. After a review on the inverse source literature, we demonstrate that in the general case the BLT solution is not unique by constructing the set of all the solutions to this inverse problem. Then, we show the uniqueness of the solution in the case of impulse sources. Finally, we present our main theorem that solid/hollow ball sources can be uniquely determined up to nonradiating sources. For better readability, the exact conditions for and rigorous proofs of the theorems are given in the Appendices. Further research directions are also discussed.
Estimating Variances of Horizontal Wind Fluctuations in Stable Conditions
NASA Astrophysics Data System (ADS)
Luhar, Ashok K.
2010-05-01
Information concerning the average wind speed and the variances of lateral and longitudinal wind velocity fluctuations is required by dispersion models to characterise turbulence in the atmospheric boundary layer. When the winds are weak, the scalar average wind speed and the vector average wind speed need to be clearly distinguished and both lateral and longitudinal wind velocity fluctuations assume equal importance in dispersion calculations. We examine commonly-used methods of estimating these variances from wind-speed and wind-direction statistics measured separately, for example, by a cup anemometer and a wind vane, and evaluate the implied relationship between the scalar and vector wind speeds, using measurements taken under low-wind stable conditions. We highlight several inconsistencies inherent in the existing formulations and show that the widely-used assumption that the lateral velocity variance is equal to the longitudinal velocity variance is not necessarily true. We derive improved relations for the two variances, and although data under stable stratification are considered for comparison, our analysis is applicable more generally.
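The scalar/vector distinction can be made concrete with a small sketch (a simplified convention in which direction is taken as the angle of the velocity vector; this is illustrative, not the authors' formulation):

```python
import math

def scalar_mean_speed(obs):
    """Average of the measured speeds, ignoring direction (cup-anemometer view)."""
    return sum(speed for speed, _ in obs) / len(obs)

def vector_mean_speed(obs):
    """Magnitude of the mean velocity vector built from (u, v) components."""
    u = sum(s * math.cos(math.radians(d)) for s, d in obs) / len(obs)
    v = sum(s * math.sin(math.radians(d)) for s, d in obs) / len(obs)
    return math.hypot(u, v)

# A weak, meandering wind: constant 2 m/s speed with the direction swinging
# through all quadrants.  The scalar mean stays at 2 m/s while the vector
# mean collapses toward zero.
obs = [(2.0, 0.0), (2.0, 90.0), (2.0, 180.0), (2.0, 270.0)]
print(scalar_mean_speed(obs))   # 2.0
print(vector_mean_speed(obs))   # effectively 0
```

The gap between the two averages is largest precisely in the low-wind stable conditions the paper considers, which is why formulations that conflate them misestimate the velocity variances.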
Analysis of Variance Components for Genetic Markers with Unphased Genotypes.
Wang, Tao
2016-01-01
An ANOVA type general multi-allele (GMA) model was proposed in Wang (2014) on analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In the one-locus and two-locus cases, we first derive the least squares estimates (LSE) of model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition on the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions at least for independent alleles. As a result, the GMA model could be more beneficial than GLM for detecting allelic interactions.
Practice reduces task relevant variance modulation and forms nominal trajectory
NASA Astrophysics Data System (ADS)
Osu, Rieko; Morishige, Ken-Ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo
2015-12-01
Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task relevant variance modulation as an indication of online feedback control strategies to cope with motor variability. Meanwhile, it has been discussed that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both spatial and temporal domains to elucidate the relative contribution of these control schemas. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories with the reduction of task relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise both on the nominal trajectory and motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of computation seems to be taken over by the feedforward controller around the nominal trajectory with feedback added only when it becomes necessary.
Increased spatial variance accompanies reorganization of two continental shelf ecosystems.
Litzow, Michael A; Urban, J Daniel; Laurel, Benjamin J
2008-09-01
Phase transitions between alternate stable states in marine ecosystems lead to disruptive changes in ecosystem services, especially fisheries productivity. We used trawl survey data spanning phase transitions in the North Pacific (Gulf of Alaska) and the North Atlantic (Scotian Shelf) to test for increases in ecosystem variability that might provide early warning of such transitions. In both time series, elevated spatial variability in a measure of community composition (ratio of cod [Gadus sp.] abundance to prey abundance) accompanied transitions between ecosystem states, and variability was negatively correlated with distance from the ecosystem transition point. In the Gulf of Alaska, where the phase transition was apparently the result of a sudden perturbation (climate regime shift), variance increased one year before the transition in mean state occurred. On the Scotian Shelf, where ecosystem reorganization was the result of persistent overfishing, a significant increase in variance occurred three years before the transition in mean state was detected. However, we could not reject the alternate explanation that increased variance may also have simply been inherent to the final stable state in that ecosystem. Increased variance has been previously observed around transition points in models, but rarely in real ecosystems, and our results demonstrate the possible management value in tracking the variance of key parameters in exploited ecosystems.
Qiu, Maolin; Scheinost, Dustin; Ramani, Ramachandran; Constable, R Todd
2017-03-01
Anesthesia-induced changes in functional connectivity and cerebral blood flow (CBF) in large-scale brain networks have emerged as key markers of reduced consciousness. However, studies of functional connectivity disagree on which large-scale networks are altered or preserved during anesthesia, making it difficult to find a consensus among studies. Additionally, pharmacological alterations in CBF could amplify or occlude changes in connectivity due to the shared variance between CBF and connectivity. Here, we used data-driven connectivity methods and multi-modal imaging to investigate shared and unique neural correlates of reduced consciousness for connectivity in large-scale brain networks. Rs-fMRI and CBF data were collected from the same subjects during an awake condition and a deep sedation condition induced by propofol. We measured whole-brain connectivity using the intrinsic connectivity distribution (ICD), a method not reliant on pre-defined seed regions, networks of interest, or connectivity thresholds. The shared and unique variance between connectivity and CBF were investigated. Finally, to account for shared variance, we present a novel extension to ICD that incorporates CBF as a scaling factor in the calculation of global connectivity, labeled CBF-adjusted ICD. We observed altered connectivity in multiple large-scale brain networks including the default mode (DMN), salience, visual, and motor networks, and reduced CBF in the DMN, frontoparietal network, and thalamus. Regional connectivity and CBF were significantly correlated during both the awake and propofol conditions. Nevertheless, changes in connectivity and CBF between the awake and deep sedation conditions were only significantly correlated in a subsystem of the DMN, suggesting that, while there is significant shared variance between the modalities, changes due to propofol are relatively unique. Similar, but less significant, results were observed in the CBF-adjusted ICD analysis, providing
Unique stoichiometric representation for computational thermochemistry.
Fishtik, Ilie
2012-02-23
Evaluation of the enthalpy of formation of species via quantum chemical methods, as well as the evaluation of their performance, is mainly based on single reaction schemes, i.e., reaction schemes that involve a minimal number of reference species where minimal means that, if a reference species is omitted, there is no way to write a balanced reaction scheme involving the remaining species. When the number of reference species exceeds the minimal number, the main problem of computational thermochemistry is inevitably becoming an optimization problem. In this communication we present an exact and unique solution of the optimization problem in computational thermochemistry along with a stoichiometric interpretation of the solution. Namely, we prove that the optimization problem may be identically solved by enumerating a finite and unique set of reactions referred to as group additivity (GA) response reactions (RERs).
Losdat, Sylvain; Arcese, Peter; Reid, Jane M
2015-09-01
1. Extra-pair reproductive success (EPRS) is a key component of male fitness in socially monogamous systems and could cause selection on female extra-pair reproduction if extra-pair offspring (EPO) inherit high value for EPRS from their successful extra-pair fathers. However, EPRS is itself a composite trait that can be fully decomposed into subcomponents of variation, each of which can be further decomposed into genetic and environmental variances. Yet such decompositions have not been implemented in wild populations, impeding evolutionary inference. 2. We first show that EPRS can be decomposed into the product of three life-history subcomponents: the number of broods available to a focal male to sire EPO, the male's probability of siring an EPO in an available brood and the number of offspring in available broods. This decomposition of EPRS facilitates estimation from field data because all subcomponents can be quantified from paternity data without need to quantify extra-pair matings. Our decomposition also highlights that the number of available broods, and hence population structure and demography, might contribute substantially to variance in male EPRS and fitness. 3. We then used 20 years of complete genetic paternity and pedigree data from wild song sparrows (Melospiza melodia) to partition variance in each of the three subcomponents of EPRS, and thereby estimate their additive genetic variance and heritability conditioned on effects of male coefficient of inbreeding, age and social status. 4. All three subcomponents of EPRS showed some degree of within-male repeatability, reflecting combined permanent environmental and genetic effects. Number of available broods and offspring per brood showed low additive genetic variances. The estimated additive genetic variance in extra-pair siring probability was larger, although the 95% credible interval still converged towards zero. Siring probability also showed inbreeding depression and increased with male age
Ontogenetic changes in genetic variances of age-dependent plasticity along a latitudinal gradient
Nilsson-Örtman, V; Rogell, B; Stoks, R; Johansson, F
2015-01-01
The expression of phenotypic plasticity may differ among life stages of the same organism. Age-dependent plasticity can be important for adaptation to heterogeneous environments, but this has only recently been recognized. Whether age-dependent plasticity is a common outcome of local adaptation and whether populations harbor genetic variation in this respect remains largely unknown. To answer these questions, we estimated levels of additive genetic variation in age-dependent plasticity in six species of damselflies sampled from 18 populations along a latitudinal gradient spanning 3600 km. We reared full sib larvae at three temperatures and estimated genetic variances in the height and slope of thermal reaction norms of body size at three points in time during ontogeny using random regression. Our data show that most populations harbor genetic variation in growth rate (reaction norm height) in all ontogenetic stages, but only some populations and ontogenetic stages were found to harbor genetic variation in thermal plasticity (reaction norm slope). Genetic variances in reaction norm height differed among species, while genetic variances in reaction norm slope differed among populations. The slope of the ontogenetic trend in genetic variances of both reaction norm height and slope increased with latitude. We propose that differences in genetic variances reflect temporal and spatial variation in the strength and direction of natural selection on growth trajectories and age-dependent plasticity. Selection on age-dependent plasticity may depend on the interaction between temperature seasonality and time constraints associated with variation in life history traits such as generation length. PMID:25649500
Saturation of number variance in embedded random-matrix ensembles.
Prakash, Ravi; Pandey, Akhilesh
2016-05-01
We study fluctuation properties of embedded random matrix ensembles of noninteracting particles. For ensemble of two noninteracting particle systems, we find that unlike the spectra of classical random matrices, correlation functions are nonstationary. In the locally stationary region of spectra, we study the number variance and the spacing distributions. The spacing distributions follow the Poisson statistics, which is a key behavior of uncorrelated spectra. The number variance varies linearly as in the Poisson case for short correlation lengths but a kind of regularization occurs for large correlation lengths, and the number variance approaches saturation values. These results are known in the study of integrable systems but are being demonstrated for the first time in random matrix theory. We conjecture that the interacting particle cases, which exhibit the characteristics of classical random matrices for short correlation lengths, will also show saturation effects for large correlation lengths.
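The Poisson behaviour of the number variance described above is easy to reproduce numerically. A minimal sketch (not the embedded-ensemble calculation itself) generates an uncorrelated spectrum with unit mean spacing and checks that the number variance grows linearly, Σ²(L) ≈ L:

```python
import numpy as np

rng = np.random.default_rng(0)

# Uncorrelated ("Poisson") spectrum: levels with unit mean spacing.
levels = np.cumsum(rng.exponential(1.0, size=200_000))

def number_variance(levels, L, n_windows=20_000):
    # Drop windows of length L onto the spectrum and record level counts;
    # the variance of those counts is Sigma^2(L).
    starts = rng.uniform(levels[100], levels[-100] - L, size=n_windows)
    counts = np.searchsorted(levels, starts + L) - np.searchsorted(levels, starts)
    return counts.var()

for L in (1.0, 5.0, 10.0):
    print(L, number_variance(levels, L))   # for Poisson statistics, Sigma^2(L) ≈ L
```

For correlated spectra (e.g. classical random-matrix ensembles) the same estimator would instead grow only logarithmically in L, which is what makes the saturation effect in the abstract a diagnostic quantity.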
Monte Carlo variance reduction approaches for non-Boltzmann tallies
Booth, T.E.
1992-12-01
Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed.
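The three approaches in the abstract are specific to non-Boltzmann tallies and are not reproduced here; as generic background, a minimal importance-sampling sketch shows how reweighting keeps a rare-event estimator unbiased while drastically cutting its variance relative to analog sampling (a toy rare event, not a transport tally):

```python
import math
import random

random.seed(1)

# Rare-event example: estimate p = P(X > 3) for X ~ N(0, 1).
# Analog sampling wastes nearly every sample; importance sampling
# draws from the shifted proposal N(3, 1) and reweights each hit
# by the likelihood ratio, keeping the estimator unbiased.

TRUE_P = 1.0 - 0.5 * (1.0 + math.erf(3.0 / math.sqrt(2.0)))  # ≈ 1.35e-3

def importance_estimate(n):
    total = 0.0
    for _ in range(n):
        x = random.gauss(3.0, 1.0)                # biased proposal N(3, 1)
        if x > 3.0:
            total += math.exp(-3.0 * x + 4.5)     # weight = N(0,1)/N(3,1) density ratio
    return total / n

est = importance_estimate(100_000)
print(est, TRUE_P)   # the two values agree closely
```

The unbiasedness argument is the same one the abstract invokes: every biased sampling decision is compensated exactly by a multiplicative weight.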
Impact of Damping Uncertainty on SEA Model Response Variance
NASA Technical Reports Server (NTRS)
Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand
2010-01-01
Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However, these techniques do not account for uncertainties in the system properties. In the present paper, uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.
Theorems on Positive Data: On the Uniqueness of NMF
Laurberg, Hans; Christensen, Mads Græsbøll; Plumbley, Mark D.; Hansen, Lars Kai; Jensen, Søren Holdt
2008-01-01
We investigate the conditions for which nonnegative matrix factorization (NMF) is unique and introduce several theorems which can determine whether the decomposition is in fact unique or not. The theorems are illustrated by several examples showing the use of the theorems and their limitations. We have shown that corruption of a unique NMF matrix by additive noise leads to a noisy estimation of the noise-free unique solution. Finally, we use a stochastic view of NMF to analyze which characterization of the underlying model will result in an NMF with small estimation errors. PMID:18497868
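The noise result can be explored with a minimal NMF implementation. The sketch below uses the standard Lee-Seung multiplicative updates (an assumption on my part; the theorems in the abstract are algorithm-independent) on an exactly factorizable nonnegative matrix:

```python
import numpy as np

rng = np.random.default_rng(42)

def nmf(V, rank, n_iter=1000, eps=1e-9):
    """Lee-Seung multiplicative updates minimizing ||V - WH||_F."""
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # updates preserve nonnegativity
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Exactly rank-2 nonnegative matrix: the reconstruction error should shrink
# toward zero, while additive noise on V would bound it away from zero.
W_true = rng.random((6, 2))
H_true = rng.random((2, 8))
V = W_true @ H_true

W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err)
```

Adding Gaussian noise to V and re-running gives a hands-on version of the abstract's claim that corrupting a unique NMF yields a noisy estimate of the noise-free solution.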
The mean and variance of phylogenetic diversity under rarefaction.
Nipperess, David A; Matsen, Frederick A
2013-06-01
Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact mean and variance to those calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating the mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparison of samples of different depth is required.
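The PD formulae themselves are in the paper; the analogous classical result for species richness (Hurlbert's exact expectation, which the abstract cites as long established) can be sketched and checked against Monte Carlo subsampling on toy counts (not the Toohey Forest data):

```python
import random
from math import comb

# Stem counts per species (toy data).
counts = [50, 20, 10, 5, 3, 1, 1]
N, m = sum(counts), 20

# Exact (Hurlbert) expected species richness in a random subsample of size m:
# each species survives rarefaction unless all m draws miss it.
exact = sum(1 - comb(N - n_i, m) / comb(N, m) for n_i in counts)

# Monte Carlo check by repeated random subsampling without replacement.
random.seed(0)
pool = [sp for sp, n_i in enumerate(counts) for _ in range(n_i)]
reps = 20_000
mc = sum(len(set(random.sample(pool, m))) for _ in range(reps)) / reps

print(exact, mc)   # the two estimates agree closely
```

As the abstract notes for PD, the analytical route is both exact and far cheaper than the subsampling loop, which here needs tens of thousands of draws to match it.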
Enhancing area of review capabilities: Implementing a variance program
De Leon, F.
1995-12-01
The Railroad Commission of Texas (RRC) has regulated oil-field injection well operations since issuing its first injection permit in 1938. The Environmental Protection Agency (EPA) granted the RRC primary enforcement responsibility for the Class II Underground Injection Control (UIC) Program in April 1982. At that time, the added level of groundwater protection afforded by an Area of Review (AOR) on previously permitted Class II wells was not deemed necessary or cost effective. A proposed EPA rule change will require AORs to be performed on all pre-primacy Class II wells unless a variance can be justified. A variance methodology has been developed by researchers at the University of Missouri-Rolla in conjunction with the American Petroleum Institute (API). This paper will outline the RRC approach to implementing the AOR variance methodology. The RRC's UIC program tracks 49,256 pre-primacy wells. Approximately 25,598 of these wells have active permits and will be subject to the proposed AOR requirements. The potential workload of performing AORs or granting variances for this many wells makes the development of a Geographic Information System (GIS) imperative. The RRC has recently completed a digitized map of the entire state and has spotted 890,000 of an estimated 1.2 million wells. Integrating this digital state map into a GIS will allow the RRC to tie its many data systems together. Once in place, this integrated data system will be used to evaluate AOR variances for pre-primacy wells on a field-wide basis. It will also reduce the regulatory cost of permitting by allowing the RRC staff to perform AORs or grant variances for the approximately 3,000 new and amended permit applications requiring AORs each year.
The dynamic Allan Variance IV: characterization of atomic clock anomalies.
Galleani, Lorenzo; Tavella, Patrizia
2015-05-01
The number of applications where precise clocks play a key role is steadily increasing, satellite navigation being the main example. Precise clock anomalies are hence critical events, and their characterization is a fundamental problem. When an anomaly occurs, the clock stability changes with time, and this variation can be characterized with the dynamic Allan variance (DAVAR). We obtain the DAVAR for a series of common clock anomalies, namely, a sinusoidal term, a phase jump, a frequency jump, and a sudden change in the clock noise variance. These anomalies are particularly common in space clocks. Our analytic results clarify how the clock stability changes during these anomalies.
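The DAVAR is the Allan variance evaluated over a sliding analysis window; the underlying Allan deviation can be sketched as follows (a minimal non-overlapping estimator run on simulated white frequency noise, not the authors' analytic DAVAR results):

```python
import numpy as np

rng = np.random.default_rng(7)

def allan_deviation(y, m):
    """Non-overlapping Allan deviation of fractional-frequency data y
    at averaging factor m (tau = m * tau0 for base sampling interval tau0)."""
    n = len(y) // m
    yb = y[: n * m].reshape(n, m).mean(axis=1)   # tau-averaged frequencies
    return np.sqrt(0.5 * np.mean(np.diff(yb) ** 2))

# White frequency noise: the Allan deviation should fall as tau**(-1/2).
# An anomaly such as a frequency jump or added sinusoid would instead
# show up as a bump or oscillation in this curve.
y = rng.normal(0.0, 1.0, 100_000)
for m in (1, 4, 16, 64):
    print(m, allan_deviation(y, m))
```

Recomputing this statistic over a window that slides along the data stream gives a basic dynamic (time-varying) stability estimate in the spirit of the DAVAR.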
Entropy, Fisher Information and Variance with Frost-Musulin Potential
NASA Astrophysics Data System (ADS)
Idiodi, J. O. A.; Onate, C. A.
2016-09-01
This study presents the Shannon and Renyi information entropy for both position and momentum space and the Fisher information for the position-dependent mass Schrödinger equation with the Frost-Musulin potential. The analysis of the quantum mechanical probability has been obtained via the Fisher information. The variance information of this potential is equally computed. This controls both the chemical and physical properties of some molecular systems. We have observed the behaviour of the Shannon entropy, Renyi entropy, Fisher information and variance with the quantum number n, respectively.
Studying Variance in the Galactic Ultra-compact Binary Population
NASA Astrophysics Data System (ADS)
Larson, Shane; Breivik, Katelyn
2017-01-01
In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations on week-long timescales, thus allowing a full exploration of the variance associated with a binary stellar evolution model.
The principle of stationary variance in quantum field theory
NASA Astrophysics Data System (ADS)
Siringo, Fabio
2014-02-01
The principle of stationary variance is advocated as a viable variational approach to quantum field theory (QFT). The method is based on the principle that the variance of the energy should be at its minimum when the state of a quantum system reaches its best approximation to an eigenstate. While not especially popular in quantum mechanics (QM), the method is shown to be valuable in QFT, and three examples are given in very different areas, ranging from the Heisenberg model of antiferromagnetism (AF) to quantum electrodynamics (QED) and gauge theories.
The Probabilities of Unique Events
2012-08-30
probabilities into quantum mechanics, and some psychologists have argued that they have a role to play in accounting for errors in judgment [30]. But, in...Discussion The mechanisms underlying naive estimates of the probabilities of unique events are largely inaccessible to consciousness, but they...Can quantum probability provide a new direction for cognitive modeling? Behavioral and Brain Sciences (in press). 31. Paolacci G, Chandler J
Visscher, Peter M; Goddard, Michael E
2015-01-01
Heritability is a population parameter of importance in evolution, plant and animal breeding, and human medical genetics. It can be estimated using pedigree designs and, more recently, using relationships estimated from markers. We derive the sampling variance of the estimate of heritability for a wide range of experimental designs, assuming that estimation is by maximum likelihood and that the resemblance between relatives is solely due to additive genetic variation. We show that well-known results for balanced designs are special cases of a more general unified framework. For pedigree designs, the sampling variance is inversely proportional to the variance of relationship in the pedigree and it is proportional to 1/N, whereas for population samples it is approximately proportional to 1/N^2, where N is the sample size. Variation in relatedness is a key parameter in the quantification of the sampling variance of heritability. Consequently, the sampling variance is high for populations with large recent effective population size (e.g., humans) because this causes low variation in relationship. However, even using human population samples, low sampling variance is possible with high N.
Unique children in unique places: innovative pediatric community clinical.
Harrison, Suzanne; Laforest, Marie-Eve
2011-12-01
Pediatric nursing is a specialization that requires a particular set of skills and abilities. Most nurses seldom get the chance to interact with families who have children living with exceptionalities unless they choose to work in tertiary settings dealing exclusively with children. This article explores how one school of nursing in Canada offers its students two unique learning opportunities where they get the chance to work with children who have special needs in an interdisciplinary community-based setting. Shared statements from parents and students highlight the benefits to all those involved.
Analysis of T-RFLP data using analysis of variance and ordination methods: a comparative study.
Culman, S W; Gauch, H G; Blackwood, C B; Thies, J E
2008-09-01
The analysis of T-RFLP data has developed considerably over the last decade, but there remains a lack of consensus about which statistical analyses offer the best means for finding trends in these data. In this study, we empirically tested and theoretically compared ten diverse T-RFLP datasets derived from soil microbial communities using the more common ordination methods in the literature: principal component analysis (PCA), nonmetric multidimensional scaling (NMS) with Sørensen, Jaccard and Euclidean distance measures, correspondence analysis (CA), detrended correspondence analysis (DCA) and a technique new to T-RFLP data analysis, the Additive Main Effects and Multiplicative Interaction (AMMI) model. Our objectives were i) to determine the distribution of variation in T-RFLP datasets using analysis of variance (ANOVA), ii) to determine the more robust and informative multivariate ordination methods for analyzing T-RFLP data, and iii) to compare the methods based on theoretical considerations. For the 10 datasets examined in this study, ANOVA revealed that the variation from Environment main effects was always small, variation from T-RFs main effects was large, and variation from T-RFxEnvironment (TxE) interactions was intermediate. Larger variation due to TxE indicated larger differences in microbial communities between environments/treatments and thus demonstrated the utility of ANOVA to provide an objective assessment of community dissimilarity. The comparison of statistical methods typically yielded similar empirical results. AMMI, T-RF-centered PCA, and DCA were the most robust methods in terms of producing ordinations that consistently reached a consensus with other methods. In datasets with high sample heterogeneity, NMS analyses with Sørensen and Jaccard distance were the most sensitive for recovery of complex gradients. The theoretical comparison showed that some methods hold distinct advantages for T-RFLP analysis, such as estimations of variation
Age-specific patterns of genetic variance in Drosophila melanogaster. I. Mortality
Promislow, D.E.L.; Tatar, M.; Curtsinger, J.W.
1996-06-01
Peter Medawar proposed that senescence arises from an age-related decline in the force of selection, which allows late-acting deleterious mutations to accumulate. Subsequent workers have suggested that mutation accumulation could produce an age-related increase in additive genetic variance (V_A) for fitness traits, as recently found in Drosophila melanogaster. Here we report results from a genetic analysis of mortality in 65,134 D. melanogaster. Additive genetic variance for female mortality rates increases from 0.007 in the first week of life to 0.325 by the third week, and then declines to 0.002 by the seventh week. Males show a similar pattern, though total variance is lower than in females. In contrast to a predicted divergence in mortality curves, mortality curves of different genotypes are roughly parallel. Using a three-parameter model, we find significant V_A for the slope and constant term of the curve describing age-specific mortality rates, and also for the rate at which mortality decelerates late in life. These results fail to support a prediction derived from Medawar's "mutation accumulation" theory for the evolution of senescence. However, our results could be consistent with alternative interpretations of evolutionary models of aging. 65 refs., 2 figs., 2 tabs.
Nurses in medical education: A unique opportunity.
Barnum, Trevor J; Thome, Lindsay; Even, Elizabeth
2016-11-13
Medical students are expected to learn certain procedural skills in addition to clinical skills, such as assessment and decision making. There is much literature that shows proficiency in procedural skills translated to improved outcomes and cost-saving. Given the time constraints placed by increasing clinical demands, physicians have less time to work with students in teaching technical skills. There is a unique opportunity to utilize nurses in clinical clerkships to teach procedural skills. A dedicated nurse educator can provide a consistent curriculum, work with learners to achieve proficiency, and provide measurable outcomes. Future research should explore the role played by nurses in medical education and the comparison of instructional effectiveness.
Further results related to variance past lifetime class & associated orderings and their properties
NASA Astrophysics Data System (ADS)
Mahdy, Mervat
2016-11-01
If the random variable T denotes the lifetime of a unit, then the random variable T(t) = [t - T | T ≤ t], for a fixed t > 0, is known as the past lifetime. In this study, we present some new properties of the mean and variance for past lifetime classes (orderings). In addition, we consider an (n - r + 1)-out-of-n system with identical components, where it is assumed that the lifetimes of the components are i.i.d. We assume that the system fails before time x, x > 0. Under these conditions, we are interested in studying the variance of the time elapsed since the failure of the components. Several properties of this function are studied and an example is provided. Finally, some applications in economic theory are described with real data.
The application of analysis of variance (ANOVA) to different experimental designs in optometry.
Armstrong, R A; Eperjesi, F; Gilmartin, B
2002-05-01
Analysis of variance (ANOVA) is the most efficient method available for the analysis of experimental data. Analysis of variance is a method of considerable complexity and subtlety, with many different variations, each of which applies in a particular experimental context. Hence, it is possible to apply the wrong type of ANOVA to data and, therefore, to draw an erroneous conclusion from an experiment. This article reviews the types of ANOVA most likely to arise in clinical experiments in optometry including the one-way ANOVA ('fixed' and 'random effect' models), two-way ANOVA in randomised blocks, three-way ANOVA, and factorial experimental designs (including the varieties known as 'split-plot' and 'repeated measures'). For each ANOVA, the appropriate experimental design is described, a statistical model is formulated, and the advantages and limitations of each type of design discussed. In addition, the problems of non-conformity to the statistical model and determination of the number of replications are considered.
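The simplest case reviewed above, the one-way fixed-effect ANOVA, reduces to a between-groups/within-groups decomposition that can be computed from first principles (toy data, two treatment groups of three replicates each):

```python
# One-way fixed-effects ANOVA computed from first principles.
# Illustrative data: two treatment groups of three replicates.

groups = [[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]]

k = len(groups)
n_total = sum(len(g) for g in groups)
grand = sum(sum(g) for g in groups) / n_total

# Partition total sum of squares into between- and within-group parts.
ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

df_between, df_within = k - 1, n_total - k
F = (ss_between / df_between) / (ss_within / df_within)
print(F)   # 6.0 for this data
```

The more elaborate designs in the review (randomised blocks, split-plot, repeated measures) differ in how the within-group term is further partitioned and which mean square forms the denominator of F, which is exactly why applying the wrong variant draws the wrong conclusion.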
de Decker, Arnaud; Lee, John Aldo; Verleysen, Michel
2009-01-01
Denoising is a key step in the processing of medical images. It aims at improving both the interpretability and visual aspect of the images. Yet, designing a robust and efficient denoising tool remains an unsolved challenge, and a specific issue concerns the noise model. Many filters typically assume that noise is additive and Gaussian, with uniform variance. In contrast, noise in medical images often has more complex properties. This paper considers images with Poissonian noise and the patch-based bilateral filters, that is, filters that involve a tonal kernel and pairwise comparisons between shifted blocks of the images. The main aim is then to integrate two variance stabilizing transformations that allow the filters to work with Gaussianized noise. The performances of these filters are compared to those of the classical bilateral filter with the same transformations. The experiments include an artificial benchmark as well as a positron emission tomography image.
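The variance-stabilizing step can be sketched with the Anscombe transform, a standard choice for Poissonian noise (the abstract does not name the two transformations used in the paper, so the Anscombe form here is an assumption):

```python
import numpy as np

rng = np.random.default_rng(3)

def anscombe(x):
    # Anscombe variance-stabilizing transform for Poisson data:
    # the output is approximately Gaussian with unit variance,
    # so Gaussian-noise filters can be applied after it.
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

for lam in (5, 20, 100):
    x = rng.poisson(lam, size=200_000)
    # Raw variance grows with the mean (≈ lam); transformed variance ≈ 1.
    print(lam, x.var(), anscombe(x).var())
```

In a denoising pipeline the filter runs in the transformed domain, and an (unbiased) inverse transform maps the result back to intensities.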
Quantitative Genetic Analysis of Temperature Regulation in MUS MUSCULUS. I. Partitioning of Variance
Lacy, Robert C.; Lynch, Carol Becker
1979-01-01
Heritabilities (from parent-offspring regression) and intraclass correlations of full sibs for a variety of traits were estimated from 225 litters of a heterogeneous stock (HS/Ibg) of laboratory mice. Initial variance partitioning suggested different adaptive functions for physiological, morphological and behavioral adjustments with respect to their thermoregulatory significance. Metabolic heat-production mechanisms appear to have reached their genetic limits, with little additive genetic variance remaining. This study provided no genetic evidence that body size has a close directional association with fitness in cold environments, since heritability estimates for weight gain and adult weight were similar and high, whether or not the animals were exposed to cold. Behavioral heat conservation mechanisms also displayed considerable amounts of genetic variability. However, due to strong evidence from numerous other studies that behavior serves an important adaptive role for temperature regulation in small mammals, we suggest that fluctuating selection pressures may have acted to maintain heritable variation in these traits. PMID:17248909
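The parent-offspring regression used above to estimate heritabilities can be sketched with a toy additive simulation (simulated values, not the HS/Ibg data): under a purely additive model, the slope of offspring phenotype on midparent phenotype estimates the narrow-sense heritability h².

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy additive model with Va = Ve = 1, so h^2 = Va / (Va + Ve) = 0.5.
n, va, ve = 50_000, 1.0, 1.0

a_sire = rng.normal(0, np.sqrt(va), n)
a_dam = rng.normal(0, np.sqrt(va), n)
midparent = 0.5 * (a_sire + rng.normal(0, np.sqrt(ve), n)
                   + a_dam + rng.normal(0, np.sqrt(ve), n))

# Offspring breeding value = parental average + Mendelian sampling term.
a_off = 0.5 * (a_sire + a_dam) + rng.normal(0, np.sqrt(va / 2.0), n)
offspring = a_off + rng.normal(0, np.sqrt(ve), n)

slope = np.cov(midparent, offspring)[0, 1] / np.var(midparent, ddof=1)
print(slope)   # ≈ 0.5, i.e. the simulated h^2
```

A full-sib intraclass correlation on the same simulated families would bound the combined additive, dominance, and common-environment contributions, which is why the study uses both designs.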
10 CFR 52.93 - Exemptions and variances.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 2 2010-01-01 2010-01-01 false Exemptions and variances. 52.93 Section 52.93 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS... referencing a nuclear power reactor manufactured under a manufacturing license issued under subpart F of...
10 CFR 52.93 - Exemptions and variances.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 2 2011-01-01 2011-01-01 false Exemptions and variances. 52.93 Section 52.93 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS... referencing a nuclear power reactor manufactured under a manufacturing license issued under subpart F of...
10 CFR 52.93 - Exemptions and variances.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 2 2014-01-01 2014-01-01 false Exemptions and variances. 52.93 Section 52.93 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS... referencing a nuclear power reactor manufactured under a manufacturing license issued under subpart F of...
10 CFR 52.93 - Exemptions and variances.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 2 2013-01-01 2013-01-01 false Exemptions and variances. 52.93 Section 52.93 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS... referencing a nuclear power reactor manufactured under a manufacturing license issued under subpart F of...
10 CFR 52.93 - Exemptions and variances.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 2 2012-01-01 2012-01-01 false Exemptions and variances. 52.93 Section 52.93 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS... referencing a nuclear power reactor manufactured under a manufacturing license issued under subpart F of...
Variance Components for NLS: Partitioning the Design Effect.
ERIC Educational Resources Information Center
Folsom, Ralph E., Jr.
This memorandum demonstrates a variance components methodology for partitioning the overall design effect (D) for a ratio mean into stratification (S), unequal weighting (W), and clustering (C) effects, so that D = WSC. In section 2, a sample selection scheme modeled after the National Longitudinal Study of the High School Class of 1972 (NLS)…
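The D = WSC decomposition can be illustrated numerically. Kish's formula gives the unequal-weighting effect W, and the classic 1 + (m − 1)ρ term gives the clustering effect C; the weights, cluster size, intraclass correlation, and stratification gain S below are all hypothetical, not values from the memorandum:

```python
import numpy as np

def weighting_effect(w):
    """Kish's unequal-weighting design effect: n * sum(w^2) / (sum w)^2."""
    w = np.asarray(w, dtype=float)
    return len(w) * (w ** 2).sum() / w.sum() ** 2

def clustering_effect(m, rho):
    """Clustering effect for average cluster size m and intraclass correlation rho."""
    return 1.0 + (m - 1.0) * rho

w = np.array([1.0, 1.0, 2.0, 2.0])    # hypothetical sampling weights
W = weighting_effect(w)                # 4 * 10 / 36 = 1.111...
C = clustering_effect(m=5, rho=0.05)   # 1 + 4 * 0.05 = 1.2
S = 0.9                                # hypothetical stratification gain (< 1)
D = W * S * C                          # overall design effect, D = WSC
print(W, C, D)
```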
Allan Variance Calculation for Nonuniformly Spaced Input Data
2015-01-01
Approved for public release; distribution is unlimited. The Allan Variance (AV) characterizes the temporal randomness in sensor output data streams at various time scales. The conventional formula for calculating the AV assumes that the data… presents a modified approach to AV calculation, which accommodates nonuniformly spaced time samples. The basic concept of the modified approach is
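The report modifies the conventional, uniformly-sampled AV formula; that baseline can be sketched as follows (the non-overlapping estimator and the white-noise check are illustrative, not the report's modified approach):

```python
import numpy as np

def allan_variance(y, m=1):
    """Non-overlapping Allan variance of uniformly spaced measurements y
    at cluster size m: half the mean squared difference of cluster averages."""
    y = np.asarray(y, dtype=float)
    n = len(y) // m
    means = y[: n * m].reshape(n, m).mean(axis=1)  # averages over clusters of m
    d = np.diff(means)
    return 0.5 * (d ** 2).mean()

rng = np.random.default_rng(1)
white = rng.standard_normal(4096)
# For white noise the AV at cluster size m scales like 1/m.
av1, av4 = allan_variance(white, 1), allan_variance(white, 4)
print(av1 / av4)  # roughly 4
```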
Variance in Math Achievement Attributable to Visual Cognitive Constructs
ERIC Educational Resources Information Center
Oehlert, Jeremy J.
2012-01-01
Previous research has reported positive correlations between math achievement and the cognitive constructs of spatial visualization, working memory, and general intelligence; however, no single study has assessed variance in math achievement attributable to all three constructs, examined in combination. The current study fills this gap in the…
Temporal Relation Extraction in Outcome Variances of Clinical Pathways.
Yamashita, Takanori; Wakata, Yoshifumi; Hamai, Satoshi; Nakashima, Yasuharu; Iwamoto, Yukihide; Franagan, Brendan; Nakashima, Naoki; Hirokawa, Sachio
2015-01-01
Recently, the clinical pathway has progressed with digitalization and the analysis of activity. There are many previous studies of the clinical pathway, but few feed directly into medical practice. We constructed a mind map system that applies a spanning tree. This system can visualize temporal relations in outcome variances and indicate outcomes that affect long-term hospitalization.
40 CFR 142.43 - Disposition of a variance request.
Code of Federal Regulations, 2010 CFR
2010-07-01
....43 Section 142.43 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the... issue a denial. Such notice shall include a statement of reasons for the proposed denial, and...
Numbers Of Degrees Of Freedom Of Allan-Variance Estimators
NASA Technical Reports Server (NTRS)
Greenhall, Charles A.
1992-01-01
Report discusses formulas for estimating Allan variances. Presents algorithms for closed-form approximations of the numbers of degrees of freedom characterizing results obtained when various estimators are applied to five power-law components of the classical mathematical model of clock noise.
Code of Federal Regulations, 2013 CFR
2013-07-01
... PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS Scope... Public Contracts Act and the Occupational Safety and Health Act of 1970. ... 41 Public Contracts and Property Management 1 2013-07-01 2013-07-01 false Variances....
Code of Federal Regulations, 2014 CFR
2014-07-01
... PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS Scope... Public Contracts Act and the Occupational Safety and Health Act of 1970. ... 41 Public Contracts and Property Management 1 2014-07-01 2014-07-01 false Variances....
Code of Federal Regulations, 2010 CFR
2010-07-01
... PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS Scope... Public Contracts Act and the Occupational Safety and Health Act of 1970. ... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Variances....
Code of Federal Regulations, 2012 CFR
2012-07-01
... PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS Scope... Public Contracts Act and the Occupational Safety and Health Act of 1970. ... 41 Public Contracts and Property Management 1 2012-07-01 2009-07-01 true Variances....
The Variance of Intraclass Correlations in Three and Four Level
ERIC Educational Resources Information Center
Hedges, Larry V.; Hedberg, Eric C.; Kuyper, Arend M.
2012-01-01
Intraclass correlations are used to summarize the variance decomposition in populations with multilevel hierarchical structure. There has recently been considerable interest in estimating intraclass correlations from surveys or designed experiments to provide design parameters for planning future large-scale randomized experiments. The large…
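For a two-level hierarchy, the intraclass correlation is the between-cluster share of total variance and can be estimated from one-way ANOVA mean squares; the balanced simulated data below (true ICC = 1/(1 + 4) = 0.2) is a hypothetical illustration, not the estimators of the article:

```python
import numpy as np

def intraclass_correlation(groups):
    """One-way random-effects ICC from between/within mean squares (balanced design)."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k, n = len(groups), len(groups[0])
    grand = np.mean([g.mean() for g in groups])
    ms_between = n * sum((g.mean() - grand) ** 2 for g in groups) / (k - 1)
    ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (k * (n - 1))
    sigma2_between = max((ms_between - ms_within) / n, 0.0)
    return sigma2_between / (sigma2_between + ms_within)

rng = np.random.default_rng(2)
# 200 clusters of 10: between-cluster variance 1, within-cluster variance 4.
groups = [rng.normal(rng.normal(0.0, 1.0), 2.0, size=10) for _ in range(200)]
print(intraclass_correlation(groups))
```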
Genetic Variance in the SES-IQ Correlation.
ERIC Educational Resources Information Center
Eckland, Bruce K.
1979-01-01
Discusses questions dealing with genetic aspects of the correlation between IQ and socioeconomic status (SES). Questions include: How does assortative mating affect the genetic variance of IQ? Is the relationship between an individual's IQ and adult SES a causal one? And how can IQ research improve schools and schooling? (Author/DB)
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2013 CFR
2013-07-01
....11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2012 CFR
2012-07-01
....11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
....11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
....11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2014 CFR
2014-07-01
....11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...
Infinite variance in fermion quantum Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Shi, Hao; Zhang, Shiwei
2016-03-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
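The failure mode described above can be illustrated outside of QMC (this demo is not the authors' algorithm): an estimator with a finite mean but infinite variance makes the naive error bar, sample standard deviation divided by the square root of n, meaningless, since it never converges to a stable value.

```python
import numpy as np

rng = np.random.default_rng(3)
a = 1.5  # Pareto shape: for 1 < a < 2 the mean is finite but the variance is infinite
samples = rng.pareto(a, size=100_000) + 1.0  # support [1, inf), mean a / (a - 1) = 3

# The reported error bar does not shrink reliably, because the sample standard
# deviation keeps jumping whenever a large-tail event is drawn.
for n in (100, 10_000, 100_000):
    chunk = samples[:n]
    err_bar = chunk.std(ddof=1) / np.sqrt(n)
    print(f"n={n}: mean={chunk.mean():.2f} +/- {err_bar:.3f}")
```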
21 CFR 821.2 - Exemptions and variances.
Code of Federal Regulations, 2011 CFR
2011-04-01
... and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE TRACKING REQUIREMENTS General Provisions § 821.2 Exemptions and variances. (a) A... following: (1) The name of the device and device class and representative labeling showing the intended...
21 CFR 821.2 - Exemptions and variances.
Code of Federal Regulations, 2014 CFR
2014-04-01
... and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE TRACKING REQUIREMENTS General Provisions § 821.2 Exemptions and variances. (a) A... following: (1) The name of the device and device class and representative labeling showing the intended...
21 CFR 821.2 - Exemptions and variances.
Code of Federal Regulations, 2010 CFR
2010-04-01
... and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE TRACKING REQUIREMENTS General Provisions § 821.2 Exemptions and variances. (a) A... following: (1) The name of the device and device class and representative labeling showing the intended...
Dominance, Information, and Hierarchical Scaling of Variance Space.
ERIC Educational Resources Information Center
Ceurvorst, Robert W.; Krus, David J.
1979-01-01
A method for computation of dominance relations and for construction of their corresponding hierarchical structures is presented. The link between dominance and variance allows integration of the mathematical theory of information with least squares statistical procedures without recourse to logarithmic transformations of the data. (Author/CTM)
Explaining Common Variance Shared by Early Numeracy and Literacy
ERIC Educational Resources Information Center
Davidse, N. J.; De Jong, M. T.; Bus, A. G.
2014-01-01
How can it be explained that early literacy and numeracy share variance? We specifically tested whether the correlation between four early literacy skills (rhyming, letter knowledge, emergent writing, and orthographic knowledge) and simple sums (non-symbolic and story condition) reduced after taking into account preschool attention control,…
The Threat of Common Method Variance Bias to Theory Building
ERIC Educational Resources Information Center
Reio, Thomas G., Jr.
2010-01-01
The need for more theory building scholarship remains one of the pressing issues in the field of HRD. Researchers can employ quantitative, qualitative, and/or mixed methods to support vital theory-building efforts, understanding however that each approach has its limitations. The purpose of this article is to explore common method variance bias as…
Analysis of Variance: What Is Your Statistical Software Actually Doing?
ERIC Educational Resources Information Center
Li, Jian; Lomax, Richard G.
2011-01-01
Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs, mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…
40 CFR 52.1390 - Missoula variance provision.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 4 2014-07-01 2014-07-01 false Missoula variance provision. 52.1390 Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS... from any requirement of an applicable implementation plan with respect to a stationary source....
Comparison of Turbulent Thermal Diffusivity and Scalar Variance Models
NASA Technical Reports Server (NTRS)
Yoder, Dennis A.
2016-01-01
In this study, several variable turbulent Prandtl number formulations are examined for boundary layers, pipe flow, and axisymmetric jets. The model formulations include simple algebraic relations between the thermal diffusivity and turbulent viscosity as well as more complex models that solve transport equations for the thermal variance and its dissipation rate. Results are compared with available data for wall heat transfer and profile measurements of mean temperature, the root-mean-square (RMS) fluctuating temperature, turbulent heat flux and turbulent Prandtl number. For wall-bounded problems, the algebraic models are found to best predict the rise in turbulent Prandtl number near the wall as well as the log-layer temperature profile, while the thermal variance models provide a good representation of the RMS temperature fluctuations. In jet flows, the algebraic models provide no benefit over a constant turbulent Prandtl number approach. Application of the thermal variance models finds that some significantly overpredict the temperature variance in the plume and most underpredict the thermal growth rate of the jet. The models yield very similar fluctuating temperature intensities in jets from straight pipes and smooth contraction nozzles, in contrast to data that indicate the latter should have noticeably higher values. For the particular low subsonic heated jet cases examined, changes in the turbulent Prandtl number had no effect on the centerline velocity decay.
Intuitive Analysis of Variance-- A Formative Assessment Approach
ERIC Educational Resources Information Center
Trumpower, David
2013-01-01
This article describes an assessment activity that can both show students how much they intuitively understand about statistics and alert them to common misunderstandings. How the activity can be used formatively to help improve students' conceptual understanding of analysis of variance is discussed. (Contains 1 figure and 1 table.)
Unbiased Estimates of Variance Components with Bootstrap Procedures
ERIC Educational Resources Information Center
Brennan, Robert L.
2007-01-01
This article provides general procedures for obtaining unbiased estimates of variance components for any random-model balanced design under any bootstrap sampling plan, with the focus on designs of the type typically used in generalizability theory. The results reported here are particularly helpful when the bootstrap is used to estimate standard…
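The simplest instance of the bias that such procedures correct can be shown for the variance of a sample mean: the naive bootstrap estimate is too small by a factor of (n − 1)/n. This sketch is an illustration of that one bias, not the article's general procedures for balanced designs:

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.standard_normal(50)
n, B = len(x), 20_000

# Bootstrap the sample mean: resample with replacement B times.
boot_means = rng.choice(x, size=(B, n), replace=True).mean(axis=1)
naive = boot_means.var()             # estimates plug-in variance / n, biased low
corrected = naive * n / (n - 1)      # unbiasing factor for the mean's variance
theory = x.var(ddof=1) / n           # usual unbiased estimate s^2 / n
print(corrected / theory)            # close to 1
```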
40 CFR 124.64 - Appeals of variances.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 124.64 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS PROCEDURES...) When a State issues a permit on which EPA has made a variance decision, separate appeals of the State... issues in both proceedings, the Regional Administrator will decide, in consultation with State...
Exploratory Multivariate Analysis of Variance: Contrasts and Variables.
ERIC Educational Resources Information Center
Barcikowski, Robert S.; Elliott, Ronald S.
The contribution of individual variables to overall multivariate significance in a multivariate analysis of variance (MANOVA) is investigated using a combination of canonical discriminant analysis and Roy-Bose simultaneous confidence intervals. Difficulties with this procedure are discussed, and its advantages are illustrated using examples based…
20 CFR 901.40 - Proof; variance; amendment of pleadings.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Proof; variance; amendment of pleadings. 901.40 Section 901.40 Employees' Benefits JOINT BOARD FOR THE ENROLLMENT OF ACTUARIES REGULATIONS GOVERNING THE PERFORMANCE OF ACTUARIAL SERVICES UNDER THE EMPLOYEE RETIREMENT INCOME SECURITY ACT OF...
40 CFR 142.43 - Disposition of a variance request.
Code of Federal Regulations, 2011 CFR
2011-07-01
....43 Section 142.43 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the... issue a denial. Such notice shall include a statement of reasons for the proposed denial, and...
36 CFR 30.5 - Variances, exceptions, and use permits.
Code of Federal Regulations, 2010 CFR
2010-07-01
... OF THE INTERIOR WHISKEYTOWN-SHASTA-TRINITY NATIONAL RECREATION AREA: ZONING STANDARDS FOR WHISKEYTOWN UNIT § 30.5 Variances, exceptions, and use permits. (a) Zoning ordinances or amendments thereto, for the zoning districts comprising the Whiskeytown Unit of the Whiskeytown-Shasta-Trinity...
44 CFR 60.6 - Variances and exceptions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... pattern inconsistent with the objectives of sound flood plain management, the Federal Insurance... (i) a showing of good and sufficient cause, (ii) a determination that failure to grant the variance... public expense, create nuisances, cause fraud on or victimization of the public, or conflict...
44 CFR 60.6 - Variances and exceptions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... pattern inconsistent with the objectives of sound flood plain management, the Federal Insurance... (i) a showing of good and sufficient cause, (ii) a determination that failure to grant the variance... public expense, create nuisances, cause fraud on or victimization of the public, or conflict...
29 CFR 1905.11 - Variances and other relief under section 6(d).
Code of Federal Regulations, 2010 CFR
2010-07-01
... ADMINISTRATION, DEPARTMENT OF LABOR RULES OF PRACTICE FOR VARIANCES, LIMITATIONS, VARIATIONS, TOLERANCES, AND..., Limitations, Variations, Tolerances, Exemptions and Other Relief § 1905.11 Variances and other relief...
Mucormycosis in India: unique features.
Chakrabarti, Arunaloke; Singh, Rachna
2014-12-01
Mucormycosis remains a devastating invasive fungal infection, with high mortality rates even after active management. The disease has been reported at an alarming frequency from India over the past decades. Indian mucormycosis has certain unique features. Rhino-orbito-cerebral presentation associated with uncontrolled diabetes is the predominant characteristic. Isolated renal mucormycosis has emerged as a new clinical entity. Apophysomyces elegans and Rhizopus homothallicus are emerging species in this region, and uncommon agents such as Mucor irregularis and Thamnostylum lucknowense are also being reported. This review focuses on these distinct features of mucormycosis observed in India.
Lithium nephropathy: unique sonographic findings.
Di Salvo, Donald N; Park, Joseph; Laing, Faye C
2012-04-01
This case series describes a unique sonographic appearance consisting of numerous microcysts and punctate echogenic foci seen on renal sonograms of 10 adult patients receiving chronic lithium therapy. Clinically, chronic renal insufficiency was present in 6 and nephrogenic diabetes insipidus in 2. Sonography showed numerous microcysts and punctate echogenic foci. Computed tomography in 5 patients confirmed microcysts and microcalcifications, which were fewer in number than on sonography. Magnetic resonance imaging in 2 patients confirmed microcysts in each case. Renal biopsy in 1 patient showed chronic interstitial nephritis, microcysts, and tubular dilatation. The diagnosis of lithium nephropathy should be considered when sonography shows these findings.
A unique solar marking construct.
Sofaer, A; Zinser, V; Sinclair, R M
1979-10-19
An assembly of stone slabs on an isolated butte in New Mexico collimates sunlight onto spiral petroglyphs carved on a cliff face. The light illuminates the spirals in a changing pattern throughout the year and marks the solstices and equinoxes with particular images. The assembly can also be used to observe lunar phenomena. It is unique in archeoastronomy in utilizing the changing height of the midday sun throughout the year rather than its rising and setting points. The construct appears to be the result of deliberate work of the Anasazi Indians, the builders of the great pueblos in the area.
Gravity Wave Variances and Propagation Derived from AIRS Radiances
NASA Technical Reports Server (NTRS)
Gong, Jie; Wu, Dong L.; Eckermann, S. D.
2012-01-01
As the first gravity wave (GW) climatology study using nadir-viewing infrared sounders, 50 Atmospheric Infrared Sounder (AIRS) radiance channels are selected to estimate GW variances at pressure levels between 2 and 100 hPa. The GW variance for each scan in the cross-track direction is derived from radiance perturbations in the scan, independently of adjacent scans along the orbit. Since the scanning swaths are perpendicular to the satellite orbits, which are inclined meridionally at most latitudes, the zonal component of GW propagation can be inferred by differencing the variances derived between the westernmost and easternmost viewing angles. Consistent with previous GW studies using various satellite instruments, monthly mean AIRS variance shows large enhancements over meridionally oriented mountain ranges as well as some islands at winter hemisphere high latitudes. Enhanced wave activities are also found above tropical deep convective regions. GWs prefer to propagate westward above mountain ranges, and eastward above deep convection. AIRS's 90 fields of view (FOVs), ranging from +48 deg. to -48 deg. off nadir, can detect large-amplitude GWs with a phase velocity propagating preferentially at steep angles (e.g., those from orographic and convective sources). The annual cycle dominates the GW variances and the preferred propagation directions for all latitudes. Indication of a weak two-year variation in the tropics is found, which is presumably related to the Quasi-biennial oscillation (QBO). AIRS geometry makes its out-tracks capable of detecting GWs with vertical wavelengths substantially shorter than the thickness of instrument weighting functions. The novel discovery of AIRS's capability of observing shallow inertia GWs will expand the potential of satellite GW remote sensing and provide further constraints on the GW drag parameterization schemes in the general circulation models (GCMs).
Hydrograph variances over different timescales in hydropower production networks
NASA Astrophysics Data System (ADS)
Zmijewski, Nicholas; Wörman, Anders
2016-08-01
The operation of water reservoirs involves a spectrum of timescales based on the distribution of stream flow travel times between reservoirs, as well as the technical, environmental, and social constraints imposed on the operation. In this research, a hydrodynamically based description of the flow between hydropower stations was implemented to study the relative importance of wave diffusion on the spectrum of hydrograph variance in a regulated watershed. Using spectral decomposition of the effluence hydrograph of a watershed, an exact expression of the variance in the outflow response was derived, as a function of the trends of hydraulic and geomorphologic dispersion and management of production and reservoirs. We show that the power spectra of involved time-series follow nearly fractal patterns, which facilitates examination of the relative importance of wave diffusion and possible changes in production demand on the outflow spectrum. The exact spectral solution can also identify statistical bounds of future demand patterns due to limitations in storage capacity. The impact of the hydraulic description of the stream flow on the reservoir discharge was examined for a given power demand in River Dalälven, Sweden, as a function of a stream flow Peclet number. The regulation of hydropower production on the River Dalälven generally increased the short-term variance in the effluence hydrograph, whereas wave diffusion decreased the short-term variance over periods of <1 week, depending on the Peclet number (Pe) of the stream reach. This implies that flow variance becomes more erratic (closer to white noise) as a result of current production objectives.
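The basis of such a spectral decomposition is Parseval's relation: the variance of a zero-mean series equals the integral of its power spectrum, so variance can be attributed to timescale bands. A minimal check with a random-walk stand-in for a hydrograph (an assumption for illustration, not the paper's hydrodynamic model):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(1024).cumsum()  # random-walk stand-in for a discharge series
x = x - x.mean()                        # remove the mean so variance = mean square

# Parseval: sum(|FFT(x)|^2) = N * sum(x^2), so the normalized power spectrum
# sums exactly to the series variance, which can then be split across frequency bands.
spectrum = np.abs(np.fft.fft(x)) ** 2 / len(x) ** 2
print(np.isclose(x.var(), spectrum.sum()))  # True
```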
Variance in the reproductive success of dominant male mountain gorillas.
Robbins, Andrew M; Gray, Maryke; Uwingeli, Prosper; Mburanumwe, Innocent; Kagoda, Edwin; Robbins, Martha M
2014-10-01
Using 30 years of demographic data from 15 groups, this study estimates how harem size, female fertility, and offspring survival may contribute to variance in the siring rates of dominant male mountain gorillas throughout the Virunga Volcano Region. As predicted for polygynous species, differences in harem size were the greatest source of variance in the siring rate, whereas differences in female fertility and offspring survival were relatively minor. Harem size was positively correlated with offspring survival, even after removing all known and suspected cases of infanticide, so the correlation does not seem to reflect differences in the ability of males to protect their offspring. Harem size was not significantly correlated with female fertility, which is consistent with the hypothesis that mountain gorillas have minimal feeding competition. Harem size, offspring survival, and siring rates were not significantly correlated with the proportion of dominant tenures that occurred in multimale groups versus one-male groups; even though infanticide is less likely when those tenures end in multimale groups than one-male groups. In contrast with the relatively small contribution of offspring survival to variance in the siring rates of this study, offspring survival is a major source of variance in the male reproductive success of western gorillas, which have greater predation risks and significantly higher rates of infanticide. If differences in offspring protection are less important among male mountain gorillas than western gorillas, then the relative importance of other factors may be greater for mountain gorillas. Thus, our study illustrates how variance in male reproductive success and its components can differ between closely related species.
ERIC Educational Resources Information Center
Starns, Jeffrey J.; Rotello, Caren M.; Hautus, Michael J.
2014-01-01
We tested the dual process and unequal variance signal detection models by jointly modeling recognition and source confidence ratings. The 2 approaches make unique predictions for the slope of the recognition memory zROC function for items with correct versus incorrect source decisions. The standard bivariate Gaussian version of the unequal…
ERIC Educational Resources Information Center
DeVito, Pasquale John
To investigate the effects of Title I reading programs and the relationships of relevant sets of variables to student achievement, this study sought to determine the unique, and the common, contributions of background, mental ability, program, and parental involvement to the variance in reading comprehension and vocabulary scores for Title I…
Shenk, T.M.; White, Gary C.; Burnham, K.P.
1998-01-01
Monte Carlo simulations were conducted to evaluate the robustness of four tests for detecting density dependence, from series of population abundances, to the addition of sampling variance. Population abundances were generated from random walk, stochastic exponential growth, and density-dependent population models. Population abundance estimates were generated with sampling variances distributed as lognormal and constant coefficients of variation (cv) from 0.00 to 1.00. In general, when data were generated under a random walk, Type I error rates increased rapidly for Bulmer's R, Pollard et al.'s, and Dennis and Taper's tests with increasing magnitude of sampling variance for n > 5 yr and all values of process variation. Bulmer's R* test maintained a constant 5% Type I error rate for n > 5 yr and all magnitudes of sampling variance in the population abundance estimates. When abundances were generated from two stochastic exponential growth models (R = 0.05 and R = 0.10), Type I errors again increased with increasing sampling variance; magnitudes of Type I error rates were higher for the slower-growing population. Therefore, sampling error inflated Type I error rates, invalidating all of the tests except Bulmer's R* test. Comparable simulations for abundance estimates generated from a density-dependent growth rate model were conducted to estimate the power of the tests. Type II error rates were influenced by the relationship of initial population size to carrying capacity (K), the length of the time series, and sampling error. Given the inflated Type I error rates for all but Bulmer's R*, power was overestimated for the remaining tests, resulting in density dependence being detected more often than it existed. Population abundances of natural populations are almost exclusively estimated rather than censused, assuring sampling error. Therefore, because these tests have been shown to be either invalid when only sampling variance occurs in the population abundances (Bulmer's R
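The mechanism behind the inflated Type I error rates can be shown in a few lines (a minimal illustration, not the paper's simulations): adding observation noise to a density-independent random walk creates a spurious negative correlation between abundance and subsequent growth rate, which mimics density dependence.

```python
import numpy as np

rng = np.random.default_rng(5)
# Random walk in log abundance: no true density dependence.
log_n = np.cumsum(rng.normal(0.0, 0.05, size=200))

def dd_signal(series):
    """Correlation between log abundance and the next growth rate; strongly
    negative values are what density-dependence tests pick up."""
    growth = np.diff(series)
    return np.corrcoef(series[:-1], growth)[0, 1]

clean_corr = dd_signal(log_n)
# Add sampling error (normal on the log scale, i.e. lognormal abundances).
observed = log_n + rng.normal(0.0, 0.3, size=log_n.size)
noisy_corr = dd_signal(observed)
print(clean_corr, noisy_corr)  # the noisy series looks density dependent
```

The spurious signal arises because each observation error enters the abundance positively and the next growth rate negatively, so tests that ignore sampling variance reject the density-independent null too often.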
Cernicchiaro, N; Renter, D G; Xiang, S; White, B J; Bello, N M
2013-06-01
Variability in ADG of feedlot cattle can affect profits, thus making overall returns more unstable. Hence, knowledge of the factors that contribute to heterogeneity of variances in animal performance can help feedlot managers evaluate risks and minimize profit volatility when making managerial and economic decisions in commercial feedlots. The objectives of the present study were to evaluate heteroskedasticity, defined as heterogeneity of variances, in ADG of cohorts of commercial feedlot cattle, and to identify cattle demographic factors at feedlot arrival as potential sources of variance heterogeneity, accounting for cohort- and feedlot-level information in the data structure. An operational dataset compiled from 24,050 cohorts from 25 U. S. commercial feedlots in 2005 and 2006 was used for this study. Inference was based on a hierarchical Bayesian model implemented with Markov chain Monte Carlo, whereby cohorts were modeled at the residual level and feedlot-year clusters were modeled as random effects. Forward model selection based on deviance information criteria was used to screen potentially important explanatory variables for heteroskedasticity at cohort- and feedlot-year levels. The Bayesian modeling framework was preferred as it naturally accommodates the inherently hierarchical structure of feedlot data whereby cohorts are nested within feedlot-year clusters. Evidence for heterogeneity of variance components of ADG was substantial and primarily concentrated at the cohort level. Feedlot-year specific effects were, by far, the greatest contributors to ADG heteroskedasticity among cohorts, with an estimated ∼12-fold change in dispersion between most and least extreme feedlot-year clusters. In addition, identifiable demographic factors associated with greater heterogeneity of cohort-level variance included smaller cohort sizes, fewer days on feed, and greater arrival BW, as well as feedlot arrival during summer months. These results support that
NASA Technical Reports Server (NTRS)
Harder, R. L.
1974-01-01
The NASTRAN Thermal Analyzer has been extended to perform variance analysis and to plot the thermal boundary elements. The objective of the variance analysis addition is to assess the sensitivity of temperature variances resulting from uncertainties inherent in input parameters for heat conduction analysis. The plotting capability provides the ability to check the geometry (location, size, and orientation) of the boundary elements of a model in relation to the conduction elements. Variance analysis is the study of uncertainties of the computed results as a function of uncertainties of the input data. To study this problem using NASTRAN, a solution is made for the expected values of all inputs, plus another solution for each uncertain variable. A variance analysis module subtracts the results to form derivatives, and can then determine the expected deviations of output quantities.
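The variance analysis the abstract describes — one solution at the expected input values, one extra solution per uncertain input, derivatives by subtraction, then expected output deviations — is ordinary first-order variance propagation. A minimal sketch, with a hypothetical one-parameter wall-conduction model standing in for a NASTRAN run (all names and values are illustrative assumptions):

```python
import numpy as np

def output_variance(model, p_nominal, p_sigma):
    """First-order variance propagation as described above: one solution
    at the expected input values, one extra solution per uncertain input;
    subtracting results gives finite-difference derivatives, and the output
    variance follows as sum_i (dT/dp_i)^2 * sigma_i^2 (independent inputs)."""
    t0 = model(p_nominal)
    var = 0.0
    for i, (p, s) in enumerate(zip(p_nominal, p_sigma)):
        dp = 0.01 * p if p != 0 else 1e-6       # small perturbation
        p_pert = list(p_nominal)
        p_pert[i] = p + dp
        var += ((model(p_pert) - t0) / dp * s) ** 2
    return var

# hypothetical stand-in for a heat-conduction solution: wall temperature
# T = T_inf + q*L/k for conductivity k [W/m/K] and flux q [W/m^2]
def wall_temp(p):
    k, q = p
    L, T_inf = 0.1, 300.0
    return T_inf + q * L / k

var_T = output_variance(wall_temp, [50.0, 1.0e4], [5.0, 1.0e3])
```

For this toy model the analytic result is (qL/k² · σ_k)² + (L/k · σ_q)² = 4 + 4 = 8 K², which the finite-difference estimate approximates closely.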
On the measurement of frequency and of its sample variance with high-resolution counters
Rubiola, Enrico
2005-05-15
A frequency counter measures the input frequency ν averaged over a suitable time τ, versus the reference clock. High resolution is achieved by interpolating the clock signal. Further increased resolution is obtained by averaging multiple, highly overlapped frequency measurements. In the presence of additive white noise or white phase noise, the square uncertainty improves from σ_ν² ∝ 1/τ² to σ_ν² ∝ 1/τ³. Surprisingly, when a file of contiguous data is fed into the formula of the two-sample (Allan) variance σ_y²(τ) = E{(1/2)(y_{k+1} − y_k)²} of the fractional frequency fluctuation y, the result is the modified Allan variance mod σ_y²(τ). But if a sufficient number of contiguous measures are averaged in order to get a longer τ and the data are fed into the same formula, the result is the (nonmodified) Allan variance. Of course, interpretation mistakes are around the corner if the counter's internal process is not well understood. The typical domain of interest is the short-term stability measurement of oscillators.
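The two-sample (Allan) variance formula quoted above is straightforward to evaluate from a file of fractional-frequency data. The sketch below is illustrative (it is not the counter's internal algorithm) and assumes plain, non-overlapped samples; it also shows the averaging of m contiguous measures to reach a longer τ:

```python
import numpy as np

def allan_variance(y, m=1):
    """Two-sample (Allan) variance sigma_y^2(tau) = (1/2) E[(y_{k+1} - y_k)^2]
    of fractional-frequency data y, after averaging m contiguous samples
    to reach the longer measurement time tau = m * tau0."""
    n = (len(y) // m) * m
    ym = y[:n].reshape(-1, m).mean(axis=1)   # contiguous m-sample averages
    return 0.5 * np.mean(np.diff(ym) ** 2)
```

For white frequency noise of unit variance this gives σ_y²(τ₀) ≈ 1 and σ_y²(4τ₀) ≈ 1/4, the familiar 1/τ law; as the abstract warns, feeding highly overlapped counter output into the same formula yields the modified Allan variance instead.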
The Efficiency of Split Panel Designs in an Analysis of Variance Model
Liu, Xin; Wang, Wei-Guo; Liu, Hai-Jun
2016-01-01
We consider the efficiency of split panel designs in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in all samples, to minimize the variances of best linear unbiased estimators of linear combinations of parameters. An orthogonal matrix is constructed to obtain a manageable expression for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of a split panel design, given a budget, and transform the problem into a constrained nonlinear integer program. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer program. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985, in the Netherlands, and efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447
Full pedigree quantitative trait locus analysis in commercial pigs using variance components.
de Koning, D J; Pong-Wong, R; Varona, L; Evans, G J; Giuffra, E; Sanchez, A; Plastow, G; Noguera, J L; Andersson, L; Haley, C S
2003-09-01
In commercial livestock populations, QTL detection methods often use existing half-sib family structures and ignore additional relationships within and between families. We reanalyzed the data from a large QTL confirmation experiment with 10 pig lines and 10 chromosome regions using identity-by-descent (IBD) scores and variance component analyses. The IBD scores were obtained using a Monte Carlo Markov Chain method, as implemented in the LOKI software, and were used to model a putative QTL in a mixed animal model. The analyses revealed 61 QTL at a nominal 5% level (out of 650 tests). Twenty-seven QTL mapped to areas where QTL have been reported, and eight of these exceeded the threshold to claim confirmed linkage (P < 0.01). Forty-two of the putative QTL were detected previously using half-sib analyses, whereas 46 QTL previously identified by half-sib analyses could not be confirmed using the variance component approach. Some of the differences could be traced back to the underlying assumptions between the two methods. Using a deterministic approach to estimate IBD scores on a subset of the data gave very similar results to LOKI. We have demonstrated the feasibility of applying variance component QTL analysis to a large amount of data, equivalent to a genome scan. In many situations, the deterministic IBD approach offers a fast alternative to LOKI.
Deconvolution of non-stationary physical signals: a smooth variance model for insulin secretion rate
NASA Astrophysics Data System (ADS)
Pillonetto, Gianluigi; Bell, Bradley M.
2004-04-01
Deconvolution is the process of estimating a system's input using measurements of a causally related output where the relationship between the input and output is known and linear. Regularization parameters are used to balance smoothness of the estimated input with accuracy of the measurement values. In this paper we present a maximum marginal likelihood method for estimating unknown regularization (and other) parameters used during deconvolution of dynamical systems. Our computational approach uses techniques that were developed for Kalman filters and smoothers. As an example application we consider estimating insulin secretion rate (ISR) following an intravenous glucose stimulus. This procedure is referred to in the medical literature as an intravenous glucose tolerance test (IVGTT). This estimation problem is difficult because ISR is a strongly non-stationary signal; it presents a fast peak in the first minutes of the experiment, followed by a smoother release. We use three regularization parameters to define a smooth model for ISR variance. This model takes into account the rapid variation of ISR during the beginning of an IVGTT and its slower variation as time progresses. Simulations are used to assess marginal likelihood estimation of these regularization parameters as well as of other parameters in the system. Simulations are also used to compare our model for ISR variance with other stochastic ISR models. In addition, we apply maximum marginal likelihood and our ISR variance model to real data that have previous ISR estimation results reported in the literature.
Unique features of space reactors
Buden, D.
1990-01-01
Space reactors are designed to meet a unique set of requirements: they must be sufficiently compact to be launched in a rocket to their operational location, operate for many years without maintenance and servicing, operate in extreme environments, and reject heat by radiation to space. To meet these restrictions, operating temperatures are much greater than in terrestrial power plants, and the reactors tend to have a fast neutron spectrum. Currently, a new generation of space reactor power plants is being developed. The major effort is in the SP-100 program, where the power plant is being designed for seven years of full-power, maintenance-free operation at a reactor outlet temperature of 1350 K. 8 refs., 3 figs., 1 tab.
The Milieu Intérieur study - an integrative approach for study of human immunological variance.
Thomas, Stéphanie; Rouilly, Vincent; Patin, Etienne; Alanio, Cécile; Dubois, Annick; Delval, Cécile; Marquier, Louis-Guillaume; Fauchoux, Nicolas; Sayegrih, Seloua; Vray, Muriel; Duffy, Darragh; Quintana-Murci, Lluis; Albert, Matthew L
2015-04-01
The Milieu Intérieur Consortium has established a 1000-person healthy population-based study (stratified according to sex and age), creating an unparalleled opportunity for assessing the determinants of human immunologic variance. Herein, we define the criteria utilized for participant enrollment, and highlight the key data that were collected for correlative studies. In this report, we analyzed biological correlates of sex, age, smoking-habits, metabolic score and CMV infection. We characterized and identified unique risk factors among healthy donors, as compared to studies that have focused on the general population or disease cohorts. Finally, we highlight sex-bias in the thresholds used for metabolic score determination and recommend a deeper examination of current guidelines. In sum, our clinical design, standardized sample collection strategies, and epidemiological data analyses have established the foundation for defining variability within human immune responses.
Bayesian hierarchical analysis of within-units variances in repeated measures experiments.
Ten Have, T R; Chinchilli, V M
1994-09-30
We develop hierarchical Bayesian models for biomedical data that consist of multiple measurements on each individual under each of several conditions. The focus is on investigating differences in within-subject variation between conditions. We present both population-level and individual-level comparisons. We extend the partial likelihood models of Chinchilli et al. with a unique Bayesian hierarchical framework for variance components and associated degrees of freedom. We use the Gibbs sampler to estimate posterior marginal distributions for the parameters of the Bayesian hierarchical models. The application involves a comparison of two cholesterol analysers each applied repeatedly to a sample of subjects. Both the partial likelihood and Bayesian approaches yield similar results, although confidence limits tend to be wider under the Bayesian models.
Fidelity between Gaussian mixed states with quantum state quadrature variances
NASA Astrophysics Data System (ADS)
Hai-Long, Zhang; Chun, Zhou; Jian-Hong, Shi; Wan-Su, Bao
2016-04-01
In this paper, from the original definition of fidelity in a pure state, we first give a well-defined expansion fidelity between two Gaussian mixed states. It is related to the variances of output and input states in quantum information processing. It is convenient to quantify the quantum teleportation (quantum clone) experiment since the variances of the input (output) state are measurable. Furthermore, we also give a conclusion that the fidelity of a pure input state is smaller than the fidelity of a mixed input state in the same quantum information processing. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002) and the Foundation of Science and Technology on Information Assurance Laboratory (Grant No. KJ-14-001).
Variable variance Preisach model for multilayers with perpendicular magnetic anisotropy
NASA Astrophysics Data System (ADS)
Franco, A. F.; Gonzalez-Fuentes, C.; Morales, R.; Ross, C. A.; Dumas, R.; Åkerman, J.; Garcia, C.
2016-08-01
We present a variable variance Preisach model that fully accounts for the different magnetization processes of a multilayer structure with perpendicular magnetic anisotropy by adjusting the evolution of the interaction variance as the magnetization changes. We successfully compare, in a quantitative manner, the results obtained with this model to experimental hysteresis loops of several [CoFeB/Pd]ₙ multilayers. The effect of the number of repetitions and of the thicknesses of the CoFeB and Pd layers on the magnetization reversal of the multilayer structure is studied, and it is found that many of the observed phenomena can be attributed to an increase of the magnetostatic interactions and a subsequent decrease of the size of the magnetic domains. Increasing the CoFeB thickness leads to the disappearance of the perpendicular anisotropy, and a minimum thickness of the Pd layer is necessary to achieve an out-of-plane magnetization.
Variance reduction methods applied to deep-penetration problems
Cramer, S.N.
1984-01-01
All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course.
Compounding approach for univariate time series with nonstationary variances
NASA Astrophysics Data System (ADS)
Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich
2015-12-01
A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
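The compounding idea — locally Gaussian statistics whose variance is itself a random variable — can be demonstrated numerically. Here the local variances are drawn from a gamma distribution purely for illustration (the paper determines the appropriate parameter distribution empirically):

```python
import numpy as np

rng = np.random.default_rng(42)

# Locally Gaussian signal whose variance changes from window to window;
# the gamma law for the local variances is an illustrative assumption.
n_windows, win = 2000, 250
local_var = rng.gamma(shape=2.0, scale=0.5, size=n_windows)
x = np.concatenate([rng.normal(0.0, np.sqrt(v), win) for v in local_var])

def excess_kurtosis(z):
    """Sample excess kurtosis; 0 for a Gaussian."""
    z = z - z.mean()
    return np.mean(z**4) / np.mean(z**2) ** 2 - 3.0

# Short horizons look Gaussian; the full series follows the compound
# (heavy-tailed) distribution because it mixes different local variances.
k_local = np.mean([excess_kurtosis(x[i * win:(i + 1) * win]) for i in range(50)])
k_global = excess_kurtosis(x)
```

Within a window the excess kurtosis is near zero, while over the whole series it approaches the compounding prediction 3·Var(V)/E[V]² = 1.5 for these gamma parameters, which is exactly the signature of a time-dependent variance averaged over long horizons.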
Analysis of variance in spectroscopic imaging data from human tissues.
Kwak, Jin Tae; Reddy, Rohith; Sinha, Saurabh; Bhargava, Rohit
2012-01-17
The analysis of cell types and disease using Fourier transform infrared (FT-IR) spectroscopic imaging is promising. The approach lacks an appreciation of the limits of performance for the technology, however, which limits both researcher efforts in improving the approach and acceptance by practitioners. One factor limiting performance is the variance in data arising from biological diversity, measurement noise or from other sources. Here we identify the sources of variation by first employing a high-throughput sampling platform of tissue microarrays (TMAs) to record a sufficiently large and diverse data set. Next, a comprehensive set of analysis of variance (ANOVA) models is employed to analyze the data. Estimating the portions of explained variation, we quantify the primary sources of variation, find the most discriminating spectral metrics, and recognize the aspects of the technology to improve. The study provides a framework for the development of protocols for clinical translation and provides guidelines to design statistically valid studies in the spectroscopic analysis of tissue.
Climate variance influence on the non-stationary plankton dynamics.
Molinero, Juan Carlos; Reygondeau, Gabriel; Bonnet, Delphine
2013-08-01
We examined plankton responses to climate variance by using high temporal resolution data from 1988 to 2007 in the Western English Channel. Climate variability modified both the magnitude and length of the seasonal signal of sea surface temperature, as well as the timing and depth of the thermocline. These changes permeated the pelagic system yielding conspicuous modifications in the phenology of autotroph communities and zooplankton. The climate variance envelope, thus far little considered in climate-plankton studies, is closely coupled with the non-stationary dynamics of plankton, and sheds light on impending ecological shifts and plankton structural changes. Our study calls for the integration of the non-stationary relationship between climate and plankton in prognostic models on the productivity of marine ecosystems.
A surface layer variance heat budget for ENSO
NASA Astrophysics Data System (ADS)
Boucharel, Julien; Timmermann, Axel; Santoso, Agus; England, Matthew H.; Jin, Fei-Fei; Balmaseda, Magdalena A.
2015-05-01
Characteristics of the El Niño-Southern Oscillation (ENSO), such as frequency, propagation, spatial extent, and amplitude, strongly depend on the climatological background state of the tropical Pacific. Multidecadal changes in the ocean mean state are hence likely to modulate ENSO properties. To better link background state variations with low-frequency amplitude changes of ENSO, we develop a diagnostic framework that determines locally the contributions of different physical feedback terms on the ocean surface temperature variance. Our analysis shows that multidecadal changes of ENSO variance originate from the delicate balance between the background-state-dependent positive thermocline feedback and the atmospheric damping of sea surface temperature anomalies. The role of higher-order processes and atmospheric and oceanic nonlinearities is also discussed. The diagnostic tool developed here can be easily applied to other tropical ocean areas and climate phenomena.
Methods for variance reduction in Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Bixler, Joel N.; Hokr, Brett H.; Winblad, Aidan; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, due to the probabilistic nature of these simulations, large numbers of photons are often required in order to generate relevant results. Here, we present methods for reduction in the variance of dose distribution in a computational volume. Dose distribution is computed via tracing of a large number of rays, and tracking the absorption and scattering of the rays within discrete voxels that comprise the volume. Variance reduction is shown here using quasi-random sampling, interaction forcing for weakly scattering media, and dose smoothing via bilateral filtering. These methods, along with the corresponding performance enhancements, are detailed here.
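Of the variance-reduction methods listed, quasi-random sampling is the easiest to demonstrate. The sketch below uses simple one-dimensional stratified sampling of a toy integrand as a stand-in for a low-discrepancy sequence, since both reduce estimator variance by spreading samples more evenly than pseudo-random draws; it is not the photon-transport code itself:

```python
import numpy as np

rng = np.random.default_rng(3)

def mc_mean_variance(sampler, f, n=1000, reps=500):
    """Empirical variance of a Monte Carlo estimator of the integral of f
    on [0, 1], over many repeated estimates of n samples each."""
    est = [f(sampler(n)).mean() for _ in range(reps)]
    return np.var(est)

f = lambda u: u**2                        # toy integrand, true integral 1/3
plain = lambda n: rng.random(n)           # ordinary pseudo-random sampling
# stratified sampling: one point per equal-width stratum, a simple
# stand-in for the quasi-random sampling mentioned in the abstract
strat = lambda n: (np.arange(n) + rng.random(n)) / n

v_plain = mc_mean_variance(plain, f)
v_strat = mc_mean_variance(strat, f)      # dramatically smaller
```

For a smooth integrand the plain estimator's variance falls as O(1/n) while the stratified one falls as O(1/n³), so at n = 1000 the reduction is several orders of magnitude.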
Response variance in functional maps: neural darwinism revisited.
Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei
2013-01-01
The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.
NASA Astrophysics Data System (ADS)
Bielewicz, P.; Wandelt, B. D.; Banday, A. J.
2013-02-01
We present a method for the computation of the variance of cosmic microwave background (CMB) temperature maps on azimuthally symmetric patches using a fast convolution approach. As an example of the application of the method, we show results for the search for concentric rings with unusual variance in the 7-year Wilkinson Microwave Anisotropy Probe (WMAP) data. We re-analyse claims concerning the unusual variance profile of rings centred at two locations on the sky that have recently drawn special attention in the context of the conformal cyclic cosmology scenario proposed by Penrose. We extend this analysis to rings with larger radii and centred on other points of the sky. Using the fast convolution technique enables us to perform this search with higher resolution and a wider range of radii than in previous studies. We show that for one of the two special points rings with radii larger than 10° have systematically lower variance in comparison to the concordance Λ cold dark matter model predictions. However, we show that this deviation is caused by the multipoles up to order ℓ = 7. Therefore, the deficit of power for concentric rings with larger radii is yet another manifestation of the well-known anomalous CMB distribution on large angular scales. Furthermore, low-variance rings can be easily found centred on other points in the sky. In addition, we show also the results of a search for extremely high-variance rings. As for the low-variance rings, some anomalies seem to be related to the anomalous distribution of the low-order multipoles of the WMAP CMB maps. As such our results are not consistent with the conformal cyclic cosmology scenario.
Analysis of Variance in the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
Deloach, Richard
2010-01-01
This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.
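A one-way fixed-effects ANOVA of the kind the tutorial covers reduces to partitioning sums of squares; a minimal sketch with hypothetical measurements from three treatment groups:

```python
import numpy as np

def one_way_anova(groups):
    """One-way fixed-effects ANOVA: partition the total sum of squares
    into between-group and within-group parts and form the F statistic."""
    all_data = np.concatenate(groups)
    grand = all_data.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_b, df_w = len(groups) - 1, len(all_data) - len(groups)
    f_stat = (ss_between / df_b) / (ss_within / df_w)
    return f_stat, df_b, df_w

# hypothetical data: three groups of three observations each
groups = [np.array([1.0, 2.0, 3.0]),
          np.array([2.0, 3.0, 4.0]),
          np.array([5.0, 6.0, 7.0])]
f_stat, df_b, df_w = one_way_anova(groups)
```

A large F (here the between-group mean square is 13 times the within-group mean square, on 2 and 6 degrees of freedom) indicates that group means differ by more than the within-group scatter would suggest.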
The Third-Difference Approach to Modified Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1995-01-01
This study gives strategies for estimating the modified Allan variance (mvar) and formulas for computing the equivalent degrees of freedom (edf) of the estimators. A third-difference formulation of mvar leads to a tractable formula for edf in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. First-degree rational-function approximations for edf are derived.
Evaluation of climate modeling factors impacting the variance of streamflow
NASA Astrophysics Data System (ADS)
Al Aamery, N.; Fox, J. F.; Snyder, M.
2016-11-01
The present contribution quantifies the relative importance of climate modeling factors and chosen response variables upon controlling the variance of streamflow forecasted with global climate model (GCM) projections, which has not been attempted in previous literature to our knowledge. We designed an experiment that varied climate modeling factors, including GCM type, project phase, emission scenario, downscaling method, and bias correction. The streamflow response variable was also varied and included forecasted streamflow and difference in forecast and hindcast streamflow predictions. GCM results and the Soil Water Assessment Tool (SWAT) were used to predict streamflow for a wet, temperate watershed in central Kentucky, USA. After calibrating the streamflow model, 112 climate realizations were simulated within the streamflow model and then analyzed on a monthly basis using analysis of variance. Analysis of variance results indicate that the difference in forecast and hindcast streamflow predictions is a function of GCM type, climate model project phase, and downscaling approach. The prediction of forecasted streamflow is a function of GCM type, project phase, downscaling method, emission scenario, and bias correction method. The results indicate the relative importance of the five climate modeling factors when designing streamflow prediction ensembles and quantify the reduction in uncertainty associated with coupling the climate results with the hydrologic model when subtracting the hindcast simulations. Thereafter, analysis of streamflow prediction ensembles with different numbers of realizations shows that use of all available realizations is unneeded for the study system, so long as the ensemble design is well balanced. After accounting for the factors controlling streamflow variance, results show that predicted average monthly change in streamflow tends to follow precipitation changes and result in a net increase in the average annual precipitation and
Stochastic variance models in discrete time with feedforward neural networks.
Andoh, Charles
2009-07-01
The study overcomes the estimation difficulty in stochastic variance models for discrete financial time series with feedforward neural networks. The volatility function is estimated semiparametrically. The model is used to estimate market risk, taking into account not only the time series of interest but extra information on the market. As an application, some stock prices series are studied and compared with the nonlinear ARX-ARCHX model.
Relationship between Allan variances and Kalman Filter parameters
NASA Technical Reports Server (NTRS)
Vandierendonck, A. J.; Mcgraw, J. B.; Brown, R. G.
1984-01-01
A relationship was constructed between the Allan variance parameters (h₂, h₁, h₀, h₋₁, and h₋₂) and a Kalman filter model that would be used to estimate and predict clock phase, frequency, and frequency drift. To start, the meaning of those Allan variance parameters and how they are arrived at for a given frequency source is reviewed. Although a subset of these parameters is arrived at by measuring phase as a function of time rather than as a spectral density, they all represent phase noise spectral density coefficients, though not necessarily those of a rational spectral density. The phase noise spectral density is then transformed into a time-domain covariance model which can then be used to derive the Kalman filter model parameters. Simulation results of that covariance model are presented and compared to clock uncertainties predicted by the Allan variance parameters. A two-state Kalman filter model is then derived and the significance of each state is explained.
Reduced Variance for Material Sources in Implicit Monte Carlo
Urbatsch, Todd J.
2012-06-25
Implicit Monte Carlo (IMC), a time-implicit method due to Fleck and Cummings, is used for simulating supernovae and inertial confinement fusion (ICF) systems where x-rays tightly and nonlinearly interact with hot material. The IMC algorithm represents absorption and emission within a timestep as an effective scatter. Similarly, the IMC time-implicitness splits off a portion of a material source directly into the radiation field. We have found that some of our variance reduction and particle management schemes will allow large variances in the presence of small, but important, material sources, as in the case of ICF hot electron preheat sources. We propose a modification of our implementation of the IMC method in the Jayenne IMC Project. Instead of battling the sampling issues associated with a small source, we bypass the IMC implicitness altogether and simply deterministically update the material state with the material source if the temperature of the spatial cell is below a user-specified cutoff. We describe the modified method and present results on a test problem that show the elimination of variance for small sources.
Sample variance of non-Gaussian sky distributions
NASA Astrophysics Data System (ADS)
Luo, Xiaochun
1995-02-01
Non-Gaussian distributions of cosmic microwave background (CMB) anisotropies have been proposed to reconcile the discrepancies between different experiments at half-degree scales (Coulson et al. 1994). Each experiment probes a different part of the sky; furthermore, sky coverage is very small, hence the sample variance of each experiment can be large, especially when the sky signal is non-Gaussian. We model the degree-scale CMB sky as a χ² field with n degrees of freedom and show that the sample variance is enhanced over that of a Gaussian distribution by a factor of (n + 6)/n. The sample variances for different experiments are calculated, both for Gaussian and non-Gaussian distributions. We also show that if the distribution is highly non-Gaussian (n ≲ 4) at half-degree scales, then the non-Gaussian signature of the CMB could be detected in the FIRS map, though probably not in the Cosmic Background Explorer (COBE) map.
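The (n + 6)/n enhancement factor is consistent with the large-sample result Var(s²) ≈ (2 + γ₂)σ⁴/N, since the excess kurtosis of a χ² variable with n degrees of freedom is γ₂ = 12/n. A quick numerical check of that relation, under these assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def var_of_sample_variance(draws):
    """Monte Carlo estimate of Var(s^2): the variance, across many
    independent samples (rows), of the unbiased sample variance."""
    return draws.var(axis=1, ddof=1).var(ddof=1)

n, N, reps = 4, 200, 20000
# chi^2_n field, centered and scaled to unit variance (Var(chi^2_n) = 2n)
chi2 = (rng.chisquare(n, (reps, N)) - n) / np.sqrt(2 * n)
gauss = rng.normal(0.0, 1.0, (reps, N))

ratio = var_of_sample_variance(chi2) / var_of_sample_variance(gauss)
expected = (n + 6) / n   # enhancement factor from the abstract; 2.5 for n = 4
```

The simulated ratio lands close to the predicted 2.5 for n = 4, matching the abstract's claim that highly non-Gaussian (small-n) skies carry substantially larger sample variance than Gaussian ones.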
Analysis of micrometeorological data using a two sample variance
NASA Astrophysics Data System (ADS)
Werle, Peter; Falge, Eva
2010-05-01
In ecosystem research, infrared gas analyzers are increasingly used to measure fluxes of carbon dioxide, water vapour, methane, nitrous oxide and even stable carbon isotopes. As these complex measurement devices cannot be considered absolutely stable under field conditions, drift characterisation is needed to distinguish between atmospheric data and sensor drift. In this paper the concept of the two sample variance is utilized, in analogy to previous stability investigations, to characterize the stationarity of both spectroscopic measurements of concentration time series and micrometeorological data in the time domain, which is a prerequisite for covariance calculations. As an example, the method is applied to assess the time constant for detrending of time series data and the optimum trace gas flux integration time. The method described here provides information similar to existing characterizations such as the ogive analysis, the normalized error variance of the second order moment and the spectral characteristics of turbulence in the inertial subrange. It is easy to implement and therefore well suited as a routine data quality check for both new practitioners and experts in the field. Werle, P., Time domain characterization of micrometeorological data based on a two sample variance. Agric. Forest Meteorol. (2010), doi:10.1016/j.agrformet.2009.12.007
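The two sample variance is straightforward to implement; a sketch (illustrative noise levels, not the paper's data) showing how sensor drift separates from stationary noise at long averaging times:

```python
import numpy as np

def two_sample_variance(y, m):
    """Two-sample (Allan) variance at averaging factor m:
    0.5 * mean of squared differences of adjacent non-overlapping block means."""
    n_blocks = len(y) // m
    means = y[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(1)
white = rng.normal(size=100_000)                 # stationary sensor noise
drift = white + 1e-2 * np.arange(100_000)        # same noise plus linear drift

# For white noise the two-sample variance falls as 1/m ...
av_white = [two_sample_variance(white, m) for m in (1, 10, 100)]
# ... while drift makes it grow again at long averaging times.
av_drift = [two_sample_variance(drift, m) for m in (1, 10, 100)]
```

The averaging factor at which the drifting series departs from the 1/m decay indicates the time constant for detrending.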
42 CFR 488.64 - Remote facility variances for utilization review requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... applicable. (c) The request for variance shall document the requesting facility's inability to meet the... the previous six months; (4) As relevant to the request, the names of all physicians on the active... variance. (h) The Secretary, in granting a variance, will specify the period for which the variance...
Minimum variance system identification with application to digital adaptive flight control
NASA Technical Reports Server (NTRS)
Kotob, S.; Kaufman, H.
1975-01-01
A new on-line minimum variance filter for the identification of systems with additive and multiplicative noise is described which embodies both accuracy and computational efficiency. The resulting filter is shown to use both the covariance of the parameter vector itself and the covariance of the error in identification. A bias reduction scheme can be used to yield asymptotically unbiased estimates. Experimental results for simulated linearized lateral aircraft motion in a digital closed loop mode are presented, showing the utility of the identification schemes.
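The abstract does not give the filter equations. As a hedged illustration of on-line identification in the same spirit, here is a generic recursive least-squares identifier (not the paper's exact minimum-variance filter; system and noise levels are invented):

```python
import numpy as np

def rls_identify(phi, y, lam=1.0):
    """Recursive least squares: on-line estimate of theta in y_k = phi_k . theta + noise.
    lam is a forgetting factor (1.0 = no forgetting)."""
    p = phi.shape[1]
    theta = np.zeros(p)
    P = 1e6 * np.eye(p)                       # large initial covariance
    for k in range(len(y)):
        x = phi[k]
        K = P @ x / (lam + x @ P @ x)         # gain vector
        theta = theta + K * (y[k] - x @ theta)
        P = (P - np.outer(K, x @ P)) / lam    # covariance update
    return theta

rng = np.random.default_rng(2)
true_theta = np.array([0.8, -0.3])            # hypothetical system parameters
phi = rng.normal(size=(500, 2))
y = phi @ true_theta + 0.01 * rng.normal(size=500)
theta_hat = rls_identify(phi, y)
```

An identifier of this form can run in a digital closed loop, updating parameter estimates one measurement at a time.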
Astronomy Outreach for Large and Unique Audiences
NASA Astrophysics Data System (ADS)
Lubowich, D.; Sparks, R. T.; Pompea, S. M.; Kendall, J. S.; Dugan, C.
2013-04-01
In this session, we discuss different approaches to reaching large audiences. In addition to star parties and astronomy events, the audiences for some of the events include music concerts or festivals, sick children and their families, minority communities, American Indian reservations, and tourist sites such as the National Mall. The goal is to bring science directly to the public—to people who attend astronomy events and to people who do not come to star parties, science museums, or science festivals. These programs allow the entire community to participate in astronomy activities to enhance the public appreciation of science. These programs attract large enthusiastic crowds often with young children participating in these family learning experiences. The public will become more informed, educated, and inspired about astronomy and will also be provided with information that will allow them to continue to learn after this outreach activity. Large and unique audiences often have common problems, and their solutions and the lessons learned will be presented. Interaction with the participants in this session will provide important community feedback used to improve astronomy outreach for large and unique audiences. New ways to expand astronomy outreach to new large audiences will be discussed.
Loberg, A; Dürr, J W; Fikse, W F; Jorjani, H; Crooks, L
2015-10-01
The amount of variance captured in genetic estimations may depend on whether a pedigree-based or genomic relationship matrix is used. The purpose of this study was to investigate the genetic variance as well as the variance of predicted genetic merits (PGM) using pedigree-based or genomic relationship matrices in Brown Swiss cattle. We examined a range of traits in six populations amounting to 173 population-trait combinations. A main aim was to determine how using different relationship matrices affects variance estimation. We calculated ratios between different types of estimates and analysed the impact of trait heritability and population size. The genetic variances estimated by REML using a genomic relationship matrix were always smaller than the variances that were similarly estimated using a pedigree-based relationship matrix. The variances from the genomic relationship matrix became closer to estimates from a pedigree relationship matrix as heritability and population size increased. In contrast, variances of predicted genetic merits obtained using a genomic relationship matrix were mostly larger than variances of genetic merit predicted using a pedigree-based relationship matrix. The ratio of the genomic to pedigree-based PGM variances decreased as heritability and population size rose. The increased variance among predicted genetic merits is important for animal breeding because this is one of the factors influencing genetic progress.
A unique element resembling a processed pseudogene.
Robins, A J; Wang, S W; Smith, T F; Wells, J R
1986-01-05
We describe a unique DNA element with structural features of a processed pseudogene but with important differences. It is located within an 8.4-kilobase pair region of chicken DNA containing five histone genes, but it is not related to these genes. The presence of terminal repeats, an open reading frame (and stop codon), polyadenylation/processing signal, and a poly(A) rich region about 20 bases 3' to this, together with a lack of 5' promoter motifs all suggest a processed pseudogene. However, no parent gene can be detected in the genome by Southern blotting experiments and, in addition, codon boundary values and mid-base correlations are not consistent with a protein coding region of a eukaryotic gene. The element was detected in DNA from different chickens and in peafowl, but not in quail, pheasant, or turkey.
Symbols are not uniquely human.
Ribeiro, Sidarta; Loula, Angelo; de Araújo, Ivan; Gudwin, Ricardo; Queiroz, João
2007-01-01
Modern semiotics is a branch of logics that formally defines symbol-based communication. In recent years, the semiotic classification of signs has been invoked to support the notion that symbols are uniquely human. Here we show that alarm-calls such as those used by African vervet monkeys (Cercopithecus aethiops) logically satisfy the semiotic definition of symbol. We also show that the acquisition of vocal symbols in vervet monkeys can be successfully simulated by a computer program based on minimal semiotic and neurobiological constraints. The simulations indicate that learning depends on the tutor-predator ratio, and that apprentice-generated auditory mistakes in vocal symbol interpretation have little effect on the learning rates of apprentices (up to 80% of mistakes are tolerated). In contrast, just 10% of apprentice-generated visual mistakes in predator identification will prevent any vocal symbol from being correctly associated with a predator call in a stable manner. Tutor unreliability was also deleterious to vocal symbol learning: a mere 5% of "lying" tutors were able to completely disrupt symbol learning, invariably leading to the acquisition of incorrect associations by apprentices. Our investigation corroborates the existence of vocal symbols in a non-human species, and indicates that symbolic competence emerges spontaneously from classical associative learning mechanisms when the conditioned stimuli are self-generated, arbitrary and socially efficacious. We propose that more exclusive properties of human language, such as syntax, may derive from the evolution of higher-order domains for neural association, more removed from both the sensory input and the motor output, able to support the gradual complexification of grammatical categories into syntax.
Cavalié, Olivier; Vernotte, François
2016-04-01
The Allan variance was introduced 50 years ago for analyzing the stability of frequency standards. In addition to its metrological interest, it may also be considered as an estimator of the large trends of the power spectral density (PSD) of frequency deviation. For instance, the Allan variance is able to discriminate different types of noise characterized by different power laws in the PSD. The Allan variance has also been used in fields other than time and frequency metrology: for more than 20 years, it has been used in accelerometry, geophysics, geodesy, astrophysics, and even finance. However, it seems that up to now, it has been exclusively applied to time series analysis. We propose here to use the Allan variance on spatial data. Interferometric synthetic aperture radar (InSAR) is used in geophysics to image ground displacements in space [over the synthetic aperture radar (SAR) image spatial coverage] and in time thanks to the regular SAR image acquisitions by dedicated satellites. The main limitation of the technique is the atmospheric disturbances that affect the radar signal while traveling from the sensor to the ground and back. In this paper, we propose to use the Allan variance for analyzing spatial data from InSAR measurements. The Allan variance was computed in XY mode as well as in radial mode for detecting different types of behavior at different space-scales, in the same way as the different types of noise versus the integration time in the classical time and frequency application. We found that the radial Allan variance is the more appropriate way to obtain an estimator insensitive to the spatial axes, and we applied it to SAR data acquired over eastern Turkey for the period 2003-2011. The spatial Allan variance allowed us to characterize well the noise features classically found in InSAR, such as phase decorrelation producing white noise, or atmospheric delays behaving like a random walk signal. We finally applied the spatial Allan variance to an InSAR time series.
White matter morphometric changes uniquely predict children's reading acquisition.
Myers, Chelsea A; Vandermosten, Maaike; Farris, Emily A; Hancock, Roeland; Gimenez, Paul; Black, Jessica M; Casto, Brandi; Drahos, Miroslav; Tumber, Mandeep; Hendren, Robert L; Hulme, Charles; Hoeft, Fumiko
2014-10-01
This study examined whether variations in brain development between kindergarten and Grade 3 predicted individual differences in reading ability at Grade 3. Structural MRI measurements indicated that increases in the volume of two left temporo-parietal white matter clusters are unique predictors of reading outcomes above and beyond family history, socioeconomic status, and cognitive and preliteracy measures at baseline. Using diffusion MRI, we identified the left arcuate fasciculus and superior corona radiata as key fibers within the two clusters. Bias-free regression analyses using regions of interest from prior literature revealed that volume changes in temporo-parietal white matter, together with preliteracy measures, predicted 56% of the variance in reading outcomes. Our findings demonstrate the important contribution of developmental differences in areas of left dorsal white matter, often implicated in phonological processing, as a sensitive early biomarker for later reading abilities, and by extension, reading difficulties.
Unique contributions of metacognition and cognition to depressive symptoms.
Yilmaz, Adviye Esin; Gençöz, Tülin; Wells, Adrian
2015-01-01
This study attempts to examine the unique contributions of "cognitions" or "metacognitions" to depressive symptoms while controlling for their intercorrelations and comorbid anxiety. Two-hundred-and-fifty-one university students participated in the study. Two complementary hierarchical multiple regression analyses were performed, in which symptoms of depression were regressed on the dysfunctional attitudes (DAS-24 subscales) and metacognition scales (Negative Beliefs about Rumination Scale [NBRS] and Positive Beliefs about Rumination Scale [PBRS]). Results showed that both NBRS and PBRS individually explained a significant amount of variance in depressive symptoms above and beyond dysfunctional schemata while controlling for anxiety. Although dysfunctional attitudes as a set significantly predicted depressive symptoms after anxiety and metacognitions were controlled for, they were weaker than metacognitive variables and none of the DAS-24 subscales contributed individually. Metacognitive beliefs about ruminations appeared to contribute more to depressive symptoms than dysfunctional beliefs in the "cognitive" domain.
Ultrasonic beam fluctuation and flaw signal variance in inhomogeneous media
NASA Astrophysics Data System (ADS)
Ahmed, S.; Roberts, R.; Margetan, F.
2000-05-01
This paper examines the effect of forward scattering on ultrasonic beam propagation and flaw signal amplitude in inhomogeneous material microstructures. A beam propagating through a weakly-scattering, randomly inhomogeneous medium will display random fluctuations in amplitude and phase, attributable to forward scattering. Correspondingly, the signal received from a given flaw at a given position in the beam volume will fluctuate as the beam and flaw are simultaneously scanned throughout the volume of an inhomogeneous host medium. These effects have been prominently observed in the inspection of titanium. For example, maps of beam amplitude profiles after transmission through titanium reveal severe distortion of beam amplitude and phase. Similarly, signals from "identical" flat bottom holes (FBH) at equal depths but different lateral positions in titanium display a random variation in amplitude. Interestingly, it has been noted that this FBH signal variance varies inversely with the beam diameter, that is, signal variance normalized to the mean signal amplitude is a minimum when the flaw is in the focal zone of a focused beam. As this observation has great significance to the inspection of titanium, a model prediction of this phenomenon is being sought. In the work reported here, beam propagation is formulated as a volumetric integral equation employing the Green function for the homogeneous spatial mean of the medium. The integral equation is solved using iterative methods. Preliminary work considering scalar two-dimensional propagation in inhomogeneous media has predicted a flaw signal variance that displays an inverse relation to beam diameter, thus reproducing the qualitative behavior seen in experimental data in titanium. Current work is extending the preliminary two-dimensional scalar result to three-dimensional elasticity, representing propagation in an actual titanium microstructure. Progress on this effort will be reported.
Minimum Variance Approaches to Ultrasound Pixel-Based Beamforming.
Nguyen, Nghia Q; Prager, Richard W
2017-02-01
We analyze the principles underlying minimum variance distortionless response (MVDR) beamforming in order to integrate it into a pixel-based algorithm. There is a challenge posed by the low echo signal-to-noise ratio (eSNR) when calculating beamformer contributions at pixels far away from the beam centreline. Together with the well-known scarcity of samples for covariance matrix estimation, this reduces the beamformer performance and degrades the image quality. To address this challenge, we implement the MVDR algorithm in two different ways. First, we develop the conventional minimum variance pixel-based (MVPB) beamformer that performs the MVDR after the pixel-based superposition step. This involves a combination of methods in the literature, extended over multiple transmits to increase the eSNR. Then we propose the coherent MVPB beamformer, where the MVDR is applied to data within individual transmits. Based on pressure field analysis, we develop new algorithms to improve the data alignment and matrix estimation, and hence overcome the low-eSNR issue. The methods are demonstrated on data acquired with an ultrasound open platform. The results show the coherent MVPB beamformer substantially outperforms the conventional MVPB in a series of experiments, including phantom and in vivo studies. Compared to the unified pixel-based beamformer, the newest delay-and-sum algorithm in [1], the coherent MVPB performs well on regions that conform to the diffuse scattering assumptions on which the minimum variance principles are based. It produces poorer results for parts of the image that are dominated by specular reflections.
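A sketch of the core MVDR weight computation that such beamformers build on; the diagonal-loading level and the toy steering vector and covariance below are assumptions for illustration, not values from the paper:

```python
import numpy as np

def mvdr_weights(R, a, loading=1e-3):
    """MVDR weights w = R^-1 a / (a^H R^-1 a).
    Diagonal loading regularizes a covariance matrix estimated from few
    samples (the low-eSNR / sample-scarcity problem the abstract mentions)."""
    n = len(a)
    Rl = R + loading * (np.trace(R).real / n) * np.eye(n)
    Ri_a = np.linalg.solve(Rl, a)
    return Ri_a / (a.conj() @ Ri_a)

# Toy example: 8 channels, hypothetical steering vector, simple covariance
n = 8
a = np.exp(1j * np.pi * 0.3 * np.arange(n))
R = np.eye(n) + 0.5 * np.outer(a, a.conj())
w = mvdr_weights(R, a)
gain = w.conj() @ a        # distortionless constraint: unit gain toward a
```

The defining property is that the look-direction gain is exactly one while power from other directions is minimized.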
Regression between earthquake magnitudes having errors with known variances
NASA Astrophysics Data System (ADS)
Pujol, Jose
2016-07-01
Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = a x + b. This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for the estimates of x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a that have been discussed in the literature but not proved, or proved only for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65 % of them. For the remaining 35 %, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
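The closed-form homoscedastic solution referred to above is the classical errors-in-variables (Deming) fit. A sketch, with `lam` the assumed ratio of y-error to x-error variances (magnitude values below are invented):

```python
import numpy as np

def deming_fit(X, Y, lam=1.0):
    """Errors-in-variables fit of y = a x + b when both observed magnitudes
    carry error; lam = var(y errors) / var(x errors), assumed known."""
    x0, y0 = X.mean(), Y.mean()
    sxx = np.mean((X - x0) ** 2)
    syy = np.mean((Y - y0) ** 2)
    sxy = np.mean((X - x0) * (Y - y0))
    d = syy - lam * sxx
    a = (d + np.sqrt(d * d + 4.0 * lam * sxy * sxy)) / (2.0 * sxy)
    return a, y0 - a * x0

# Sanity check on noise-free magnitudes lying exactly on y = 1.1 x - 0.5
X = np.linspace(4.0, 7.5, 30)
Y = 1.1 * X - 0.5
a, b = deming_fit(X, Y, lam=1.0)
```

Unlike ordinary least squares, this slope is not attenuated by the error in X, which is why it is preferred for magnitude-magnitude regressions.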
Cosmic variance of the galaxy cluster weak lensing signal
Gruen, D.; Seitz, S.; Becker, M. R.; Friedrich, O.; Mana, A.
2015-04-13
Intrinsic variations of the projected density profiles of clusters of galaxies at fixed mass are a source of uncertainty for cluster weak lensing. We present a semi-analytical model to account for this effect, based on a combination of variations in halo concentration, ellipticity and orientation, and the presence of correlated haloes. We calibrate the parameters of our model at the 10 per cent level to match the empirical cosmic variance of cluster profiles at M_{200m} ≈ 10^{14}…10^{15}h^{–1}M_{⊙}, z = 0.25…0.5 in a cosmological simulation. We show that weak lensing measurements of clusters significantly underestimate mass uncertainties if intrinsic profile variations are ignored, and that our model can be used to provide correct mass likelihoods. Effects on the achievable accuracy of weak lensing cluster mass measurements are particularly strong for the most massive clusters and deep observations (with ≈20 per cent uncertainty from cosmic variance alone at M_{200m} ≈ 10^{15}h^{–1}M_{⊙} and z = 0.25), but significant also under typical ground-based conditions. We show that neglecting intrinsic profile variations leads to biases in the mass-observable relation constrained with weak lensing, both for intrinsic scatter and overall scale (the latter at the 15 per cent level). Furthermore, these biases are in excess of the statistical errors of upcoming surveys and can be avoided if the cosmic variance of cluster profiles is accounted for.
Fringe biasing: A variance reduction technique for optically thick meshes
Smedley-Stevenson, R. P.
2013-07-01
Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)
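A toy one-dimensional illustration of the stratified-sampling idea behind fringe biasing (assumptions: a purely absorbing unit slab, illustrative opacity and fringe width, not the paper's configuration). The strata weights keep the estimator unbiased however the particles are allocated between interior and fringe:

```python
import math, random

def escape_estimate(n_interior, n_fringe, sigma=20.0, fringe=0.1, seed=0):
    """Estimate the right-boundary escape probability of uniform emission in a
    purely absorbing slab [0, 1], i.e. E[exp(-sigma*(1-x))], by stratifying
    emission into interior [0, 1-fringe] and fringe [1-fringe, 1]."""
    rng = random.Random(seed)
    s_int = sum(math.exp(-sigma * (1.0 - rng.uniform(0.0, 1.0 - fringe)))
                for _ in range(n_interior)) / n_interior
    s_fr = sum(math.exp(-sigma * (1.0 - rng.uniform(1.0 - fringe, 1.0)))
               for _ in range(n_fringe)) / n_fringe
    return (1.0 - fringe) * s_int + fringe * s_fr   # volume-weighted strata

exact = (1.0 - math.exp(-20.0)) / 20.0        # analytic escape probability
uniform_alloc = escape_estimate(9000, 1000)   # allocation proportional to volume
fringe_heavy = escape_estimate(1000, 9000)    # most particles in the fringe
```

Concentrating particles in the fringe reduces the variance of the estimate because nearly all of the escaping energy originates there, while the interior contributes almost nothing per particle.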
Monte Carlo calculation of specific absorbed fractions: variance reduction techniques
NASA Astrophysics Data System (ADS)
Díaz-Londoño, G.; García-Pareja, S.; Salvat, F.; Lallena, A. M.
2015-04-01
The purpose of the present work is to calculate specific absorbed fractions using variance reduction techniques and assess the effectiveness of these techniques in improving the efficiency (i.e. reducing the statistical uncertainties) of simulation results in cases where the distance between the source and the target organs is large and/or the target organ is small. The variance reduction techniques of interaction forcing and an ant colony algorithm, which drives the application of splitting and Russian roulette, were applied in Monte Carlo calculations performed with the code penelope for photons with energies from 30 keV to 2 MeV. In the simulations we used a mathematical phantom derived from the well-known MIRD-type adult phantom. The thyroid gland was assumed to be the source organ and urinary bladder, testicles, uterus and ovaries were considered as target organs. Simulations were performed, for each target organ and for photons with different energies, using these variance reduction techniques, all run on the same processor and during a CPU time of 1.5 · 10^5 s. For energies above 100 keV both interaction forcing and the ant colony method allowed reaching relative uncertainties of the average absorbed dose in the target organs below 4% in all studied cases. When these two techniques were used together, the uncertainty was further reduced, by a factor of 0.5 or less. For photons with energies below 100 keV, an adapted initialization of the ant colony algorithm was required. By using interaction forcing and the ant colony algorithm, realistic values of the specific absorbed fractions can be obtained with relative uncertainties small enough to permit discriminating among simulations performed with different Monte Carlo codes and phantoms. The methodology described in the present work can be employed to calculate specific absorbed fractions for arbitrary arrangements, i.e. energy spectrum of primary radiation, phantom model and source and target organs.
Multi-observable Uncertainty Relations in Product Form of Variances
Qin, Hui-Hui; Fei, Shao-Ming; Li-Jost, Xianqing
2016-01-01
We investigate the product form uncertainty relations of variances for n (n ≥ 3) quantum observables. In particular, tight uncertainty relations satisfied by three observables have been derived, which are shown to be better than the ones derived from the strengthened Heisenberg and the generalized Schrödinger uncertainty relations, and than some existing uncertainty relations for three spin-half operators. An uncertainty relation for an arbitrary number of observables is also derived. As an example, the uncertainty relation satisfied by the eight Gell-Mann matrices is presented. PMID:27498851
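For reference, the two-observable Schrödinger uncertainty relation that such product-form relations generalize (a standard result, not the paper's n-observable bound):

```latex
(\Delta A)^2 (\Delta B)^2 \;\ge\;
\left| \tfrac{1}{2}\langle \{A,B\} \rangle - \langle A\rangle\langle B\rangle \right|^2
+ \left| \tfrac{1}{2i}\langle [A,B] \rangle \right|^2
```

Dropping the first (anticommutator) term recovers the Heisenberg-Robertson bound; the multi-observable relations of the paper tighten products of such pairwise bounds.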
Improved Robustness through Population Variance in Ant Colony Optimization
NASA Astrophysics Data System (ADS)
Matthews, David C.; Sutton, Andrew M.; Hains, Doug; Whitley, L. Darrell
Ant Colony Optimization algorithms are population-based Stochastic Local Search algorithms that mimic the behavior of ants, simulating pheromone trails to search for solutions to combinatorial optimization problems. This paper introduces Population Variance, a novel approach to ACO algorithms that allows parameters to vary across the population over time, leading to solution construction differences that are not strictly stochastic. The increased exploration appears to help the search escape from local optima, significantly improving the robustness of the algorithm with respect to suboptimal parameter settings.
Simulation Study Using a New Type of Sample Variance
NASA Technical Reports Server (NTRS)
Howe, D. A.; Lainson, K. J.
1996-01-01
We evaluate with simulated data a new type of sample variance for the characterization of frequency stability. The new statistic (referred to as TOTALVAR and its square root TOTALDEV) is a better predictor of long-term frequency variations than the present sample Allan deviation. The statistical model uses the assumption that a time series of phase or frequency differences is wrapped (periodic) with overall frequency difference removed. We find that the variability at long averaging times is reduced considerably for the five models of power-law noise commonly encountered with frequency standards and oscillators.
Manufacturing unique glasses in space
NASA Technical Reports Server (NTRS)
Happe, R. P.
1976-01-01
An air suspension melting technique is described for making glasses from substances which to date have been observed only in the crystalline condition. A laminar flow vertical wind tunnel was constructed for suspending oxide melts that were melted using the energy from a carbon dioxide laser beam. By this method it is possible to melt many high-melting-point materials without interaction between the melt and crucible material. In addition, space melting permits cooling to suppress crystal growth. If a sufficient amount of undercooling is accompanied by a sufficient increase in viscosity, crystallization will be avoided entirely and glass will result.
Kushwaha, B P; Mandal, A; Arora, A L; Kumar, R; Kumar, S; Notter, D R
2009-08-01
Estimates of (co)variance components were obtained for weights at birth, weaning and 6, 9 and 12 months of age in Chokla sheep maintained at the Central Sheep and Wool Research Institute, Avikanagar, Rajasthan, India, over a period of 21 years (1980-2000). Records of 2030 lambs descended from 150 rams and 616 ewes were used in the study. Analyses were carried out by restricted maximum likelihood (REML) fitting an animal model and ignoring or including maternal genetic or permanent environmental effects. Six different animal models were fitted for all traits. The best model was chosen after testing the improvement of the log-likelihood values. Direct heritability estimates were inflated substantially for all traits when maternal effects were ignored. Heritability estimates for weight at birth, weaning and 6, 9 and 12 months of age were 0.20, 0.18, 0.16, 0.22 and 0.23, respectively in the best models. Additive maternal and maternal permanent environmental effects were both significant at birth, accounting for 9% and 12% of phenotypic variance, respectively, but the source of maternal effects (additive versus permanent environmental) at later ages could not be clearly identified. The estimated repeatabilities across years of ewe effects on lamb body weights were 0.26, 0.14, 0.12, 0.13, and 0.15 at birth, weaning, 6, 9 and 12 months of age, respectively. These results indicate that modest rates of genetic progress are possible for all weights.
Hydraulic geometry of river cross sections; theory of minimum variance
Williams, Garnett P.
1978-01-01
This study deals with the rates at which mean velocity, mean depth, and water-surface width increase with water discharge at a cross section on an alluvial stream. Such relations often follow power laws, the exponents in which are called hydraulic exponents. The Langbein (1964) minimum-variance theory is examined in regard to its validity and its ability to predict observed hydraulic exponents. The variables used with the theory were velocity, depth, width, bed shear stress, friction factor, slope (energy gradient), and stream power. Slope is often constant, in which case only velocity, depth, width, shear and friction factor need be considered. The theory was tested against a wide range of field data from various geographic areas of the United States. The original theory was intended to produce only the average hydraulic exponents for a group of cross sections in a similar type of geologic or hydraulic environment. The theory does predict these average exponents with a reasonable degree of accuracy. An attempt to forecast the exponents at any selected cross section was moderately successful. Empirical equations are more accurate than the minimum variance, Gauckler-Manning, or Chezy methods. Predictions of the exponent of width are most reliable, the exponent of depth fair, and the exponent of mean velocity poor. (Woodard-USGS)
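The simplest instance of the minimum-variance hypothesis can be sketched numerically. Assuming only width, depth and velocity adjust (the paper also considers shear, friction factor, slope and power), with w ∝ Q^b, d ∝ Q^f, v ∝ Q^m, continuity Q = w·d·v forces b + f + m = 1, and minimizing the sum of squared exponents yields the symmetric solution:

```python
import numpy as np

# Grid search for the exponent triple (b, f, m) with b + f + m = 1 that
# minimizes b^2 + f^2 + m^2 (Langbein's minimum-variance hypothesis in its
# simplest three-variable form; a sketch, not the paper's full variable set).
grid = np.linspace(0.0, 1.0, 201)
best = min(((b, f, 1.0 - b - f) for b in grid for f in grid if b + f <= 1.0),
           key=lambda e: sum(x * x for x in e))
```

The minimum sits at b = f = m = 1/3; observed departures from this symmetric split reflect the extra constrained variables (shear, friction, slope) that the full theory includes.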
Worldwide variance in the potential utilization of Gamma Knife radiosurgery.
Hamilton, Travis; Dade Lunsford, L
2016-12-01
OBJECTIVE The role of Gamma Knife radiosurgery (GKRS) has expanded worldwide during the past 3 decades. The authors sought to evaluate whether experienced users vary in their estimates of its potential use. METHODS Sixty-six current Gamma Knife users from 24 countries responded to an electronic survey. They estimated the potential role of GKRS for benign and malignant tumors, vascular malformations, and functional disorders. These estimates were compared with published disease epidemiological statistics and the 2014 use reports provided by the Leksell Gamma Knife Society (16,750 cases). RESULTS Respondents reported no significant variation in estimated use for many conditions for which GKRS is performed: meningiomas, vestibular schwannomas, and arteriovenous malformations. Significant variance in the estimated use of GKRS was noted for pituitary tumors, craniopharyngiomas, and cavernous malformations. For many current indications, the authors found significant variance among GKRS users based in the Americas, Europe, and Asia. Experts estimated that GKRS was used in only 8.5% of the 196,000 eligible cases in 2014. CONCLUSIONS Although there was a general worldwide consensus regarding many major indications for GKRS, significant variability was noted for several more controversial roles. This expert opinion survey also suggested that GKRS is significantly underutilized for many current diagnoses, especially in the Americas. Future studies should be conducted to investigate health care barriers to GKRS for many patients.
Concentration variance decay during magma mixing: a volcanic chronometer
NASA Astrophysics Data System (ADS)
Perugini, Diego; de Campos, Cristina P.; Petrelli, Maurizio; Dingwell, Donald B.
2015-09-01
The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process, and its decay with time (concentration variance decay, CVD) is an inevitable consequence of the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time series of high-temperature magma mixing experiments. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing - a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in the future in order to constrain typical “mixing to eruption” time lapses, such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest.
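The chronometer described above amounts to fitting an exponential decay of concentration variance and inverting it for elapsed time. A hedged sketch of that arithmetic (the rate constant, variances, and function names are all illustrative, not the paper's calibration):

```python
import numpy as np

def cvd_rate(t, var):
    """Decay rate k from a time series of concentration variances,
    assuming var(t) = var0 * exp(-k * t) (log-linear least squares)."""
    slope, _ = np.polyfit(t, np.log(var), 1)
    return -slope

def mixing_time(var_obs, var0, k):
    """Time elapsed since mixing began, inverting the exponential decay."""
    return np.log(var0 / var_obs) / k

# Synthetic 'experimental' calibration: var0 = 1.0, k = 0.12 per minute
t = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
var = 1.0 * np.exp(-0.12 * t)
k = cvd_rate(t, var)

# A natural sample preserving 3% of its initial variance:
tau = mixing_time(0.03, 1.0, k)  # mixing-to-eruption time, in minutes
```

With these illustrative numbers the inferred time lapse falls in the tens of minutes, the same order as the Campi Flegrei estimates quoted above.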
VARIANCE ESTIMATION IN DOMAIN DECOMPOSED MONTE CARLO EIGENVALUE CALCULATIONS
Mervin, Brenden T; Maldonado, G. Ivan; Mosher, Scott W; Evans, Thomas M; Wagner, John C
2012-01-01
The number of tallies performed in a given Monte Carlo calculation is limited in most modern Monte Carlo codes by the amount of memory that can be allocated on a single processor. By using domain decomposition, the calculation is now limited by the total amount of memory available on all processors, allowing for significantly more tallies to be performed. However, decomposing the problem geometry introduces significant issues with the way tally statistics are conventionally calculated. In order to deal with the issue of calculating tally variances in domain decomposed environments for the Shift hybrid Monte Carlo code, this paper presents an alternative approach for reactor scenarios in which an assumption is made that once a particle leaves a domain, it does not reenter the domain. Particles that reenter the domain are instead treated as separate independent histories. This assumption introduces a bias that inevitably leads to under-prediction of the calculated variances for tallies within a few mean free paths of the domain boundaries. However, through the use of different decomposition strategies, primarily overlapping domains, the negative effects of such an assumption can be significantly reduced to within reasonable levels.
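The bias discussed above can be illustrated with the standard per-history tally estimator: splitting each history into positively correlated pieces and treating the pieces as independent inflates the history count and shrinks the estimated variance of the mean. A sketch using the generic estimator (not the Shift implementation):

```python
import numpy as np

def tally_stats(scores):
    """Mean and relative error of a Monte Carlo tally from per-history
    scores, using the sample variance of the mean: s^2 / N."""
    x = np.asarray(scores, dtype=float)
    n = x.size
    mean = x.mean()
    var_mean = (np.mean(x**2) - mean**2) * n / (n - 1) / n
    return mean, np.sqrt(var_mean) / mean

rng = np.random.default_rng(0)
whole = rng.exponential(1.0, 10000)        # per-history scores

# Treating a re-entrant particle as a new, independent history is like
# splitting each history into two perfectly correlated halves:
pieces = np.repeat(whole / 2.0, 2)

_, re_whole = tally_stats(whole)
_, re_split = tally_stats(pieces)
# re_split < re_whole: the split tally looks (spuriously) more converged,
# i.e. the variance is under-predicted, as described in the abstract.
```

Here the split estimator reports a relative error roughly 1/sqrt(2) of the correct one, even though no new information was added.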
Concentration variance decay during magma mixing: a volcanic chronometer
Perugini, Diego; De Campos, Cristina P.; Petrelli, Maurizio; Dingwell, Donald B.
2015-01-01
The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing – a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical “mixing to eruption” time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest. PMID:26387555
Variance of the Quantum Dwell Time for a Nonrelativistic Particle
NASA Technical Reports Server (NTRS)
Hahne, Gerhard
2012-01-01
Munoz, Seidel, and Muga [Phys. Rev. A 79, 012108 (2009)], following an earlier proposal by Pollak and Miller [Phys. Rev. Lett. 53, 115 (1984)] in the context of a theory of a collinear chemical reaction, showed that suitable moments of a two-flux correlation function could be manipulated to yield expressions for the mean quantum dwell time and mean square quantum dwell time for a structureless particle scattering from a time-independent potential energy field between two parallel lines in a two-dimensional spacetime. The present work proposes a generalization to a charged, nonrelativistic particle scattering from a transient, spatially confined electromagnetic vector potential in four-dimensional spacetime. The geometry of the spacetime domain is that of the slab between a pair of parallel planes, in particular those defined by constant values of the third (z) spatial coordinate. The mean Nth power, N = 1, 2, 3, . . ., of the quantum dwell time in the slab is given by an expression involving an N-flux-correlation function. All these means are shown to be nonnegative. The N = 1 formula reduces to an S-matrix result published previously [G. E. Hahne, J. Phys. A 36, 7149 (2003)]; an explicit formula for N = 2, and of the variance of the dwell time in terms of the S-matrix, is worked out. A formula representing an incommensurability principle between variances of the output-minus-input flux of a pair of dynamical variables (such as the particle s time flux and others) is derived.
Hidden temporal order unveiled in stock market volatility variance
NASA Astrophysics Data System (ADS)
Shapira, Y.; Kenett, D. Y.; Raviv, Ohad; Ben-Jacob, E.
2011-06-01
When analyzed by standard statistical methods, the time series of the daily returns of financial indices appear to behave as Markov random series with no apparent temporal order or memory. This empirical result seems counterintuitive, since investors are influenced by both short- and long-term past market behavior. Consequently, much effort has been devoted to unveiling hidden temporal order in market dynamics. Here we show that temporal order is hidden in the series of the variance of the stocks' volatility. First we show that the correlation between the variances and means of segments of these time series is too large to be the output of a random series, unless it has some temporal order in it. Next we show that the temporal order does not appear in the series of the daily returns, but rather in the variation of the corresponding volatility series. More specifically, we found that the behavior of the shuffled time series is equivalent to that of a random time series, while the original time series shows large deviations from the expected random behavior, which is the result of temporal structure. We found the same generic behavior in 10 different stock markets from 7 different countries. We also present an analysis of specially constructed sequences in order to better understand the origin of the observed temporal order in the market sequences. Each sequence was constructed from segments with an equal number of elements taken from algebraic distributions with three different slopes.
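The shuffling comparison described above can be sketched as follows: compute the variances of non-overlapping segments of the return series, take their lag-1 autocorrelation as an order statistic, and compare the original series against a shuffled surrogate. All choices below (segment length, the toy volatility process) are illustrative, not the paper's construction:

```python
import numpy as np

def segment_variances(returns, seg_len):
    """Variances of consecutive, non-overlapping segments of a series."""
    n = (len(returns) // seg_len) * seg_len
    return np.asarray(returns[:n]).reshape(-1, seg_len).var(axis=1)

def lag1_autocorr(x):
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

rng = np.random.default_rng(42)
# Toy volatility clustering: slowly varying sigma modulating iid noise
sigma = 1.0 + 0.8 * np.sin(np.linspace(0.0, 20 * np.pi, 20000))
returns = rng.normal(0.0, 1.0, 20000) * sigma

orig = lag1_autocorr(segment_variances(returns, 50))
shuffled = returns.copy()
rng.shuffle(shuffled)
surr = lag1_autocorr(segment_variances(shuffled, 50))
# orig is strongly positive; surr is near zero: the order lives in the
# volatility structure, not in the return distribution itself.
```

Shuffling preserves the marginal distribution of returns but destroys the temporal ordering, so any residual structure in the original statistic must be temporal.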
Stochastic Mixing Model with Power Law Decay of Variance
NASA Technical Reports Server (NTRS)
Fedotov, S.; Ihme, M.; Pitsch, H.
2003-01-01
Here we present a simple stochastic mixing model based on the law of large numbers (LLN). The reason the LLN is involved in our formulation of the mixing problem is that the random conserved scalar c = c(t,x(t)) appears to behave as a sample mean: it converges to the mean value mu, while its variance sigma_c^2(t) decays approximately as t^(-1). Since the variance of the scalar typically decays faster than that of a sample mean (the decay exponent is greater than unity), we introduce some non-linear modifications into the corresponding pdf-equation. The main idea is to develop a robust model which is independent of restrictive assumptions about the shape of the pdf. The remainder of this paper is organized as follows. In Section 2 we derive an integral equation from a stochastic difference equation describing the evolution of the pdf of a passive scalar in time. The stochastic difference equation introduces an exchange rate gamma_n, which we model in a first step as a deterministic function. In a second step, we generalize gamma_n as a stochastic variable taking fluctuations in the inhomogeneous environment into account. In Section 3 we solve the non-linear integral equation numerically and analyze the influence of the different parameters on the decay rate. The paper finishes with a conclusion.
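The LLN benchmark above — the variance of a sample mean of n iid draws decays as 1/n, the t^(-1) rate against which the scalar's faster decay is measured — can be checked numerically. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def variance_of_mean(n, trials=20000):
    """Empirical variance of the mean of n iid U(0,1) samples."""
    samples = rng.random((trials, n))
    return samples.mean(axis=1).var()

v10 = variance_of_mean(10)
v40 = variance_of_mean(40)
# Theory: var = sigma^2 / n with sigma^2 = 1/12 for U(0,1),
# so v10 ~ 1/120 and v10 / v40 ~ 4.
ratio = v10 / v40
```

Quadrupling the sample size cuts the variance of the mean by a factor of four, the 1/n signature of the law of large numbers.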
Discordance of DNA Methylation Variance Between two Accessible Human Tissues
Jiang, Ruiwei; Jones, Meaghan J.; Chen, Edith; Neumann, Sarah M.; Fraser, Hunter B.; Miller, Gregory E.; Kobor, Michael S.
2015-01-01
Population epigenetic studies have been seeking to identify differences in DNA methylation between specific exposures, demographic factors, or diseases in accessible tissues, but relatively little is known about how inter-individual variability differs between these tissues. This study presents an analysis of DNA methylation differences between matched peripheral blood mononuclear cells (PBMCs) and buccal epithelial cells (BECs), the two most accessible tissues for population studies, in 998 promoter-located CpG sites. Specifically, we compared probe-wise DNA methylation variance and how this variance related to demographic factors across the two tissues. PBMCs had overall higher DNA methylation than BECs, and the two tissues tended to differ most at genomic regions of low CpG density. Furthermore, although both tissues showed appreciable probe-wise variability, the specific regions and magnitude of variability differed strongly between tissues. Lastly, through exploratory association analysis, we found indications of differential association of BEC and PBMC methylation with demographic variables. The work presented here offers insight into the variability of DNA methylation between individuals and across tissues, and helps guide decisions on the suitability of buccal epithelial or peripheral blood mononuclear cells for the biological questions explored by epigenetic studies in human populations. PMID:25660083
PET image reconstruction: mean, variance, and optimal minimax criterion
NASA Astrophysics Data System (ADS)
Liu, Huafeng; Gao, Fei; Guo, Min; Xue, Liying; Nie, Jing; Shi, Pengcheng
2015-04-01
Given the noisy nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal minimax criterion. The proposed framework formulates the PET image reconstruction problem as a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors under possibly maximized system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties in a complete way. The framework is then optimized by ∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms, which rely on statistical modeling of the measurement data or noise, the proposed joint estimation starts from the point of view of signal energies and can handle imperfect, or even absent, a priori statistical assumptions. The performance and accuracy of the reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small-animal PET scanner and real patient scans are also conducted to assess clinical potential.
Unique interactive projection display screen
Veligdan, J.T.
1997-11-01
Projection systems continue to be the best method of producing large (1 meter and larger) displays. However, producing a large display typically requires considerable volume. The Polyplanar Optic Display (POD) is a novel type of projection display screen which, for the first time, makes it possible to produce a large projection system that is self-contained and only inches thick. In addition, this display screen is matte black in appearance, allowing it to be used in high ambient light conditions. The screen is also interactive and can be remotely controlled via an infrared optical pointer, resulting in mouse-like control of the display. Furthermore, the display need not be flat: it can be curved to wrap around a viewer, and it can even be made flexible.
Vrshek-Schallhorn, Suzanne; Stroud, Catherine B.; Mineka, Susan; Hammen, Constance; Zinbarg, Richard; Wolitzky-Taylor, Kate; Craske, Michelle G.
2016-01-01
Few studies comprehensively evaluate which types of life stress are most strongly associated with depressive episode onsets, over and above other forms of stress, and comparisons between acute and chronic stress are particularly lacking. Past research implicates major (moderate to severe) stressful life events (SLEs), and to a lesser extent, interpersonal forms of stress; research conflicts on whether dependent or independent SLEs are more potent, but theory favors dependent SLEs. The present study used five years of annual diagnostic and life stress interviews of chronic stress and SLEs from two separate samples (Sample 1 N = 432; Sample 2 N = 146) transitioning into emerging adulthood; one sample also collected early adversity interviews. Multivariate analyses simultaneously examined multiple forms of life stress to test hypotheses that all major SLEs, then particularly interpersonal forms of stress, and then dependent SLEs would contribute unique variance to major depressive episode (MDE) onsets. Person-month survival analysis consistently implicated chronic interpersonal stress and major interpersonal SLEs as statistically unique predictors of risk for MDE onset. In addition, follow-up analyses demonstrated temporal precedence for chronic stress; tested differences by gender; showed that recent chronic stress mediates the relationship between adolescent adversity and later MDE onsets; and revealed interactions of several forms of stress with socioeconomic status (SES). Specifically, as SES declined, there was an increasing role for non-interpersonal chronic stress and non-interpersonal major SLEs, coupled with a decreasing role for interpersonal chronic stress. Implications for future etiological research were discussed. PMID:26301973
Gap-filling methods to impute eddy covariance flux data by preserving variance.
NASA Astrophysics Data System (ADS)
Kunwor, S.; Staudhammer, C. L.; Starr, G.; Loescher, H. W.
2015-12-01
To represent carbon dynamics, in terms of the exchange of CO2 between terrestrial ecosystems and the atmosphere, eddy covariance (EC) data have been collected using eddy flux towers at various sites across the globe for more than two decades. However, EC measurements are missing for various reasons: precipitation, routine maintenance, or lack of vertical turbulence. In order to obtain estimates of net ecosystem exchange of carbon dioxide (NEE) with high precision and accuracy, robust gap-filling methods to impute missing data are required. While the methods used so far have provided robust estimates of the mean value of NEE, little attention has been paid to preserving the variance structures embodied in the flux data. Preserving the variance of these data will provide unbiased and precise estimates of NEE over time, which mimic natural fluctuations. We used a non-linear regression approach with moving windows of different lengths (15, 30, and 60 days) to estimate non-linear regression parameters for one year of flux data from a longleaf pine site at the Joseph Jones Ecological Research Center. We used the Michaelis-Menten and van't Hoff functions as our base models. We assessed the potential physiological drivers of these parameters with linear models using micrometeorological predictors. We then used a parameter-prediction approach to refine the non-linear gap-filling equations based on micrometeorological conditions. This provides an opportunity to incorporate additional variables, such as vapor pressure deficit (VPD) and volumetric water content (VWC), into the equations. Our preliminary results indicate that improvements in gap-filling can be gained with a 30-day moving window and additional micrometeorological predictors (as indicated by lower root mean square error (RMSE) of the predicted values of NEE). Our next steps are to use these parameter predictions from moving windows to gap-fill the data with and without incorporation of potential driver variables.
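A hedged sketch of the Michaelis-Menten step above: for daytime data, NEE gap-filling commonly fits a rectangular-hyperbola light response and predicts missing values from the fitted curve. The linearization and all numbers below are illustrative simplifications; the study itself fits non-linear regressions over moving windows:

```python
import numpy as np

def fit_michaelis_menten(par, flux):
    """Estimate (a, b) in flux = a * par / (b + par) via the
    Lineweaver-Burk linearization 1/flux = (b/a) * (1/par) + 1/a.
    A sketch only; production gap-filling uses nonlinear least squares."""
    slope, intercept = np.polyfit(1.0 / par, 1.0 / flux, 1)
    a = 1.0 / intercept
    b = slope * a
    return a, b

# Noise-free synthetic light response with a = 20, b = 400 (units arbitrary)
par = np.array([100.0, 250.0, 500.0, 1000.0, 1500.0, 2000.0])
flux = 20.0 * par / (400.0 + par)

a, b = fit_michaelis_menten(par, flux)
# Gap-fill a missing observation at PAR = 750 from the fitted curve
flux_hat = a * 750.0 / (b + 750.0)
```

On noise-free data the linearization recovers the parameters exactly; with real, noisy flux data the reciprocal transform distorts the error structure, which is one reason the study fits the non-linear form directly.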
Is the tautochrone curve unique?
NASA Astrophysics Data System (ADS)
Terra, Pedro; de Melo e Souza, Reinaldo; Farina, C.
2016-12-01
We show that there are an infinite number of tautochrone curves in addition to the cycloid solution first obtained by Christiaan Huygens in 1658. We begin by reviewing the inverse problem of finding the possible potential energy functions that lead to periodic motions of a particle whose period is a given function of its mechanical energy. There are infinitely many such solutions, called "sheared" potentials. As an interesting example, we show that a Pöschl-Teller potential and the one-dimensional Morse potentials are sheared relative to one another for negative energies, clarifying why they share the same oscillation periods for their bounded solutions. We then consider periodic motions of a particle sliding without friction over a track around its minimum under the influence of a constant gravitational field. After a brief historical survey of the tautochrone problem we show that, given the oscillation period, there is an infinity of tracks that lead to the same period. As a bonus, we show that there are infinitely many tautochrones.
ERIC Educational Resources Information Center
Liu, Duo; Chen, Xi; Chung, Kevin K. H.
2015-01-01
This study examined the relation between the performance in a visual search task and reading ability in 92 third-grade Hong Kong Chinese children. The visual search task, which is considered a measure of visual-spatial attention, accounted for unique variance in Chinese character reading after controlling for age, nonverbal intelligence,…
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.
2004-01-01
The accuracy of traditional multispectral maximum-likelihood image classification is limited by the skewed statistical distributions of reflectances from the complex heterogeneous mixture of land cover types in urban areas. This work examines the utility of local variance, fractal dimension and Moran's I index of spatial autocorrelation in segmenting multispectral satellite imagery. Tools available in the Image Characterization and Modeling System (ICAMS) were used to analyze Landsat 7 imagery of Atlanta, Georgia. Although segmentation of panchromatic images is possible using indicators of spatial complexity, different land covers often yield similar values of these indices. Better results are obtained when a surface of local fractal dimension or spatial autocorrelation is combined as an additional layer in a supervised maximum-likelihood multispectral classification. The addition of fractal dimension measures is particularly effective at resolving land cover classes within urbanized areas, as compared to per-pixel spectral classification techniques.
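Of the three spatial indicators above, local variance is the simplest to compute as an additional classification layer. A minimal sketch in plain NumPy (a window loop for clarity; ICAMS itself is not reproduced here):

```python
import numpy as np

def local_variance(img, k=3):
    """Per-pixel variance over a k x k neighborhood (edge pixels use the
    overlapping part of the window). The result can be stacked as an
    extra band for a supervised classification."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = img[max(i - pad, 0):i + pad + 1,
                      max(j - pad, 0):j + pad + 1]
            out[i, j] = win.var()
    return out

# A flat region has zero texture; a checkerboard has high local variance,
# so the texture band separates classes with identical mean reflectance.
flat = np.ones((6, 6))
checker = np.indices((6, 6)).sum(axis=0) % 2
```

In practice one would use a vectorized filter (e.g. box sums of x and x²) rather than the explicit loop, but the quantity computed is the same.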
Rodríguez-Clark, K M
2004-07-01
Understanding the changes in genetic variance which may occur as populations move from nature into captivity has been considered important when populations in captivity are used as models of wild ones. However, the inherent significance of these changes has not previously been appreciated in a conservation context: are the methods aimed at founding captive populations with gene diversity representative of natural populations likely also to capture representative quantitative genetic variation? Here, I investigate changes in heritability and a less traditional measure, evolvability, between nature and captivity for the large milkweed bug, Oncopeltus fasciatus, to address this question. Founders were collected from a 100-km transect across the north-eastern US, and five traits (wing colour, pronotum colour, wing length, early fecundity and later fecundity) were recorded for founders and for their offspring during two generations in captivity. Analyses reveal significant heritable variation for some life history and morphological traits in both environments, with comparable absolute levels of evolvability across all traits (0-30%). Randomization tests show that while changes in heritability and total phenotypic variance were highly variable, additive genetic variance and evolvability remained stable across the environmental transition in the three morphological traits (changing 1-2% or less), while they declined significantly in the two life-history traits (5-8%). Although it is unclear whether the declines were due to selection or gene-by-environment interactions (or both), such declines do not appear inevitable: captive populations with small numbers of founders may contain substantial amounts of the evolvability found in nature, at least for some traits.
Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction.
Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan
2017-02-27
Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of ionosphere, which intrinsically assume that the ionosphere field is stochastically stationary but does not take the random observational errors into account. In this paper, by treating the spatial statistical information on ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of ionosphere and TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Unit (TECU, 1 TECU = 1 × 10¹⁶ electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation around 3 TECU than others. The residual results show that the interpolation precision of the new proposed
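The ordinary Kriging baseline mentioned above can be sketched in a few lines: solve the bordered covariance system so the weights sum to one, then form the estimate and Kriging variance. The exponential covariance model and all values below are illustrative, and the variance-component estimation that distinguishes the proposed method is not included:

```python
import numpy as np

def exp_cov(h, sill=1.0, corr_len=1.0):
    """Exponential covariance model C(h) = sill * exp(-h / corr_len)."""
    return sill * np.exp(-h / corr_len)

def ordinary_kriging(xy, z, xy0, sill=1.0, corr_len=1.0):
    """Ordinary Kriging estimate and Kriging variance at point xy0."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    K = np.ones((n + 1, n + 1))          # bordered system enforcing sum(w) = 1
    K[:n, :n] = exp_cov(d, sill, corr_len)
    K[n, n] = 0.0
    k = np.ones(n + 1)
    k[:n] = exp_cov(np.linalg.norm(xy - xy0, axis=-1), sill, corr_len)
    w = np.linalg.solve(K, k)            # weights w[:n] plus Lagrange multiplier
    return w[:n] @ z, sill - w @ k       # estimate, Kriging variance

# Four TEC-like samples on a unit square (illustrative values, in TECU)
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([20.0, 28.0, 25.0, 35.0])
est, var = ordinary_kriging(xy, z, np.array([0.5, 0.5]))
```

Ordinary Kriging is an exact interpolator: at a data location it returns the observed value with zero Kriging variance, which makes a convenient sanity check.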
Chloroembryos: a unique photosynthesis system.
Puthur, Jos T; Shackira, A M; Saradhi, P Pardha; Bartels, Dorothea
2013-09-01
The embryos of some angiosperm taxa contain chlorophyll, and this chlorophyllous stage persists until the embryo matures (such embryos are hereafter referred to as chloroembryos). Besides being chlorophyllous, these embryos appear able to photosynthesize, suggesting that the chlorophyllous state of the embryo has an important role in seed development. The photosynthesis of chloroembryos is highly shade-adapted, as the embryo is embedded within supporting tissues (several layers of pod wall, seed coat and endosperm). Moreover, chloroembryos develop in a highly osmotic environment and contain various components of the photosynthetic machinery. Detailed studies of chloroembryos have elucidated the structure of the chloroplasts, the pigment composition, the photochemical activities, the rate of carbon assimilation and the shade-adaptive features. It has been shown that respired CO2 within chloroembryos is recycled by their efficient photosynthetic components, thus potentially influencing the seed's carbon economy. The major role of embryonic photosynthesis is therefore to produce both energy-rich molecules and oxygen, of which the former can be used directly for biosynthesis. Oxygen production is especially important during embryogenesis, when oxygen is limited within the enclosed seed. Because chloroembryos grow in the environment of a sugar-rich endosperm, they require adaptive mechanisms to cope with this high osmotic environment. The additional polypeptides found in the thylakoids of chloroembryo chloroplasts, compared with the thylakoids of leaf chloroplasts, have been suggested to protect the photosynthetic components of chloroembryos in an environment of high osmotic strength. Understanding the osmotic stress tolerance of chloroembryos may lead to a better understanding of the tolerance of photosynthesis to osmotic stress.
29 CFR 4204.11 - Variance of the bond/escrow and sale-contract requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... CORPORATION WITHDRAWAL LIABILITY FOR MULTIEMPLOYER PLANS VARIANCES FOR SALE OF ASSETS Variance of the... chapter to determine the date that an issuance under this subpart was provided. (Approved by the Office...
Profile Uniqueness in Student Ratings of Instruction.
ERIC Educational Resources Information Center
Weber, Larry J.; Frary, Robert B.
An approach to partitioning the variance in student ratings not previously reported in the literature is described. The new approach provides an alternative basis for interpreting faculty evaluations that overcomes objections to current practices. Student responses on evaluation forms were cluster-analyzed to establish homogeneous subgroups of…
Variance of indoor radon concentration: Major influencing factors.
Yarmoshenko, I; Vasilyev, A; Malinovsky, G; Bossew, P; Žunić, Z S; Onischenko, A; Zhukovsky, M
2016-01-15
Variance of radon concentration in the dwelling atmosphere is analysed with regard to geogenic and anthropogenic influencing factors. The analysis includes a review of 81 national and regional indoor radon surveys with varying sampling patterns, sample sizes and durations of measurement, and a detailed consideration of two regional surveys (Sverdlovsk oblast, Russia, and Niška Banja, Serbia). Analysis of the geometric standard deviation revealed that the main factors influencing the dispersion of indoor radon concentration over a territory are the area of the territory, sample size, characteristics of the measurement technique, the radon geogenic potential, building construction characteristics and living habits. As shown for Sverdlovsk oblast and the town of Niška Banja, the dispersion as quantified by the GSD is reduced by restricting the control factors to certain levels. Application of the developed approach to characterizing the radon exposure of the world population is discussed.
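The geometric standard deviation (GSD) used above as the dispersion measure is simply the exponential of the standard deviation of log concentrations; a minimal sketch (the function name is ours, and radon concentrations are assumed to be supplied as a plain array):

```python
import numpy as np

def geometric_sd(concentrations):
    # Geometric standard deviation (GSD): the dispersion measure used
    # for indoor radon, which is roughly log-normally distributed.
    logs = np.log(np.asarray(concentrations, dtype=float))
    return float(np.exp(logs.std(ddof=1)))
```

A GSD of 1 means no dispersion; larger values indicate wider spread on the multiplicative scale.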
Sources of variance of downwelling irradiance in water.
Gege, Peter; Pinnel, Nicole
2011-05-20
The downwelling irradiance in water is highly variable due to the focusing and defocusing of sunlight and skylight by the wave-modulated water surface. While the time scales and intensity variations caused by wave focusing are well studied, little is known about the induced spectral variability. Also, the impact of variations of sensor depth and inclination during the measurement on spectral irradiance has not been studied much. We have developed a model that relates the variance of spectral irradiance to the relevant parameters of the environmental and experimental conditions. A dataset from three German lakes was used to validate the model and to study the importance of each effect as a function of depth for the range of 0 to 5 m.
Analysis of variance of an underdetermined geodetic displacement problem
Darby, D.
1982-06-01
It has been suggested recently that point displacements in a free geodetic network traversing a strike-slip fault may be estimated from repeated surveys by minimizing only those displacement components normal to the strike. It is desirable to justify this procedure. We construct, from estimable quantities, a deformation parameter which is an F-statistic of the type occurring in the analysis of variance of linear models not of full rank. A test of its significance provides the criterion to justify the displacement solution. It is also interesting to study its behaviour as one varies the supposed strike of the fault. Justification of a displacement solution using data from a strike-slip fault is found, but not for data from a rift valley. The technique can be generalized to more complex patterns of deformation such as those expected near the end-zone of a fault in a dislocation model.
On computations of variance, covariance and correlation for interval data
NASA Astrophysics Data System (ADS)
Kishida, Masako
2017-02-01
In many practical situations, the data on which statistical analysis is to be performed is only known with interval uncertainty. Different combinations of values from the interval data usually lead to different values of variance, covariance, and correlation. Hence, it is desirable to compute the endpoints of possible values of these statistics. This problem is, however, NP-hard in general. This paper shows that the problem of computing the endpoints of possible values of these statistics can be rewritten as the problem of computing skewed structured singular values ν, for which there exist feasible (polynomial-time) algorithms that compute reasonably tight bounds in most practical cases. This allows one to find tight intervals of the aforementioned statistics for interval data.
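As a concrete illustration of the endpoint problem, the maximum of the population variance over a box of intervals is attained at a vertex (every value at an interval endpoint), because variance is convex in the data vector. The brute-force sketch below enumerates vertices and is exponential in the number of intervals, which is exactly why the polynomial-time bounding algorithms discussed in the paper matter; it is not the ν-based method itself:

```python
from itertools import product
import statistics

def variance_upper_bound(intervals):
    # Population variance is convex in the data vector, so its maximum
    # over a box of intervals is attained at a vertex (every value at
    # an endpoint). Enumerating all 2^n vertices is exponential in n.
    return max(statistics.pvariance(combo) for combo in product(*intervals))
```

For example, three intervals [1, 2], [3, 4], [5, 6] yield an upper endpoint of 114/27 ≈ 4.22 for the population variance.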
Variance estimation for the Federal Waterfowl Harvest Surveys
Geissler, P.H.
1988-01-01
The Federal Waterfowl Harvest Surveys provide estimates of waterfowl harvest by species for flyways and states, harvests of most other migratory game bird species (by waterfowl hunters), crippling losses for ducks, geese, and coots, days hunted, and bag per hunter. The Waterfowl Hunter Questionnaire Survey separately estimates the harvest of ducks and geese using cluster samples of hunters who buy duck stamps at sample post offices. The Waterfowl Parts Collection estimates species, age, and sex ratios from parts solicited from successful hunters who responded to the Waterfowl Hunter Questionnaire Survey in previous years. These ratios are used to partition the duck and goose harvest into species, age, and sex specific harvest estimates. Annual estimates are correlated because successful hunters who respond to the Questionnaire Survey in one year may be asked to contribute to the Parts Collection for the next three years. Bootstrap variance estimates are used because covariances among years are difficult to estimate.
Correct use of repeated measures analysis of variance.
Park, Eunsik; Cho, Meehye; Ki, Chang-Seok
2009-02-01
In biomedical research, researchers frequently use statistical procedures such as the t-test, standard analysis of variance (ANOVA), or the repeated measures ANOVA to compare means between the groups of interest. There are frequently some misuses in applying these procedures since the conditions of the experiments or statistical assumptions necessary to apply these procedures are not fully taken into consideration. In this paper, we demonstrate the correct use of repeated measures ANOVA to prevent or minimize ethical or scientific problems due to its misuse. We also describe the appropriate use of multiple comparison tests for follow-up analysis in repeated measures ANOVA. Finally, we demonstrate the use of repeated measures ANOVA by using real data and the statistical software package SPSS (SPSS Inc., USA).
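A one-way repeated measures ANOVA of the kind discussed can be computed by hand by partitioning the total sum of squares into condition, subject, and residual components; a minimal numpy sketch (illustrative only, not the SPSS procedure used in the paper):

```python
import numpy as np

def rm_anova_oneway(data):
    # One-way repeated measures ANOVA on a subjects x conditions array.
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()    # between conditions
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_err = ((data - grand) ** 2).sum() - ss_cond - ss_subj  # residual
    df_cond, df_err = k - 1, (k - 1) * (n - 1)
    f_stat = (ss_cond / df_cond) / (ss_err / df_err)
    return f_stat, df_cond, df_err
```

Removing the between-subjects sum of squares from the error term is what distinguishes this from an ordinary one-way ANOVA on the same scores.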
Objective Bayesian Comparison of Constrained Analysis of Variance Models.
Consonni, Guido; Paroli, Roberta
2016-10-04
In the social sciences we are often interested in comparing models specified by parametric equality or inequality constraints. For instance, when examining three group means [Formula: see text] through an analysis of variance (ANOVA), a model may specify that [Formula: see text], while another one may state that [Formula: see text], and finally a third model may instead suggest that all means are unrestricted. This is a challenging problem, because it involves a combination of nonnested models, as well as nested models having the same dimension. We adopt an objective Bayesian approach, requiring no prior specification from the user, and derive the posterior probability of each model under consideration. Our method is based on the intrinsic prior methodology, suitably modified to accommodate equality and inequality constraints. Focussing on normal ANOVA models, a comparative assessment is carried out through simulation studies. We also present an application to real data collected in a psychological experiment.
INTERPRETING MAGNETIC VARIANCE ANISOTROPY MEASUREMENTS IN THE SOLAR WIND
TenBarge, J. M.; Klein, K. G.; Howes, G. G.; Podesta, J. J.
2012-07-10
The magnetic variance anisotropy (A_m) of the solar wind has been used widely as a method to identify the nature of solar wind turbulent fluctuations; however, a thorough discussion of the meaning and interpretation of A_m has not appeared in the literature. This paper explores the implications and limitations of using A_m as a method for constraining the solar wind fluctuation mode composition and presents a more informative method for interpreting spacecraft data. The paper also compares predictions of A_m from linear theory to nonlinear turbulence simulations and solar wind measurements. In both cases, linear theory compares well and suggests that the solar wind for the interval studied is dominantly Alfvénic in the inertial and dissipation ranges down to scales of kρ_i ≈ 5.
Estimation of measurement variance in the context of environment statistics
NASA Astrophysics Data System (ADS)
Maiti, Pulakesh
2015-02-01
The object of environment statistics is to provide information on the environment, on its most important changes over time and across locations, and to identify the main factors that influence them. Ultimately, environment statistics are required to produce higher-quality statistical information, for which timely, reliable and comparable data are needed. A lack of proper and uniform definitions and of unambiguous classifications poses serious problems for procuring good-quality data; these problems cause measurement errors. We consider the problem of estimating measurement variance so that measures may be adopted to improve the quality of data on environmental goods and services and on value statements in economic terms. The measurement technique considered here employs personal interviewers, and the sampling considered is two-stage sampling.
A Posteriori Correction of Forecast and Observation Error Variances
NASA Technical Reports Server (NTRS)
Rukhovets, Leonid
2005-01-01
The proposed method of total observation and forecast error variance correction is based on the assumption that "observed-minus-forecast" residuals (O-F) are normally distributed, where O is an observed value and F is usually a short-term model forecast. This assumption can be accepted for several types of observations (except humidity) which are not grossly in error. The degree of closeness to a normal distribution can be estimated by the skewness (lack of symmetry) a_3 = mu_3/sigma^3 and the kurtosis a_4 = mu_4/sigma^4 - 3, where mu_i is the i-th order central moment and sigma is the standard deviation. It is well known that for a normal distribution a_3 = a_4 = 0.
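The skewness and kurtosis checks above are straightforward to compute from a sample of residuals; a sketch, assuming the O-F residuals are supplied as a plain array:

```python
import numpy as np

def normality_moments(residuals):
    # Sample skewness a3 and excess kurtosis a4 of O-F residuals;
    # both are approximately zero for normally distributed residuals.
    d = np.asarray(residuals, dtype=float)
    d = d - d.mean()
    sigma = d.std()                       # population standard deviation
    a3 = (d ** 3).mean() / sigma ** 3     # skewness (lack of symmetry)
    a4 = (d ** 4).mean() / sigma ** 4 - 3.0  # excess kurtosis
    return a3, a4
```

Values of a3 or a4 far from zero flag residual distributions for which the normality assumption, and hence the variance correction, is questionable.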
A method for the microlensed flux variance of QSOs
NASA Astrophysics Data System (ADS)
Goodman, Jeremy; Sun, Ai-Lei
2014-06-01
A fast and practical method is described for calculating the microlensed flux variance of an arbitrary source by uncorrelated stars. The required inputs are the mean convergence and shear due to the smoothed potential of the lensing galaxy, the stellar mass function, and the absolute square of the Fourier transform of the surface brightness in the source plane. The mathematical approach follows previous authors but has been generalized, streamlined, and implemented in publicly available code. Examples of its application are given for Dexter and Agol's inhomogeneous-disc models as well as the usual Gaussian sources. Since the quantity calculated is a second moment of the magnification, it is only logarithmically sensitive to the sizes of very compact sources. However, for the inferred sizes of actual quasi-stellar objects (QSOs), it has some discriminatory power and may lend itself to simple statistical tests. At the very least, it should be useful for testing the convergence of microlensing simulations.
The use of analysis of variance procedures in biological studies
Williams, B.K.
1987-01-01
The analysis of variance (ANOVA) is widely used in biological studies, yet there remains considerable confusion among researchers about the interpretation of hypotheses being tested. Ambiguities arise when statistical designs are unbalanced, and in particular when not all combinations of design factors are represented in the data. This paper clarifies the relationship among hypothesis testing, statistical modelling and computing procedures in ANOVA for unbalanced data. A simple two-factor fixed effects design is used to illustrate three common parametrizations for ANOVA models, and some associations among these parametrizations are developed. Biologically meaningful hypotheses for main effects and interactions are given in terms of each parametrization, and procedures for testing the hypotheses are described. The standard statistical computing procedures in ANOVA are given along with their corresponding hypotheses. Throughout the development unbalanced designs are assumed and attention is given to problems that arise with missing cells.
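One source of the ambiguity in unbalanced designs is that a "main effect" hypothesis depends on whether marginal means weight cells equally or by cell size; a minimal numerical illustration with hypothetical data (the cell counts and values are ours):

```python
import numpy as np

# Unbalanced 2x2 design: factor A (rows) x factor B (columns),
# with unequal numbers of observations per cell.
cells = {
    (0, 0): [10, 12],      # n = 2
    (0, 1): [20],          # n = 1
    (1, 0): [11],          # n = 1
    (1, 1): [19, 21, 23],  # n = 3
}

def marginal_means(level):
    rows = [v for (a, _), v in cells.items() if a == level]
    # Unweighted: average of cell means (cells weighted equally).
    unweighted = np.mean([np.mean(v) for v in rows])
    # Weighted: average over raw observations (cells weighted by n).
    weighted = np.mean(np.concatenate([np.asarray(v, float) for v in rows]))
    return unweighted, weighted
```

Here level 0 of factor A has an unweighted marginal mean of 15.5 but a weighted one of 14.0; the two parametrizations therefore test different "main effect" hypotheses, which is the crux of the ambiguity discussed above.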
From means and variances to persons and patterns
Grice, James W.
2015-01-01
A novel approach for conceptualizing and analyzing data from psychological studies is presented and discussed. This approach is centered on model building in an effort to explicate the structures and processes believed to generate a set of observations. These models therefore go beyond the variable-based, path models in use today which are limiting with regard to the types of inferences psychologists can draw from their research. In terms of analysis, the newer approach replaces traditional aggregate statistics such as means, variances, and covariances with methods of pattern detection and analysis. While these methods are person-centered and do not require parametric assumptions, they are both demanding and rigorous. They also provide psychologists with the information needed to draw the primary inference they often wish to make from their research; namely, the inference to best explanation. PMID:26257672
Hodological Resonance, Hodological Variance, Psychosis, and Schizophrenia: A Hypothetical Model
Birkett, Paul Brian Lawrie
2011-01-01
Schizophrenia is a disorder with a large number of clinical, neurobiological, and cognitive manifestations, none of which is invariably present. However it appears to be a single nosological entity. This article considers the likely characteristics of a pathology capable of such diverse consequences. It is argued that both deficit and psychotic symptoms can be manifestations of a single pathology. A general model of psychosis is proposed in which the informational sensitivity or responsivity of a network (“hodological resonance”) becomes so high that it activates spontaneously, to produce a hallucination, if it is in sensory cortex, or another psychotic symptom if it is elsewhere. It is argued that this can come about because of high levels of modulation such as those assumed present in affective psychosis, or because of high levels of baseline resonance, such as those expected in deafferentation syndromes associated with hallucinations, for example, Charles Bonnet. It is further proposed that schizophrenia results from a process (probably neurodevelopmental) causing widespread increases of variance in baseline resonance; consequently some networks possess high baseline resonance and become susceptible to spontaneous activation. Deficit symptoms might result from the presence of networks with increased activation thresholds. This hodological variance model is explored in terms of schizo-affective disorder, transient psychotic symptoms, diathesis-stress models, mechanisms of antipsychotic pharmacotherapy and persistence of genes predisposing to schizophrenia. Predictions and implications of the model are discussed. In particular it suggests a need for more research into psychotic states and for more single case-based studies in schizophrenia. PMID:21811475
Variance of the quantum dwell time for a nonrelativistic particle
Hahne, G. E.
2013-01-15
Munoz, Seidel, and Muga [Phys. Rev. A 79, 012108 (2009)], following an earlier proposal by Pollak and Miller [Phys. Rev. Lett. 53, 115 (1984)] in the context of a theory of a collinear chemical reaction, showed that suitable moments of a two-flux correlation function could be manipulated to yield expressions for the mean quantum dwell time and mean square quantum dwell time for a structureless particle scattering from a time-independent potential energy field between two parallel lines in a two-dimensional spacetime. The present work proposes a generalization to a charged, nonrelativistic particle scattering from a transient, spatially confined electromagnetic vector potential in four-dimensional spacetime. The geometry of the spacetime domain is that of the slab between a pair of parallel planes, in particular, those defined by constant values of the third (z) spatial coordinate. The mean Nth power, N = 1, 2, 3, …, of the quantum dwell time in the slab is given by an expression involving an N-flux-correlation function. All these means are shown to be nonnegative. The N = 1 formula reduces to an S-matrix result published previously [G. E. Hahne, J. Phys. A 36, 7149 (2003)]; an explicit formula for N = 2, and hence for the variance of the dwell time in terms of the S-matrix, is worked out. A formula representing an incommensurability principle between variances of the output-minus-input flux of a pair of dynamical variables (such as the particle's time flux and others) is derived.
A variance-decomposition approach to investigating multiscale habitat associations
Lawler, J.J.; Edwards, T.C.
2006-01-01
The recognition of the importance of spatial scale in ecology has led many researchers to take multiscale approaches to studying habitat associations. However, few of the studies that investigate habitat associations at multiple spatial scales have considered the potential effects of cross-scale correlations in measured habitat variables. When cross-scale correlations in such studies are strong, conclusions drawn about the relative strength of habitat associations at different spatial scales may be inaccurate. Here we adapt and demonstrate an analytical technique based on variance decomposition for quantifying the influence of cross-scale correlations on multiscale habitat associations. We used the technique to quantify the variation in nest-site locations of Red-naped Sapsuckers (Sphyrapicus nuchalis) and Northern Flickers (Colaptes auratus) associated with habitat descriptors at three spatial scales. We demonstrate how the method can be used to identify components of variation that are associated only with factors at a single spatial scale as well as shared components of variation that represent cross-scale correlations. Despite the fact that no explanatory variables in our models were highly correlated (r < 0.60), we found that shared components of variation reflecting cross-scale correlations accounted for roughly half of the deviance explained by the models. These results highlight the importance of both conducting habitat analyses at multiple spatial scales and of quantifying the effects of cross-scale correlations in such analyses. Given the limits of conventional analytical techniques, we recommend alternative methods, such as the variance-decomposition technique demonstrated here, for analyzing habitat associations at multiple spatial scales. © The Cooper Ornithological Society 2006.
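The pure-scale and shared components described above can be sketched with ordinary least-squares R² values: the shared component is what the two scales' fits explain in common. The function names and synthetic data below are ours, not the authors':

```python
import numpy as np

def r2(X, y):
    # Coefficient of determination of an OLS fit (with intercept)
    # of y on the columns of X.
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1.0 - (y - X @ beta).var() / y.var()

def decompose(Xa, Xb, y):
    # Classical variance partitioning for predictors measured at two
    # scales a and b; "shared" reflects cross-scale correlation.
    ra, rb = r2(Xa, y), r2(Xb, y)
    rab = r2(np.column_stack([Xa, Xb]), y)
    return {"pure_a": rab - rb, "pure_b": rab - ra,
            "shared": ra + rb - rab, "total": rab}
```

By construction pure_a + pure_b + shared equals the total explained variance, and a large shared component signals exactly the cross-scale correlation problem the authors warn about.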
Water vapor variance measurements using a Raman lidar
NASA Technical Reports Server (NTRS)
Evans, K.; Melfi, S. H.; Ferrare, R.; Whiteman, D.
1992-01-01
Because of the importance of atmospheric water vapor variance, we have analyzed data from the NASA/Goddard Raman lidar to obtain temporal scales of water vapor mixing ratio as a function of altitude over observation periods extending to 12 hours. The ground-based lidar measures water vapor mixing ratio from near the earth's surface to an altitude of 9-10 km. Moisture profiles are acquired once every minute with 75 m vertical resolution. Data at each 75 meter altitude level can be displayed as a function of time from the beginning to the end of an observation period. These time sequences have been spectrally analyzed using a fast Fourier transform technique. An example of such a temporal spectrum obtained between 00:22 and 10:29 UT on December 6, 1991 is shown in the figure. The curve shown on the figure represents the spectral average of data from 11 height levels centered on an altitude of 1 km (1 plus or minus 0.375 km). The spectrum shows a decrease in energy density with frequency which generally follows a -5/3 power law over the spectral interval 3x10^-5 to 4x10^-3 Hz. The flattening of the spectrum for frequencies greater than 6x10^-3 Hz is most likely a measure of instrumental noise. Spectra like that shown in the figure have been calculated for other altitudes and show changes in spectral features with height. Spectral analyses versus height have been performed for several observation periods, demonstrating changes in water vapor mixing ratio spectral character from one observation period to the next. The combination of these temporal spectra with independent measurements of winds aloft provides an opportunity to infer spatial scales of moisture variance.
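The temporal spectra described can be reproduced in outline with a fast Fourier transform; a sketch, assuming a uniformly sampled mixing-ratio series at one altitude level (sampling interval dt in seconds):

```python
import numpy as np

def temporal_spectrum(series, dt):
    # One-sided power spectrum of a uniformly sampled time series,
    # e.g. mixing ratio at one 75 m altitude level, dt seconds apart.
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                        # remove the mean component
    spec = np.abs(np.fft.rfft(x)) ** 2 * dt / len(x)
    freqs = np.fft.rfftfreq(len(x), d=dt)
    return freqs[1:], spec[1:]              # drop the zero-frequency bin
```

Fitting a power law to the resulting spectrum over the inertial range would recover the -5/3 slope described in the abstract.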
Fitts, Douglas A
2010-11-01
The variable-criteria sequential stopping rule (SSR) is a method for conducting planned experiments in stages after the addition of new subjects until the experiment is stopped because the p value is less than or equal to a lower criterion and the null hypothesis has been rejected, the p value is above an upper criterion, or a maximum sample size has been reached. Alpha is controlled at the expected level. The table of stopping criteria has been validated for a t test or ANOVA with four groups. New simulations in this article demonstrate that the SSR can be used with unequal sample sizes or heterogeneous variances in a t test. As with the usual t test, the use of a separate-variance term instead of a pooled-variance term prevents an inflation of alpha with heterogeneous variances. Simulations validate the original table of criteria for up to 20 groups without a drift of alpha. When used with a multigroup ANOVA, a planned contrast can be substituted for the global F as the focus for the stopping rule. The SSR is recommended when significance tests are appropriate and when the null hypothesis can be tested in stages. Because of its efficiency, the SSR should be used instead of the usual approach to the t test or ANOVA when subjects are expensive, rare, or limited by ethical considerations such as pain or distress.
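The stopping logic can be sketched as a loop over accumulating subjects; the lower and upper criteria below are illustrative placeholders, not the validated table values from Fitts, and scipy's Welch test stands in for the separate-variance t test (assumes scipy is available):

```python
import numpy as np
from scipy.stats import ttest_ind  # Welch test via equal_var=False

def ssr_t_test(sample_a, sample_b, lower=0.01, upper=0.36, n_max=64):
    # Sequential stopping rule sketch: retest as subjects accumulate
    # (at least 4 per group), stop when p <= lower (reject), when
    # p >= upper (stop without rejecting), or when n_max is reached.
    n, p = 0, None
    for n in range(4, min(len(sample_a), len(sample_b), n_max) + 1):
        # separate-variance (Welch) term guards against heterogeneous variances
        p = ttest_ind(sample_a[:n], sample_b[:n], equal_var=False).pvalue
        if p <= lower:
            return "reject", n, p
        if p >= upper:
            return "stop", n, p
    return "max_n", n, p
```

In the real procedure the (lower, upper) pair is chosen from a validated table so that overall alpha stays at the nominal level despite repeated testing.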
Mineralogy of Tagish Lake, a Unique Type 2 Carbonaceous Chondrite
NASA Technical Reports Server (NTRS)
Gounelle, M.; Zolensky, M. E.; Tonui, E.; Mikouchi, T.
2001-01-01
We have identified in Tagish Lake an abundant carbonate-poor lithology and a less common carbonate-rich lithology. Tagish Lake shows similarities to, and differences from, CMs and CI1s. It is a unique carbonaceous chondrite recording specific aqueous alteration conditions. Additional information is contained in the original extended abstract.
FRESIP project observations of cataclysmic variables: A unique opportunity
NASA Technical Reports Server (NTRS)
Howell, Steve B.
1994-01-01
FRESIP Project observations of cataclysmic variables would provide unique data sets. For known cataclysmic variables they would provide extended, well-sampled temporal photometric information; in addition, they would constitute a large-area deep survey, obtaining a complete magnitude-limited sample of the galaxy within the volume cone defined by the FRESIP field of view.
What Is Valuable and Unique about the Educational Psychologist?
ERIC Educational Resources Information Center
Ashton, Rebecca; Roberts, Elizabeth
2006-01-01
This paper describes a small-scale piece of research identifying which aspects of the EP role are considered valuable by SENCos and by EPs themselves. In addition, both groups were asked to identify whether they felt these aspects were uniquely offered by EPs or whether other professionals offered similar or identical services. The differences…
On pathwise uniqueness of stochastic evolution equations in Hilbert spaces
NASA Astrophysics Data System (ADS)
Xie, Bin
2008-08-01
The pathwise uniqueness of stochastic evolution equations driven by Q-Wiener processes is mainly investigated in this article. We focus on the case that the modulus of the continuity of the coefficients is not controlled by a linear function. Additionally, we show that the corresponding diffusion process is Feller.
40 CFR 142.61 - Variances from the maximum contaminant level for fluoride.
Code of Federal Regulations, 2010 CFR
2010-07-01
... responsibility (primacy state) that issues variances shall require a community water system to install and/or use... (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION... application by a system for a variance, the Administrator or primacy state that issues variances...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-06
... Variances for Hazardous Selenium Bearing Waste AGENCY: Environmental Protection Agency (EPA). ACTION: Direct...-Bearing Waste II. Basis for This Determination III. Development of This Variance A. U.S. Ecology Nevada... from 0.16 mg/L to 5.7 mg/L TCLP. C. Site-Specific Treatment Variance for Selenium-Bearing Waste On...
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2011-01-01
Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.
Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano
2008-07-01
Consider the problem of transmitting multiple video streams to fulfill a constant bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criteria. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. By working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed-form solution to either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty of less than 0.5 dB in terms of average PSNR. In addition, our analysis provides an explicit relationship between model parameters and this loss. In order to smooth the distortion along time as well, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general, and it can be adopted for any video and image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences, without sacrificing coding efficiency.
NASA Astrophysics Data System (ADS)
Maginnis, P. A.; West, M.; Dullerud, G. E.
2016-10-01
We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes, specifically countable-state, discrete-time Markov chains driven by additive Poisson noise (lattice discrete-time Markov chains). In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity: gene expression (affine state-dependent rates), aerosol particle coagulation with emission, and human immunodeficiency virus infection (the latter two with nonlinear state-dependent rates). Our algorithm views the system dynamics as a "black box", i.e., we only require control of the pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general case of nonlinear state-dependent intensity rates, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
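The negative-correlation idea can be illustrated in miniature with antithetic uniform draws, a simpler relative of the paper's paired-trajectory construction (the function below is our sketch, not the paper's algorithm):

```python
import numpy as np

def antithetic_mean(f, n, rng):
    # Antithetic variates: pair each uniform draw u with 1 - u. The two
    # streams are negatively correlated, so for monotone f the paired
    # average estimates E[f(U)] with lower variance than 2n i.i.d. draws.
    u = rng.random(n)
    return float((0.5 * (f(u) + f(1.0 - u))).mean())
```

Because only the pseudorandom inputs are manipulated, the same "black-box" retrofit idea applies: the simulation code itself is unchanged.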
Age at menarche as a fitness trait: nonadditive genetic variance detected in a large twin sample.
Treloar, S A; Martin, N G
1990-01-01
The etiological role of genotype and environment in recalled age at menarche was examined using an unselected sample of 1,177 MZ and 711 DZ twin pairs aged 18 years and older. The correlation for onset of menarche between MZ twins was .65 +/- .03, and that for DZ pairs was .18 +/- .04, although these differed somewhat between four birth cohorts. Environmental factors were more important in the older cohorts (perhaps because of less reliable recall). Total genotypic variance (additive plus nonadditive) ranged from 61% in the oldest cohort to 68% in the youngest cohort. In the oldest birth cohort (born before 1939), there was evidence of greater influence of environmental factors on age at menarche in the second-born twin, although there was no other evidence in the data that birth trauma affected timing. The greater part of the genetic variance was nonadditive (dominance or epistasis), and this is typical of a fitness trait. It appears that genetic nonadditivity is in the decreasing direction, and this is consistent with selection for early menarche during human evolution. Breakdown of inbreeding depression as a possible explanation for the secular decline in age at menarche is discussed. PMID:2349942
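The reported twin correlations can be turned into Falconer-style point estimates under an ADE model (expected r_MZ = A + D, r_DZ = A/2 + D/4, E = 1 - r_MZ); a back-of-envelope sketch using the correlations quoted above, not the authors' full model fitting:

```python
# Reported correlations for age at menarche: r_MZ = .65, r_DZ = .18.
r_mz, r_dz = 0.65, 0.18

# Solve the ADE expectations r_MZ = A + D and r_DZ = A/2 + D/4:
D = 2 * (r_mz - 2 * r_dz)  # nonadditive (dominance/epistasis) variance
A = r_mz - D               # additive genetic variance
E = 1 - r_mz               # unique environmental variance
```

The point estimates (A ≈ .07, D ≈ .58, E ≈ .35) reproduce the abstract's conclusions: total genotypic variance around 65%, most of it nonadditive, as expected for a fitness trait.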
Nonlocal image restoration with bilateral variance estimation: a low-rank approach.
Dong, Weisheng; Shi, Guangming; Li, Xin
2013-02-01
Simultaneous sparse coding (SSC) or nonlocal image representation has shown great potential in various low-level vision tasks, leading to several state-of-the-art image restoration techniques, including BM3D and LSSC. However, it still lacks a physically plausible explanation about why SSC is a better model than conventional sparse coding for the class of natural images. Meanwhile, the problem of sparsity optimization, especially when tangled with dictionary learning, is computationally difficult to solve. In this paper, we take a low-rank approach toward SSC and provide a conceptually simple interpretation from a bilateral variance estimation perspective, namely that singular-value decomposition of similar packed patches can be viewed as pooling both local and nonlocal information for estimating signal variances. Such perspective inspires us to develop a new class of image restoration algorithms called spatially adaptive iterative singular-value thresholding (SAIST). For noisy data, SAIST generalizes the celebrated BayesShrink from local to nonlocal models; for incomplete data, SAIST extends the previous deterministic annealing-based solution to sparsity optimization by incorporating the idea of dictionary learning. In addition to conceptual simplicity and computational efficiency, SAIST has achieved highly competent (often better) objective performance compared to several state-of-the-art methods in image denoising and completion experiments. Our subjective quality results compare favorably with those obtained by existing techniques, especially at high noise levels and with a large amount of missing data.
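The core step behind the SAIST idea, soft-thresholding the singular values of a matrix of similar patches, can be sketched directly (tau and the function name are ours; the full algorithm iterates this with patch grouping and adaptive thresholds):

```python
import numpy as np

def svt(patch_matrix, tau):
    # Singular-value soft-thresholding: shrink the singular values of a
    # matrix whose columns are similar vectorized patches. The low-rank
    # result pools local and nonlocal information to suppress noise.
    U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Small singular values, which mostly carry noise, are driven to zero, while the dominant structure shared across similar patches survives.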
Waste Isolation Pilot Plant no-migration variance petition. Executive summary
Not Available
1990-12-31
Section 3004 of RCRA allows EPA to grant a variance from the land disposal restrictions when a demonstration can be made that, to a reasonable degree of certainty, there will be no migration of hazardous constituents from the disposal unit for as long as the waste remains hazardous. Specific requirements for making this demonstration are found in 40 CFR 268.6, and EPA has published a draft guidance document to assist petitioners in preparing a variance request. Throughout the course of preparing this petition, technical staff from DOE, EPA, and their contractors have met frequently to discuss and attempt to resolve issues specific to radioactive mixed waste and the WIPP facility. The DOE believes it meets or exceeds all requirements set forth for making a successful "no-migration" demonstration. The petition presents information under five general headings: (1) waste information; (2) site characterization; (3) facility information; (4) assessment of environmental impacts, including the results of waste mobility modeling; and (5) analysis of uncertainties. Additional background and supporting documentation are contained in the 15 appendices to the petition, as well as in an extensive addendum published in October 1989.
Variance reduction for Fokker–Planck based particle Monte Carlo schemes
Gorji, M. Hossein; Andric, Nemanja; Jenny, Patrick
2015-08-15
Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational schemes are derived and reviewed, and it is shown that these methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities, in the case of two-weight methods, or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea is to synthesize an additional stochastic process with a known solution, which is solved simultaneously alongside the main one. By correlating the two processes, the statistical errors can be reduced dramatically, especially at low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette, and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
Quantizing and characterizing the variance of hand postures in a novel transformation task.
Vinjamuri, Ramana; Sun, Mingui; Weber, Douglas; Wang, Wei; Crammond, Donald; Mao, Zhi-Hong
2009-01-01
This paper presents a numerical approach using principal component analysis (PCA) to quantize and characterize the variance of hand postures in a novel posture transformation task. Five subjects were tested in two tasks in which a cursor could be moved by varying hand posture, accomplished through a weighted linear combination of the 14 sensor signals of a data glove. The first task was to move a cursor on a computer screen in one dimension, horizontally, by posing various hand postures. To increase the complexity of control, in the second task subjects were asked to move the cursor on the screen in two dimensions. Joint angles were measured during the experiment by the data glove. In both tasks, subjects participated in multiple trials until they achieved smooth cursor movement trajectories. PCA was performed over the postures obtained during the multiple trials of the two tasks. Across the trials, in both tasks a gradual decrease in the number of principal components was observed. This implies that the variance in the postures decreases with learning. Additionally, this might indicate that through learning, subjects adopted postural synergies (or eigen postures) in this novel geometrical environment. Postural synergies, when visualized, revealed task-specific synergies.
Cantele, Francesca; Lanzavecchia, Salvatore; Bellon, Pier Luigi
2004-11-01
VIVA is a software library that obtains low-resolution models of icosahedral viruses from projections observed in the electron microscope. VIVA works in a fully automatic way without any initial model. This feature eliminates the possibility of bias that could originate from aligning the projections to an external preliminary model. VIVA determines the viewing direction of the virus images by computing sets of single particle reconstructions (SPR), followed by variance analysis and classification of the 3D models. All structures are reduced in size to speed up computation, which limits the resolution of a VIVA reconstruction. The models obtained can subsequently be refined with standard libraries. To date, VIVA has successfully solved the structure of all viruses tested, some of which were considered refractory particles. The VIVA library is written in the 'C' language and is devised to run on widespread Linux computers.
NASA Astrophysics Data System (ADS)
Zhang, M.; Zhang, Y.; Lichtner, P. C.
2013-12-01
...tailing behavior of the FHM can generally be captured by the HSMs. At all the variances tested, the 8-unit upscaled model is always the most accurate. When the variance is low to moderate, this model can provide accurate to adequate predictions of all the FHM plume moments. In addition, upscaled dispersivities computed with the stochastic versus deterministic techniques yield similar solute predictions, which suggests that in this analysis an ergodic transport regime has emerged. However, when the variance of ln(K) increases to 4.5, the upscaled dispersivities predicted by the stochastic methods result in significant upstream dispersion that is nonphysical. In this case, the HSMs cannot capture the FHM plume moments for the given ln(K) variance. In summary, simulation results suggest that the upscaled dispersivity can be used to accurately capture solute transport in low ln(K) variance systems but fails to describe the solute motion if the system variance is high. Reference: Mingkan Zhang, and Ye Zhang, Multiscale, Multi-variance Dispersivity Upscaling for A Three-Dimensional Hierarchical Aquifer: Developing and Testing a Parallel Random Walk Method with a Drift Term in the Dispersion Tensor, Water Resources Research, in preparation.
Arachnoiditis ossificans and syringomyelia: A unique presentation
Opalak, Charles F.; Opalak, Michael E.
2015-01-01
Background: Arachnoiditis ossificans (AO) is a rare disorder that was differentiated from leptomeningeal calcification by Kaufman and Dunsmore in 1971. It generally presents with progressive lower extremity myelopathy. Though the underlying etiology has yet to be fully described, it has been associated with various predisposing factors including vascular malformations, previous intradural surgery, myelograms, and adhesive arachnoiditis. Associated conditions include syringomyelia and arachnoid cyst. The preferred diagnostic method is noncontrast computed tomography (CT). Surgical intervention is still controversial and can include decompression and duroplasty or durotomy. Case Description: The authors report the case of a 62-year-old male with a history of paraplegia who presented with a urinary tract infection and dysautonomia. His past surgical history was notable for a C4–C6 anterior fusion and an intrathecal phenol injection for spasticity. Magnetic resonance imaging (MRI) also demonstrated a T6-conus syrinx. At surgery, there was significant ossification of the arachnoid/dura, which was removed. After a drain was placed in the syrinx, there was significant neurologic improvement. Conclusion: This case demonstrates a unique presentation of AO and highlights the need for CT imaging when a noncommunicating syrinx is identified. In addition, surgical decompression can achieve good results when AO is associated with concurrent compressive lesions. PMID:26693389
Unique Ganglioside Recognition Strategies for Clostridial Neurotoxins
Benson, Marc A.; Fu, Zhuji; Kim, Jung-Ja P.; Baldwin, Michael R.
2012-03-15
Botulinum neurotoxins (BoNTs) and tetanus neurotoxin are the causative agents of the paralytic diseases botulism and tetanus, respectively. The potency of the clostridial neurotoxins (CNTs) relies primarily on their highly specific binding to nerve terminals and cleavage of SNARE proteins. Although individual CNTs utilize distinct proteins for entry, they share common ganglioside co-receptors. Here, we report the crystal structure of the BoNT/F receptor-binding domain in complex with the sugar moiety of ganglioside GD1a. GD1a binds in a shallow groove formed by the conserved peptide motif E ... H ... SXWY ... G, with additional stabilizing interactions provided by two arginine residues. Comparative analysis of BoNT/F with other CNTs revealed several differences in the interactions of each toxin with ganglioside. Notably, exchange of BoNT/F His-1241 with the corresponding lysine residue of BoNT/E resulted in increased affinity for GD1a and conferred the ability to bind ganglioside GM1a. Conversely, BoNT/E was not able to bind GM1a, demonstrating a discrete mechanism of ganglioside recognition. These findings provide a structural basis for ganglioside binding among the CNTs and show that individual toxins utilize unique ganglioside recognition strategies.
Technology Transfer Automated Retrieval System (TEKTRAN)
UV spectral fingerprints, in combination with analysis of variance-principal components analysis (ANOVA-PCA), were used to identify sources of variance in 7 broccoli samples composed of two cultivars and seven different growing conditions (four levels of Se irrigation, organic farming, and convention...
The association of Kienbock's disease and ulnar variance in the Iranian population.
Afshar, A; Aminzadeh-Gohari, A; Yekta, Z
2013-06-01
We retrospectively determined the distribution of ulnar variance in 60 patients with Kienböck's disease. We also measured the ulnar variances in 400 standard wrist radiographs in the normal adult population. The mean ulnar variance of the Kienböck's group was -1.1 mm (SD 1.7) and the mean ulnar variance of the general population was +0.7 (SD 1.5), which was significantly different. In the Kienböck's disease group there were 38 (63%) with ulnar negative, 16 (27%) neutral and six (10%) with ulnar positive variance. The preponderance of ulnar negative variance was statistically significant. There was an association between ulnar negative variance and the development of Kienböck's disease in this study.
Constructing Dense Graphs with Unique Hamiltonian Cycles
ERIC Educational Resources Information Center
Lynch, Mark A. M.
2012-01-01
It is not difficult to construct dense graphs containing Hamiltonian cycles, but it is difficult to generate dense graphs that are guaranteed to contain a unique Hamiltonian cycle. This article presents an algorithm for generating arbitrarily large simple graphs containing "unique" Hamiltonian cycles. These graphs can be turned into dense graphs…
Beyond the GUM: variance-based sensitivity analysis in metrology
NASA Astrophysics Data System (ADS)
Lira, I.
2016-07-01
Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiarized with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand.
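The linear-model claim above, that variance-based sensitivity analysis adds nothing beyond the terms of the law of propagation of uncertainties, can be checked numerically. A minimal sketch with hypothetical sensitivity coefficients and input standard uncertainties (not taken from the article's examples):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear measurement model y = c1*x1 + c2*x2
c = np.array([2.0, -1.0])   # sensitivity coefficients (assumed)
u = np.array([0.3, 0.5])    # standard uncertainties of the inputs (assumed)

# GUM law of propagation for independent inputs: u_y^2 = sum (c_i * u_i)^2
u_y2 = np.sum((c * u) ** 2)

# First-order variance-based sensitivity indices for a linear model are
# exactly the normalized law-of-propagation terms:
S = (c * u) ** 2 / u_y2

# Monte Carlo check that the terms really add up to Var(y)
n = 200_000
x = rng.normal(0.0, u, size=(n, 2))   # scale broadcasts over the 2 inputs
y = x @ c
print(S)                               # contributions of x1, x2
print(abs(y.var() - u_y2) < 0.02)      # sampled Var(y) matches u_y^2
```

For a non-linear model the indices would have to be estimated by sampling (e.g. Sobol' estimators), which is where sensitivity analysis goes beyond the GUM computation.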
Cosmic variance and the measurement of the local Hubble parameter.
Marra, Valerio; Amendola, Luca; Sawicki, Ignacy; Valkenburg, Wessel
2013-06-14
There is an approximately 9% discrepancy, corresponding to 2.4 σ, between two independent constraints on the expansion rate of the Universe: one indirectly arising from the cosmic microwave background and baryon acoustic oscillations and one more directly obtained from local measurements of the relation between redshifts and distances to sources. We argue that by taking into account the local gravitational potential at the position of the observer this tension--strengthened by the recent Planck results--is partially relieved and the concordance of the Standard Model of cosmology increased. We estimate that measurements of the local Hubble constant are subject to a cosmic variance of about 2.4% (limiting the local sample to redshifts z > 0.010) or 1.3% (limiting it to z > 0.023), a more significant correction than that taken into account already. Nonetheless, we show that one would need a very rare fluctuation to fully explain the offset in the Hubble rates. If this tension is further strengthened, a cosmology beyond the Standard Model may prove necessary.
Linear constraint minimum variance beamformer functional magnetic resonance inverse imaging.
Lin, Fa-Hsuan; Witzel, Thomas; Zeffiro, Thomas A; Belliveau, John W
2008-11-01
Accurate estimation of the timing of neural activity is required to fully model the information flow among functionally specialized regions whose joint activity underlies perception, cognition and action. Attempts to detect the fine temporal structure of task-related activity would benefit from functional imaging methods allowing higher sampling rates. Spatial filtering techniques have been used in magnetoencephalography source imaging applications. In this work, we use the linear constraint minimum variance (LCMV) beamformer localization method to reconstruct single-shot volumetric functional magnetic resonance imaging (fMRI) data using signals acquired simultaneously from all channels of a high-density radio-frequency (RF) coil array. The LCMV beamformer method generalizes the existing volumetric magnetic resonance inverse imaging (InI) technique, achieving higher detection sensitivity while maintaining whole-brain spatial coverage and 100 ms temporal resolution. In this paper, we begin by introducing the LCMV reconstruction formulation and then quantitatively assess its performance using both simulated and empirical data. To demonstrate the sensitivity and inter-subject reliability of volumetric LCMV InI, we employ an event-related design to probe the spatial and temporal properties of task-related hemodynamic signal modulations in primary visual cortex. Compared to minimum-norm estimate (MNE) reconstructions, LCMV offers better localization accuracy and superior detection sensitivity. Robust results from both single subject and group analyses demonstrate the excellent sensitivity and specificity of volumetric InI in detecting the spatial and temporal structure of task-related brain activity.
Analysis of variance (ANOVA) models in lower extremity wounds.
Reed, James F
2003-06-01
Consider a study in which 2 new treatments are being compared with a control group. One way to compare outcomes would simply be to compare each of the 2 treatments with the control and the 2 treatments against each other, using 3 Student t tests. If we were to compare 4 treatment groups, then we would need 6 t tests. The difficulty with using multiple t tests is that as the number of groups increases, so will the likelihood of finding a difference between any pair of groups simply by chance when no real difference exists: by definition, a Type I error. If we were to perform 3 separate t tests, each at alpha = .05, the experiment-wise error rate increases to .14. As the number of t tests increases, the experiment-wise error rate increases rather rapidly. The solution to this problem is to use analysis of variance (ANOVA) methods. Three basic ANOVA designs are reviewed, with hypothetical examples drawn from the literature to illustrate single-factor ANOVA, repeated measures ANOVA, and randomized block ANOVA. "No frills" SPSS or SAS code for each of these designs and the examples used are available from the author on request.
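The error-rate arithmetic in the abstract above can be reproduced directly. A minimal sketch (it assumes the pairwise tests are independent, the same approximation the abstract's figure of .14 rests on):

```python
from math import comb

def familywise_error(alpha: float, n_groups: int) -> float:
    """Probability of at least one false positive when every pair of
    n_groups is compared with an independent t test at level alpha."""
    m = comb(n_groups, 2)          # number of pairwise t tests
    return 1 - (1 - alpha) ** m

# 3 groups -> 3 tests: the error rate rises from .05 to about .14
print(round(familywise_error(0.05, 3), 2))   # 0.14
# 4 groups -> 6 tests: it climbs further
print(round(familywise_error(0.05, 4), 2))   # 0.26
```

The rapid growth of `comb(n_groups, 2)` is exactly why ANOVA, which tests all groups in one F statistic, is preferred over repeated t tests.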
A model selection approach to analysis of variance and covariance.
Alber, Susan A; Weiss, Robert E
2009-06-15
An alternative to analysis of variance is a model selection approach where every partition of the treatment means into clusters with equal value is treated as a separate model. The null hypothesis that all treatments are equal corresponds to the partition with all means in a single cluster. The alternative hypothesis corresponds to the set of all other partitions of treatment means. A model selection approach can also be used for a treatment by covariate interaction, where the null hypothesis and each alternative correspond to a partition of treatments into clusters with equal covariate effects. We extend the partition-as-model approach to simultaneous inference for both the treatment main effect and treatment interaction with a continuous covariate, with separate partitions for the intercepts and treatment-specific slopes. The model space is the Cartesian product of the intercept partition and the slope partition, and we develop five joint priors for this model space. In four of these priors the intercept and slope partitions are dependent. We advise on setting priors over models, and we use the model to analyze an orthodontic data set that compares the frictional resistance created by orthodontic fixtures.
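The size of the partition-as-model space grows quickly with the number of treatments (it is the Bell number). A sketch enumerating every candidate model for four hypothetical treatments, with the single-cluster partition playing the role of the null:

```python
def partitions(items):
    """Yield every partition of a list as a list of clusters."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for p in partitions(rest):
        # place `first` into each existing cluster in turn...
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        # ...or start a new cluster with it
        yield p + [[first]]

treatments = ["A", "B", "C", "D"]
models = list(partitions(treatments))
null = [m for m in models if len(m) == 1]   # all means in one cluster

print(len(models))   # Bell(4) = 15 candidate models
print(len(null))     # exactly one null model
```

For 4 treatments the full model space has 15 members; combining intercept and slope partitions as a Cartesian product, as the abstract describes, squares this count to 225.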
Analysis of variance in neuroreceptor ligand imaging studies.
Ko, Ji Hyun; Reilhac, Anthonin; Ray, Nicola; Rusjan, Pablo; Bloomfield, Peter; Pellecchia, Giovanna; Houle, Sylvain; Strafella, Antonio P
2011-01-01
Radioligand positron emission tomography (PET) with dual scan paradigms can provide valuable insight into changes in synaptic neurotransmitter concentration due to experimental manipulation. The residual t-test has been utilized to improve the sensitivity of the t-test in PET studies. However, no further development of statistical tests using residuals has been proposed so far for cases with more than two conditions. Here, we propose the residual f-test, a one-way analysis of variance (ANOVA), and examine its feasibility using simulated [(11)C]raclopride PET data. We also revisit data from our previously published [(11)C]raclopride PET study, in which 10 individuals underwent three PET scans under different conditions. We found that the residual f-test is more sensitive than the conventional f-test while still controlling the type I error rate. The test will therefore allow us to reliably test hypotheses in the smaller sample sizes often used in explorative PET studies.
Prediction of membrane protein types using maximum variance projection
NASA Astrophysics Data System (ADS)
Wang, Tong; Yang, Jie
2011-05-01
Predicting membrane protein types has a positive influence on further biological function analysis. Quickly and efficiently annotating the type of an uncharacterized membrane protein is a challenge. In this work, a system based on maximum variance projection (MVP) is proposed to improve the prediction performance for membrane protein types. The feature extraction step is based on a hybrid representation approach that fuses Position-Specific Score Matrix composition. The protein sequences are quantized in a high-dimensional space using this representation strategy. Analysing such high-dimensional feature vectors raises problems such as long computing times and high classifier complexity. To address this issue, MVP, a novel dimensionality reduction algorithm, is introduced to extract the essential features from the high-dimensional feature space. Then, a K-nearest neighbour classifier is employed to identify the types of membrane proteins based on their reduced low-dimensional features. As a result, the jackknife and independent dataset test success rates of this model reach 86.1 and 88.4%, respectively, suggesting that the proposed approach is very promising for predicting membrane protein types.
A fast minimum variance beamforming method using principal component analysis.
Kim, Kyuhong; Park, Suhyun; Kim, Jungho; Park, Sung-Bae; Bae, MooHo
2014-06-01
Minimum variance (MV) beamforming has been studied for improving the performance of diagnostic ultrasound imaging systems. However, it is not easy for MV beamforming to be implemented in a real-time ultrasound imaging system because of the enormous computation time associated with the covariance matrix inversion. In this paper, to address this problem, we propose a new fast MV beamforming method that almost optimally approximates MV beamforming while greatly reducing the computational complexity through dimensionality reduction using principal component analysis (PCA). The principal components are estimated offline from pre-calculated conventional MV weights. Thus, the proposed method does not directly calculate the MV weights but approximates them by a linear combination of a few selected dominant principal components. The combinational weights are calculated in almost the same way as in MV beamforming, but in the domain of the beamformer input signal transformed by the PCA, where the dimension of the transformed covariance matrix is identical to the number of selected principal component vectors. Both computer simulations and experiments were carried out to verify the effectiveness of the proposed method, with echo signals from simulations as well as phantom and in vivo experiments. It is confirmed that our method can reduce the dimension of the covariance matrix down to as low as 2 × 2 while maintaining the good image quality of MV beamforming.
Fast Minimum Variance Beamforming Based on Legendre Polynomials.
Bae, MooHo; Park, Sung Bae; Kwon, Sung Jae
2016-09-01
Currently, minimum variance beamforming (MV) is actively investigated as a method that can improve the performance of an ultrasound beamformer, in terms of the lateral and contrast resolution. However, this method has the disadvantage of excessive computational complexity since the inverse spatial covariance matrix must be calculated. Some noteworthy methods among various attempts to solve this problem include beam space adaptive beamforming methods and the fast MV method based on principal component analysis, which are similar in that the original signal in the element space is transformed to another domain using an orthonormal basis matrix and the dimension of the covariance matrix is reduced by approximating the matrix only with important components of the matrix, hence making the inversion of the matrix very simple. Recently, we proposed a new method with further reduced computational demand that uses Legendre polynomials as the basis matrix for such a transformation. In this paper, we verify the efficacy of the proposed method through Field II simulations as well as in vitro and in vivo experiments. The results show that the approximation error of this method is less than or similar to those of the above-mentioned methods and that the lateral response of point targets and the contrast-to-speckle noise in anechoic cysts are also better than or similar to those methods when the dimensionality of the covariance matrices is reduced to the same dimension.
Anatomically constrained minimum variance beamforming applied to EEG.
Murzin, Vyacheslav; Fuchs, Armin; Kelso, J A Scott
2011-10-01
Neural activity as measured non-invasively using electroencephalography (EEG) or magnetoencephalography (MEG) originates in the cortical gray matter. In the cortex, pyramidal cells are organized in columns and activated coherently, leading to current flow perpendicular to the cortical surface. In recent years, beamforming algorithms have been developed, which use this property as an anatomical constraint for the locations and directions of potential sources in MEG data analysis. Here, we extend this work to EEG recordings, which require a more sophisticated forward model due to the blurring of the electric current at tissue boundaries where the conductivity changes. Using CT scans, we create a realistic three-layer head model consisting of tessellated surfaces that represent the cerebrospinal fluid-skull, skull-scalp, and scalp-air boundaries. The cortical gray matter surface, the anatomical constraint for the source dipoles, is extracted from MRI scans. EEG beamforming is implemented on simulated sets of EEG data for three different head models: single spherical, multi-shell spherical, and multi-shell realistic. Using the same conditions for simulated EEG and MEG data, it is shown (and quantified by receiver operating characteristic analysis) that EEG beamforming detects radially oriented sources, to which MEG lacks sensitivity. By merging several techniques, such as linearly constrained minimum variance beamforming, realistic geometry forward solutions, and cortical constraints, we demonstrate it is possible to localize and estimate the dynamics of dipolar and spatially extended (distributed) sources of neural activity.
Osteotomy for Sigmoid Notch Obliquity and Ulnar Positive Variance
Dickson, Lisa M.; Tham, Stephen K. Y.
2014-01-01
Background Several causes of ulnar wrist pain have been described. One uncommon cause is ulnar carpal abutment associated with a notable distally facing sigmoid notch (reverse obliquity). Such an abnormality cannot be treated with ulnar shortening alone because it will result in incongruity of the distal radioulnar joint (DRUJ). Case Description A 23-year-old woman presented with ulnar wrist pain aggravated by forearm rotation. Ten years earlier she had sustained a distal radius fracture that was conservatively treated. Examination revealed mild tenderness at the DRUJ and decreased wrist flexion and grip strength on the affected side. Radiographic examination demonstrated 1 cm ulnar positive variance, ulnar styloid nonunion, and a 37° reverse obliquity of the sigmoid notch. The patient was treated with ulnar shortening and rotation sigmoid notch osteotomy to realign the sigmoid notch with the ulnar head. Literature Review Sigmoid notch incongruity is one of several causes of wrist pain after distal radius fracture. Traditional salvage options for DRUJ arthritis may result in loss of grip strength, painful ulnar shaft instability, or reossification and are not acceptable options in the young patient. Sigmoid notch osteotomy or osteoplasty have been described to correct the shape of the sigmoid notch in the axial plane. Clinical Relevance We report a coronal plane osteotomy of the sigmoid notch to treat reverse obliquity of the sigmoid notch associated with ulnar carpal abutment. The rotation osteotomy described is particularly useful for patients in whom a salvage procedure is not warranted. PMID:24533247
Cost/variance optimization for human exposure assessment studies.
Whitmore, Roy W; Pellizzari, Edo D; Zelon, Harvey S; Michael, Larry C; Quackenboss, James J
2005-11-01
The National Human Exposure Assessment Survey (NHEXAS) field study in EPA Region V (one of three NHEXAS field studies) provides extensive exposure data on a representative sample of 249 residents of the Great Lakes states. Concentration data were obtained for both metals and volatile organic compounds (VOCs) from multiple environmental media and from human biomarkers. A variance model for the logarithms of concentration measurements is used to define intraclass correlations between observations within primary sampling units (PSUs) (nominally counties) and within secondary sampling units (SSUs) (nominally Census blocks). A model for the total cost of the study is developed in terms of fixed costs and variable costs per PSU, SSU, and participant. Intraclass correlations are estimated for media and analytes with sufficient sample sizes. We demonstrate how the intraclass correlations and variable cost components can be used to determine the sample allocation that minimizes cost while achieving pre-specified precision constraints for future studies that monitor environmental concentrations and human exposures for metals and VOCs.
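The cost/variance trade-off described above can be illustrated with the textbook optimum for two-stage sampling; the cost figures and intraclass correlation below are hypothetical stand-ins, not the NHEXAS estimates, and the cost/variance model is the standard one rather than the study's specific formulation:

```python
from math import sqrt, ceil

def optimal_cluster_size(c_psu: float, c_person: float, icc: float) -> float:
    """Textbook optimal number of participants per primary sampling unit
    for two-stage sampling: minimizes variance for fixed total cost under
    the usual cost model C = c_psu * n + c_person * n * m and design
    effect 1 + (m - 1) * icc."""
    return sqrt((c_psu / c_person) * (1 - icc) / icc)

# Hypothetical costs: $2000 to open a county (PSU), $250 per participant,
# and an intraclass correlation of 0.05 for the target analyte.
m = optimal_cluster_size(2000.0, 250.0, icc=0.05)
print(ceil(m))   # participants to recruit within each PSU
```

Larger intraclass correlations push the optimum toward fewer participants per PSU and more PSUs, which is the qualitative conclusion such allocation analyses deliver.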
Neutrality and the Response of Rare Species to Environmental Variance
Benedetti-Cecchi, Lisandro; Bertocci, Iacopo; Vaselli, Stefano; Maggi, Elena; Bulleri, Fabio
2008-01-01
Neutral models and differential responses of species to environmental heterogeneity offer complementary explanations of species abundance distribution and dynamics. Under what circumstances one model prevails over the other is still a matter of debate. We show that the decay of similarity over time in rocky seashore assemblages of algae and invertebrates sampled over a period of 16 years was consistent with the predictions of a stochastic model of ecological drift at time scales larger than 2 years, but not at time scales between 3 and 24 months when similarity was quantified with an index that reflected changes in abundance of rare species. A field experiment was performed to examine whether assemblages responded neutrally or non-neutrally to changes in temporal variance of disturbance. The experimental results did not reject neutrality, but identified a positive effect of intermediate levels of environmental heterogeneity on the abundance of rare species. This effect translated into a marked decrease in the characteristic time scale of species turnover, highlighting the role of rare species in driving assemblage dynamics in fluctuating environments. PMID:18648545
Minding Impacting Events in a Model of Stochastic Variance
Duarte Queirós, Sílvio M.; Curado, Evaldo M. F.; Nobre, Fernando D.
2011-01-01
We introduce a generalization of the well-known ARCH process, widely used for generating uncorrelated stochastic time series with long-term non-Gaussian distributions and long-lasting correlations in the (instantaneous) standard deviation exhibiting a clustering profile. Specifically, inspired by the fact that in a variety of systems impacting events are hardly forgotten, we split the process into two different regimes: a first one for regular periods, where the average volatility of the fluctuations within a certain period of time is below a given threshold, and another one when the local standard deviation exceeds that threshold. In the former situation we use standard rules for heteroscedastic processes, whereas in the latter case the system starts recalling past values that surpassed the threshold. Our results show that for appropriate parameter values the model is able to provide fat-tailed probability density functions and strong persistence of the instantaneous variance, characterized by large values of the Hurst exponent, which are ubiquitous features in complex systems. PMID:21483864
Lung vasculature imaging using speckle variance optical coherence tomography
NASA Astrophysics Data System (ADS)
Cua, Michelle; Lee, Anthony M. D.; Lane, Pierre M.; McWilliams, Annette; Shaipanich, Tawimas; MacAulay, Calum E.; Yang, Victor X. D.; Lam, Stephen
2012-02-01
Architectural changes in and remodeling of the bronchial and pulmonary vasculature are important pathways in diseases such as asthma, chronic obstructive pulmonary disease (COPD), and lung cancer. However, there is a lack of methods that can locate and examine small bronchial vasculature in vivo. Structural lung airway imaging using optical coherence tomography (OCT) has previously been shown to be of great utility in examining bronchial lesions during lung cancer screening under the guidance of autofluorescence bronchoscopy. Using a fiber-optic endoscopic OCT probe, we acquire OCT images from human subjects in vivo. The side-looking, circumferentially scanning probe is inserted down the instrument channel of a standard bronchoscope and manually guided to the imaging location. Multiple images are collected with the probe spinning proximally at 100 Hz. Due to friction, the distal end of the probe does not spin perfectly synchronously with the proximal end, resulting in non-uniform rotational distortion (NURD) of the images. First, we apply a correction algorithm to remove NURD. We then use a speckle variance algorithm to identify vasculature. The initial data show a vasculature density in small human airways similar to what would be expected.
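The speckle variance step reduces to a per-pixel intensity variance across repeated frames: static tissue decorrelates slowly, while flowing blood produces large frame-to-frame fluctuations. A minimal sketch on synthetic data (the stack shape and the 'vessel' region are hypothetical):

```python
import numpy as np

def speckle_variance(frames: np.ndarray) -> np.ndarray:
    """Per-pixel intensity variance across N repeated frames
    (frames shaped N x depth x width); high values flag flow."""
    return frames.var(axis=0)

# Synthetic stack: nearly static tissue plus one square 'vessel' region
# with strong frame-to-frame intensity fluctuations (hypothetical data).
rng = np.random.default_rng(0)
stack = np.ones((8, 64, 64)) + 0.01 * rng.standard_normal((8, 64, 64))
stack[:, 20:30, 20:30] += rng.standard_normal((8, 10, 10))  # moving blood

sv = speckle_variance(stack)
print(sv[25, 25] > sv[5, 5])   # vessel pixels stand out against tissue
```

In practice the variance is computed after the NURD correction mentioned above, since rotational distortion would otherwise masquerade as flow.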
Holocene history of ENSO variance and asymmetry in the eastern tropical Pacific.
Carré, Matthieu; Sachs, Julian P; Purca, Sara; Schauer, Andrew J; Braconnot, Pascale; Falcón, Rommel Angeles; Julien, Michèle; Lavallée, Danièle
2014-08-29
Understanding the response of the El Niño-Southern Oscillation (ENSO) to global warming requires quantitative data on ENSO under different climate regimes. Here, we present a reconstruction of ENSO in the eastern tropical Pacific spanning the past 10,000 years derived from oxygen isotopes in fossil mollusk shells from Peru. We found that ENSO variance was close to the modern level in the early Holocene and severely damped ~4000 to 5000 years ago. In addition, ENSO variability was skewed toward cold events along coastal Peru 6700 to 7500 years ago owing to a shift of warm anomalies toward the Central Pacific. The modern ENSO regime was established ~3000 to 4500 years ago. We conclude that ENSO was sensitive to changes in climate boundary conditions during the Holocene, including but not limited to insolation.
NASA Astrophysics Data System (ADS)
Li, Min-Yang; Yang, Mingchia; Vargas, Emily; Neff, Kyle; Vanli, Arda; Liang, Richard
2016-09-01
One of the major challenges in controlling the transfer of the electrical and mechanical properties of nanotubes into nanocomposites is the lack of adequate measurement systems for quantifying variations in bulk properties when nanotubes are used as the reinforcement material. In this study, we conducted one-way analysis of variance (ANOVA) on thickness and conductivity measurements. By analyzing data collected from both experienced and inexperienced operators, we identified operational details that users might overlook and that produce variations, since conductivity measurements of CNT thin films are very sensitive to the thickness measurements. In addition, we demonstrated how measurement issues damaged samples and limited the number of replications, resulting in large variations in the electrical conductivity results. Based on this study, we propose a faster, more reliable approach to measuring the thickness of CNT thin films that operators can follow to make these measurement processes less dependent on operator skill.
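The operator comparison above rests on the one-way ANOVA F statistic, which can be computed directly; a self-contained sketch, where the thickness readings and the two-operator grouping are hypothetical illustrations, not the study's data:

```python
def one_way_anova_F(groups):
    """One-way ANOVA F statistic for a list of sample groups.

    F = (between-group mean square) / (within-group mean square);
    a large F suggests the group means differ beyond chance.
    """
    k = len(groups)                         # number of groups
    n = sum(len(g) for g in groups)         # total observations
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical film-thickness readings (micrometres) from two operators
experienced = [20.1, 20.3, 19.9, 20.2]
inexperienced = [21.0, 19.2, 22.1, 18.8]
F = one_way_anova_F([experienced, inexperienced])
```

In practice one would compare F against the F distribution with (k-1, n-k) degrees of freedom (e.g. via `scipy.stats.f_oneway`) to obtain a p-value.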
The effects of different quantum feedback types on the tightness of the variance-based uncertainty
NASA Astrophysics Data System (ADS)
Zheng, Xiao; Zhang, Guo-Feng
2017-03-01
The effect of quantum feedback on the tightness of the variance-based uncertainty, the possibility of using quantum feedback to prepare a state with better tightness, and the relationship between the tightness of the uncertainty and the mixedness of the system are studied. It is found that the tightness of the Schrödinger-Robertson uncertainty relation (SUR) has a strictly linear relationship with the mixedness of the system. As for the Robertson uncertainty relation (RUR), we find that the tightness can be enhanced by tuning the feedback at the beginning of the evolution. In addition, we deduce that the tightness of the RUR has an inverse relationship with the mixedness, and that the relationship becomes strictly linear when the system reaches the steady state.
Unique Challenges Testing SDRs for Space
NASA Technical Reports Server (NTRS)
Chelmins, David; Downey, Joseph A.; Johnson, Sandra K.; Nappier, Jennifer M.
2013-01-01
This paper describes the approach used by the Space Communication and Navigation (SCaN) Testbed team to qualify three software defined radios (SDRs) for operation in space and to characterize the platform to enable on-orbit upgrades. The three SDRs represent a significant portion of the new technologies being studied on board the SCaN Testbed, which operates on an external truss of the International Space Station (ISS). The SCaN Testbed provides experimenters an opportunity to develop and demonstrate experimental waveforms and applications for communication, networking, and navigation concepts, and to advance the understanding of developing and operating SDRs in space. Qualifying a software defined radio for the space environment requires additional considerations relative to a hardware radio. Tests that characterize the platform are needed to provide information for future waveforms that might exercise extended capabilities of the hardware. The development life cycle for the radio follows the software development life cycle, where changes can be incorporated at various stages of development and test; this also enables flexibility to be added with minor additional effort. Although this provides tremendous advantages, managing the complexity inherent in a software implementation requires testing beyond the traditional hardware radio test plan. Due to schedule and resource limitations and parallel development activities, subsystem testing of the SDRs at the vendor sites was primarily limited to typical fixed-transceiver testing. NASA's Glenn Research Center (GRC) was responsible for integrating and testing the SDRs in the SCaN Testbed system and for investigating the SDRs to advance the technology toward acceptance by missions. This paper describes the unique tests that were conducted at both the subsystem and system level, including environmental testing, and presents results. For example, test
The ARMC5 gene shows extensive genetic variance in primary macronodular adrenocortical hyperplasia
Correa, Ricardo; Zilbermint, Mihail; Berthon, Annabel; Espiard, Stephanie; Batsis, Maria; Papadakis, Georgios Z.; Xekouki, Paraskevi; Lodish, Maya B.; Bertherat, Jerome; Faucz, Fabio R.; Stratakis, Constantine A.
2015-01-01
Objective Primary macronodular adrenal hyperplasia (PMAH) is a rare type of Cushing's syndrome (CS) that results in increased cortisol production and bilateral enlargement of the adrenal glands. Recent work showed that the disease may be caused by germline and somatic mutations in the ARMC5 gene, a likely tumor-suppressor gene (TSG). We investigated 20 different adrenal nodules from one patient with PMAH for ARMC5 somatic sequence changes. Design All of the nodules were obtained from a single patient who underwent bilateral adrenalectomy. DNA was extracted by standard protocols and the ARMC5 sequence was determined by the Sanger method. Results Sixteen of 20 adrenocortical nodules harbored, in addition to what appeared to be the germline mutation, a second somatic variant. The p.Trp476* sequence change was present in all 20 nodules, as well as in normal tissue from the adrenal capsule, identifying it as the germline defect; each of the 16 other variants was found in a different nodule: 6 were frameshift, 4 were missense, 3 were nonsense, and 1 was a splice-site variation. Allelic losses were confirmed in 2 of the nodules. Conclusion This is the most extensive genetic variation in the ARMC5 gene described to date in a single patient with PMAH: each of 16 adrenocortical nodules had a second new, "private", and, in most cases, completely inactivating ARMC5 defect, in addition to the germline mutation. The data support the notion that ARMC5 is a TSG that needs a second, somatic hit to mediate tumorigenesis leading to polyclonal nodularity; however, the driver of this extensive genetic variation of the second ARMC5 allele in adrenocortical tissue in the context of a germline defect and PMAH remains a mystery. PMID:26162405
Unique antitumor property of the Mg-Ca-Sr alloys with addition of Zn
Wu, Yuanhao; He, Guanping; Zhang, Yu; Liu, Yang; Li, Mei; Wang, Xiaolan; Li, Nan; Li, Kang; Zheng, Guan; Zheng, Yufeng; Yin, Qingshui
2016-01-01
In clinical practice, tumor recurrence and metastasis after orthopedic prosthesis implantation is a deeply troublesome problem. Developing implant materials with antitumor properties is therefore highly desirable. Magnesium (Mg) alloys possess superb biocompatibility, mechanical properties, and biodegradability in orthopedic applications, but whether they possess antitumor properties has seldom been reported. In recent years, zinc (Zn) has been shown not only to promote osteogenic activity but also to exhibit good antitumor properties. In the present study, Zn was selected as an alloying element for the Mg-1Ca-0.5Sr alloy to develop a multifunctional material with antitumor properties. We investigated the influence of extracts of the Mg-1Ca-0.5Sr-xZn (x = 0, 2, 4, 6 wt%) alloys on the proliferation rate, apoptosis, migration, and invasion of the U2OS cell line. Our results show that extracts of the Zn-containing Mg alloys inhibit cell proliferation by altering the cell cycle and inducing apoptosis via activation of the mitochondrial pathway. Cell migration and invasion were also suppressed, through activation of the MAPK (mitogen-activated protein kinase) pathway. Our work suggests that the Mg-1Ca-0.5Sr-6Zn alloy is a promising orthopedic implant material for osteosarcoma limb-salvage surgery, helping to avoid tumor recurrence and metastasis. PMID:26907515
John, Samantha E.; Gurnani, Ashita S.; Bussell, Cara; Saurman, Jessica L.; Griffin, Jason W.; Gavett, Brandon E.
2016-01-01
Objective Two main approaches to the interpretation of cognitive test performance have been used for the characterization of disease: evaluating the shared variance across tests, as with measures of severity, and evaluating the unique variance across tests, as with pattern and error analysis. Both methods provide necessary information, but the unique contributions of each are rarely considered. This study compares the two approaches on their ability to differentiate diagnoses accurately, while controlling for the influence of other relevant demographic and risk variables. Method Archival data requested from the NACC provided clinical diagnostic groups that were paired to one another through a genetic matching procedure. For each diagnostic pairing, two separate logistic regression models predicting clinical diagnosis were fitted and compared on their predictive ability. The shared variance approach was represented by the latent phenotype δ, which served as the lone predictor in one set of models. The unique variance approach was represented by raw score values for the 12 neuropsychological test variables comprising δ, which served as the set of predictors in the second group of models. Results Examining the unique patterns of neuropsychological test performance across a battery of tests was the superior method of differentiating between competing diagnoses, and it accounted for 16-30% of the variance in diagnostic decision making. Conclusion Implications for clinical practice are discussed, including test selection and interpretation. PMID:27797542
Unique properties of Plasmodium falciparum porphobilinogen deaminase.
Nagaraj, Viswanathan Arun; Arumugam, Rajavel; Gopalakrishnan, Bulusu; Jyothsna, Yeleswarapu Sri; Rangarajan, Pundi N; Padmanaban, Govindarajan
2008-01-04
The hybrid pathway for heme biosynthesis in the malarial parasite proposes the involvement of parasite genome-coded enzymes of the pathway localized in different compartments such as apicoplast, mitochondria, and cytosol. However, knowledge on the functionality and localization of many of these enzymes is not available. In this study, we demonstrate that porphobilinogen deaminase encoded by the Plasmodium falciparum genome (PfPBGD) has several unique biochemical properties. Studies carried out with PfPBGD partially purified from parasite membrane fraction, as well as recombinant PfPBGD lacking N-terminal 64 amino acids expressed and purified from Escherichia coli cells (DeltaPfPBGD), indicate that both the proteins are catalytically active. Surprisingly, PfPBGD catalyzes the conversion of porphobilinogen to uroporphyrinogen III (UROGEN III), indicating that it also possesses uroporphyrinogen III synthase (UROS) activity, catalyzing the next step. This obviates the necessity to have a separate gene for UROS that has not been so far annotated in the parasite genome. Interestingly, DeltaPfPBGD gives rise to UROGEN III even after heat treatment, although UROS from other sources is known to be heat-sensitive. Based on the analysis of active site residues, a DeltaPfPBGDL116K mutant enzyme was created and the specific activity of this recombinant mutant enzyme is 5-fold higher than DeltaPfPBGD. More interestingly, DeltaPfPBGDL116K catalyzes the formation of uroporphyrinogen I (UROGEN I) in addition to UROGEN III, indicating that with increased PBGD activity the UROS activity of PBGD may perhaps become rate-limiting, thus leading to non-enzymatic cyclization of preuroporphyrinogen to UROGEN I. PfPBGD is localized to the apicoplast and is catalytically very inefficient compared with the host red cell enzyme.
Local variance for multi-scale analysis in geomorphometry
Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas
2011-01-01
Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements. PMID:21779138
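Steps 2 and 3 of the method can be sketched directly; a plain-Python illustration on a toy grid (the paper operates on DEM-derived land-surface parameters such as slope gradient and curvature):

```python
import statistics

def local_variance(grid):
    """LV: mean of the 3x3 moving-window standard deviation over a 2D grid
    (edge cells without a full window are skipped for simplicity)."""
    rows, cols = len(grid), len(grid[0])
    sds = []
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            window = [grid[a][b] for a in (i - 1, i, i + 1)
                                 for b in (j - 1, j, j + 1)]
            sds.append(statistics.pstdev(window))
    return sum(sds) / len(sds)

def roc_lv(lv_values):
    """Rate of change of LV (percent) from one scale level to the next;
    peaks in this curve mark characteristic scales."""
    return [100.0 * (lv_values[i] - lv_values[i - 1]) / lv_values[i - 1]
            for i in range(1, len(lv_values))]
```

One would compute `local_variance` for each up-scaled (resampled or segmented) level of the land-surface parameter, then inspect `roc_lv` of the resulting sequence for peaks.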
Low complex subspace minimum variance beamformer for medical ultrasound imaging.
Deylami, Ali Mohades; Asl, Babak Mohammadzadeh
2016-03-01
Minimum variance (MV) beamforming enhances resolution and contrast in medical ultrasound imaging at the expense of higher computational complexity with respect to the non-adaptive delay-and-sum beamformer. The major complexity arises from estimating the L×L array covariance matrix using spatial averaging, which is required for more accurate estimation of the covariance matrix of correlated signals, and from inverting it, which is required for calculating the MV weight vector; these costs are as high as O(L^2) and O(L^3), respectively. Reducing the number of array elements decreases the computational complexity but degrades the imaging resolution. In this paper, we propose a subspace MV beamformer that preserves the advantages of the MV beamformer with lower complexity. The subspace MV neglects some rows of the array covariance matrix instead of reducing the array size. If we keep η rows of the array covariance matrix, which leads to a thin non-square matrix, the weight vector of the subspace beamformer can be obtained in the same way as for the MV beamformer, with complexity as low as O(η^2 L). Further calculation is saved because an η×L covariance matrix must be estimated instead of an L×L one. We simulated a wire-target phantom and a cyst phantom to evaluate the performance of the proposed beamformer. The results indicate that we can keep about 16 of the 43 rows of the array covariance matrix, which reduces the order of complexity to 14% while the image resolution remains comparable to that of the standard MV beamformer. We also applied the proposed method to experimental RF data and showed that the subspace MV beamformer performs like the standard MV with lower computational complexity.
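The standard MV (Capon) weight vector referred to above has the closed form w = R⁻¹a / (aᵀR⁻¹a) for covariance matrix R and steering vector a. A real-valued sketch of that baseline computation (actual ultrasound beamforming uses complex apodized channel data, and the paper's subspace variant works from η selected rows of R rather than the full matrix):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def mv_weights(R, a):
    """Minimum variance weights: w = R^-1 a / (a^T R^-1 a)."""
    Ri_a = solve(R, a)
    denom = sum(ai * ri for ai, ri in zip(a, Ri_a))
    return [ri / denom for ri in Ri_a]
```

The normalization enforces the distortionless constraint wᵀa = 1, so the desired direction passes undistorted while output variance is minimized.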
Onions: a source of unique dietary flavonoids.
Slimestad, Rune; Fossen, Torgils; Vågen, Ingunn Molund
2007-12-12
Onion bulbs (Allium cepa L.) are among the richest sources of dietary flavonoids and contribute to a large extent to the overall intake of flavonoids. This review compiles the existing qualitative and quantitative information about flavonoids reported to occur in onion bulbs, including the NMR spectroscopic evidence used for structural characterization. In addition, a summary is given indexing onion cultivars according to their content of flavonoids measured as quercetin. Only compounds belonging to the flavonols, the anthocyanins, and the dihydroflavonols have been reported to occur in onion bulbs. Yellow onions contain 270-1187 mg of flavonols per kilogram of fresh weight (FW), whereas red onions contain 415-1917 mg of flavonols per kilogram of FW. Flavonols are the predominant pigments of onions. At least 25 different flavonols have been characterized, and quercetin derivatives are the most important ones in all onion cultivars. Their glycosyl moieties are almost exclusively glucose, which is mainly attached to the 4', 3, and/or 7-positions of the aglycones. Quercetin 4'-glucoside and quercetin 3,4'-diglucoside are in most cases reported as the main flavonols in recent literature. Analogous derivatives of kaempferol and isorhamnetin have been identified as minor pigments. Recent reports indicate that the outer dry layers of onion bulbs contain oligomeric structures of quercetin in addition to condensation products of quercetin and protocatechuic acid. The anthocyanins of red onions are mainly cyanidin glucosides, either acylated with malonic acid or nonacylated. Some of these pigments display unique structural features such as 4'-glycosylation and unusual substitution patterns of the sugar moieties. Altogether at least 25 different anthocyanins have been reported from red onions, including two novel 5-carboxypyranocyanidin derivatives. The quantitative content of anthocyanins in some red onion cultivars has been reported to be approximately 10% of the total
Dependence effects in unique signal transmission
Cooper, J.A.
1988-04-01
"Unique signals" are communicated from a source to a "strong link" safety device in a weapon by means of one or more digital communication channels. The probability that the expected unique signal pattern could be generated accidentally (e.g., due to an abnormal environment) is an important measure. A probabilistic assessment of this likelihood can be deceptive, because it depends on the characteristics of the other traffic on the communication channel. One such characteristic that is frequently neglected in analysis is dependence. This report gives a mathematical model for dependence; cites some of the ways in which dependence can increase the likelihood of inadvertent unique signal pattern generation; and suggests that communicating each unique signal "event" at the highest level of protocol in the communication system minimizes dependence effects. 3 refs., 4 figs.
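The effect dependence has on accidental pattern generation can be illustrated with a toy Monte Carlo: on a positively correlated channel, a fixed all-ones "pattern" appears in the bit stream far more often than the independent-bits calculation (2⁻ⁿ per window) would suggest. This is a generic illustration, not the report's model:

```python
import random

def pattern_rate(bits, pattern):
    """Fraction of sliding windows in `bits` equal to `pattern`."""
    n, m = len(bits), len(pattern)
    hits = sum(1 for i in range(n - m + 1) if bits[i:i + m] == pattern)
    return hits / (n - m + 1)

def iid_bits(n, rng):
    """Independent fair bits: each window pattern has probability 2^-m."""
    return [rng.randint(0, 1) for _ in range(n)]

def markov_bits(n, rng, stay=0.9):
    """Positively correlated channel: each bit repeats the previous
    one with probability `stay`, producing long runs."""
    b = [rng.randint(0, 1)]
    for _ in range(n - 1):
        b.append(b[-1] if rng.random() < stay else 1 - b[-1])
    return b

rng = random.Random(0)
pattern = [1] * 8
rate_iid = pattern_rate(iid_bits(20000, rng), pattern)
rate_dep = pattern_rate(markov_bits(20000, rng), pattern)
```

For independent bits the expected rate is 1/256 per window; the correlated stream's run-heavy traffic raises it by more than an order of magnitude, which is the hazard the report attributes to neglected dependence.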
Meta-analysis of binary data: which within study variance estimate to use?
Chang, B H; Waternaux, C; Lipsitz, S
2001-07-15
We applied a mixed effects model to investigate between- and within-study variation in improvement rates of 180 schizophrenia outcome studies. The between-study variation was explained by the fixed study characteristics and an additional random study effect. Both rate difference and logit models were used. For a binary proportion outcome p_i with sample size n_i in the ith study, (p̂_i(1-p̂_i)n_i)^(-1) is the usual estimate of the within-study variance σ_i^2 in the logit model, where p̂_i is the sample mean of the binary outcome for subjects in study i. This estimate can be highly correlated with logit(p̂_i). We used (p̄(1-p̄)n_i)^(-1) as an alternative estimate of σ_i^2, where p̄ is the weighted mean of the p̂_i's. We estimated regression coefficients (β) of the fixed effects and the variance (τ^2) of the random study effect using a quasi-likelihood estimating equations approach. Using the schizophrenia meta-analysis data, we demonstrated how the choice of the estimate of σ_i^2 affects the resulting estimates of β and τ^2. We also conducted a simulation study to evaluate the performance of the two estimates of σ_i^2 under different conditions, varying the number of studies and study size. Using the schizophrenia meta-analysis data, the estimates of β and τ^2 were quite different when different estimates of σ_i^2 were used in the logit model. The simulation study showed that the estimates of β and τ^2 were less biased, and the 95 per cent CI coverage was closer to 95 per cent, when the estimate of σ_i^2 was (p̄(1-p̄)n_i)^(-1) rather than (p̂_i(1-p̂_i)n_i)^(-1). Finally, we showed that a simple regression analysis is not appropriate unless τ^2 is much larger than σ_i^2, or a robust variance is used.
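The two competing within-study variance estimates can be written directly in the notation above (p̂_i, n_i, and the weighted mean p̄); a minimal sketch:

```python
def within_study_var_individual(p_hat_i, n_i):
    """Usual logit-model estimate: (p_hat_i * (1 - p_hat_i) * n_i)^-1.

    Uses each study's own proportion, so it is correlated with
    logit(p_hat_i)."""
    return 1.0 / (p_hat_i * (1.0 - p_hat_i) * n_i)

def within_study_var_pooled(p_hats, ns):
    """Alternative estimate (p_bar * (1 - p_bar) * n_i)^-1 using the
    sample-size-weighted mean proportion p_bar across studies."""
    p_bar = sum(p * n for p, n in zip(p_hats, ns)) / sum(ns)
    return [1.0 / (p_bar * (1.0 - p_bar) * n) for n in ns]
```

Note how an extreme study proportion inflates or deflates its own individual estimate, while the pooled version varies only with n_i; this decoupling is what reduced the bias in β and τ² in the simulation study.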
A neural signature of the unique hues
Forder, Lewis; Bosten, Jenny; He, Xun; Franklin, Anna
2017-01-01
Since at least the 17th century there has been the idea that there are four simple and perceptually pure “unique” hues: red, yellow, green, and blue, and that all other hues are perceived as mixtures of these four hues. However, sustained scientific investigation has not yet provided solid evidence for a neural representation that separates the unique hues from other colors. We measured event-related potentials elicited from unique hues and the ‘intermediate’ hues in between them. We find a neural signature of the unique hues 230 ms after stimulus onset at a post-perceptual stage of visual processing. Specifically, the posterior P2 component over the parieto-occipital lobe peaked significantly earlier for the unique than for the intermediate hues (Z = −2.9, p = 0.004). Having identified a neural marker for unique hues, fundamental questions about the contribution of neural hardwiring, language and environment to the unique hues can now be addressed. PMID:28186142
ERIC Educational Resources Information Center
Heene, Moritz; Hilbert, Sven; Draxler, Clemens; Ziegler, Matthias; Buhner, Markus
2011-01-01
Fit indices are widely used in order to test the model fit for structural equation models. In a highly influential study, Hu and Bentler (1999) showed that certain cutoff values for these indices could be derived, which, over time, has led to the reification of these suggested thresholds as "golden rules" for establishing the fit or other aspects…
...of Nonlinear Controls and Regression-Adjusted Estimators for Variance Reduction in Computer Simulation
Ressler, Richard L. (dissertation advisor: Peter A. W. Lewis)
1991-03-01
This dissertation develops new techniques for variance reduction in computer simulation. It demonstrates that variance reduction in computer simulation
Additive and subtractive scrambling in optional randomized response modeling.
Hussain, Zawar; Al-Sobhi, Mashail M; Al-Zahrani, Bander
2014-01-01
This article considers unbiased estimation of the mean, variance, and sensitivity level of a sensitive variable via scrambled response modeling; in particular, we focus on estimation of the mean. The idea of using additive and subtractive scrambling has been suggested under a recent scrambled response model. Whether for estimation of the mean, variance, or sensitivity level, the proposed estimation scheme is shown to be more efficient than that recent model. As far as estimation of the mean is concerned, the proposed estimators perform better than estimators based on recent additive scrambling models. Relative efficiency comparisons are also made to highlight the performance of the proposed estimators under the suggested scrambling technique.
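The basic additive-scrambling idea behind such estimators can be sketched as follows. This is the generic scrambled-response scheme, not the article's specific optional/subtractive model; the distribution of the scrambling variable S and all parameter values are illustrative assumptions:

```python
import random

def scrambled_report(x, rng, s_mean=5.0, s_sd=1.0):
    """Respondent reports Z = X + S, where S is a random scrambling
    value drawn from a distribution known to the interviewer."""
    return x + rng.gauss(s_mean, s_sd)

def estimate_mean(reports, s_mean=5.0):
    """Since E[Z] = E[X] + E[S], subtracting the known E[S] from the
    sample mean of the reports gives an unbiased estimate of E[X]."""
    return sum(reports) / len(reports) - s_mean
```

The interviewer never sees any individual X, only Z, yet the population mean remains estimable; subtractive scrambling replaces the addition with a subtraction of the known-distribution variable.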
Wave variance partitioning in the trough of a barred beach
NASA Astrophysics Data System (ADS)
Howd, Peter A.; Oltman-Shay, Joan; Holman, Robert A.
1991-07-01
The wave-induced velocity field in the nearshore is composed of contributions from incident wind waves (ƒ > 0.05 Hz), surface infragravity waves (ƒ < 0.05 Hz, |κ| < σ²/(gβ)), and shear waves (ƒ < 0.05 Hz, |κ| > σ²/(gβ)), where ƒ is the frequency, σ = 2πƒ, κ is the radial alongshore wavenumber (2π/L, L being the alongshore wavelength), β is the beach slope, and g is the acceleration due to gravity. Using an alongshore array of current meters located in the trough of a nearshore bar (mean depth ≈ 1.5 m), we investigate the bulk statistical behaviors of these wave bands over a wide range of incident wave conditions. The behavior of each contributing wave type is parameterized in terms of commonly measured or easily predicted variables describing the beach profile, wind waves, and current field. Over the 10-day period, the mean contributions (to the total variance) of the incident, infragravity, and shear wave bands were 71.5%, 14.3%, and 13.6% for the alongshore component of flow (mean rms oscillations of 44, 20, and 19 cm/s, respectively), and 81.9%, 10.9%, and 6.6% for the cross-shore component (mean rms oscillations of 92, 32, and 25 cm/s, respectively). However, the values varied considerably. The contribution to the alongshore (cross-shore) component of flow ranged from 44.8-88.4% (58.5-95.8%) for the incident band, 6.2-26.6% (2.5-32.4%) for the infragravity band, and 3.4-33.1% (0.6-14.3%) for the shear wave band. Incident wave oscillations were limited by depth-dependent saturation over the adjacent bar crest and varied only with the tide. The infragravity wave rms oscillations on this barred beach are best parameterized by the offshore wave height, consistent with previous studies on planar beaches. Comparison with data from four other beaches of widely differing geometries shows the shoreline infragravity amplitude to be a near-constant ratio of the offshore wave height. The magnitude of the ratio is found to be dependent on the Iribarren
Spatially variant regularization of lateral displacement measurement using variance.
Sumi, Chikayoshi; Itoh, Toshiki
2009-05-01
The purpose of this work is to confirm the effectiveness of our proposed spatially variant, displacement-component-dependent regularization for our previously developed ultrasonic two-dimensional (2D) displacement vector measurement methods, i.e., the 2D cross-spectrum phase gradient method (CSPGM), 2D autocorrelation method (AM), and 2D Doppler method (DM). Generally, the measurement accuracy of lateral displacement varies spatially and is lower than that of axial displacement, which is sufficiently accurate. This inaccuracy causes instability in a 2D shear modulus reconstruction. Thus, spatially variant regularization of the lateral displacement, using the lateral displacement variance, should be more effective in obtaining an accurate lateral strain measurement and a stable shear modulus reconstruction than conventional spatially uniform regularization. The effectiveness is verified through agar phantom experiments. The agar phantom [60 mm (height) × 100 mm (lateral width) × 40 mm (elevational width)] has, at a depth of 10 mm, a circular cylindrical inclusion (diameter = 10 mm) of higher shear modulus (2.95 vs 1.43 × 10⁶ N/m²; relative shear modulus, 2.06) and is compressed in the axial direction from the upper surface of the phantom using a commercial linear-array transducer with a nominal frequency of 7.5 MHz. The contrast-to-noise ratio (CNR) expresses the detectability of the inhomogeneous region in the lateral strain image and plays much the same role as the signal-to-noise ratio (SNR) does for strain measurement. The obtained results show that the proposed spatially variant lateral displacement regularization yields a more accurate lateral strain measurement as well as a higher detectability in the lateral strain image (e.g., CNRs and SNRs for 2D CSPGM, 2.36 vs 2.27 and 1.74 vs 1.71, respectively). Furthermore, the spatially variant lateral displacement regularization yields a more stable and more accurate 2D shear modulus
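A CNR of the kind quoted above can be computed from two image regions. The sketch below uses one common textbook definition (mean contrast over the root of summed regional variances); the authors' exact formula may differ, and all names here are illustrative.

```python
import numpy as np

def cnr(region_inclusion, region_background):
    """Contrast-to-noise ratio between two strain-image regions.

    A common definition: |difference of regional means| divided by the
    square root of the summed regional variances. This is an assumed
    form, not necessarily the paper's exact expression.
    """
    mu_i = np.mean(region_inclusion)
    mu_b = np.mean(region_background)
    var_i = np.var(region_inclusion)
    var_b = np.var(region_background)
    return abs(mu_i - mu_b) / np.sqrt(var_i + var_b)
```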
Numerical Inversion with Full Estimation of Variance-Covariance Matrix
NASA Astrophysics Data System (ADS)
Saltogianni, Vasso; Stiros, Stathis
2016-04-01
-point, stochastic optimal solutions are computed as the center of gravity of these sets. A full Variance-Covariance Matrix (VCM) of each solution can be directly computed as the second statistical moment. The overall method and the software have been tested with synthetic data (an accuracy-oriented approach) in the modeling of magma chambers in the Santorini volcano and the modeling of double-fault earthquakes, i.e., inversion problems with up to 18 unknowns.
Outlier detection for particle image velocimetry data using a locally estimated noise variance
NASA Astrophysics Data System (ADS)
Lee, Yong; Yang, Hua; Yin, ZhouPing
2017-03-01
This work describes an adaptive, spatially variable threshold outlier detection algorithm for raw gridded particle image velocimetry data using a locally estimated noise variance. The method is an iterative procedure, each iteration comprising a reference vector field reconstruction step and an outlier detection step. We construct the reference vector field using a weighted adaptive smoothing method (Garcia 2010 Comput. Stat. Data Anal. 54 1167-78), and the weights are determined in the outlier detection step using a modified outlier detector (Ma et al 2014 IEEE Trans. Image Process. 23 1706-21). A hard decision on the final weights of the iteration produces outlier labels for the field. The technical contribution is that the spatially variable threshold is embedded, for the first time, in the modified outlier detector with a locally estimated noise variance in an iterative framework. A spatially variable threshold turns out to be preferable to a single spatially constant threshold in complicated flows such as vortex flows or turbulent flows. Synthetic cellular vortical flows with simulated scattered or clustered outliers are adopted to evaluate the performance of the proposed method in comparison with popular validation approaches. The method also proves beneficial in a real PIV measurement of turbulent flow. The experimental results demonstrate that the proposed method yields competitive performance in terms of outlier under-detection and over-detection counts. In addition, the outlier detection method is computationally efficient and adaptive, requires no user-defined parameters, and implementations are provided in the supplementary materials.
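The idea of a spatially variable threshold driven by a local noise estimate can be illustrated with a much-simplified detector: flag a vector whose residual to the local median exceeds a multiple of a locally estimated noise scale (here a MAD estimate). This is a sketch in the spirit of the method above, not the authors' full iterative algorithm.

```python
import numpy as np

def local_outlier_flags(u, win=1, k=3.0):
    """Flag gridded velocity components whose residual to the local
    median exceeds k times a locally estimated noise scale.

    Simplified illustration: single pass, MAD-based noise estimate,
    not the paper's weighted iterative reconstruction.
    """
    ny, nx = u.shape
    flags = np.zeros_like(u, dtype=bool)
    for j in range(ny):
        for i in range(nx):
            y0, y1 = max(0, j - win), min(ny, j + win + 1)
            x0, x1 = max(0, i - win), min(nx, i + win + 1)
            # neighbourhood values with the centre vector removed
            nbr = np.delete(u[y0:y1, x0:x1].ravel(),
                            (j - y0) * (x1 - x0) + (i - x0))
            med = np.median(nbr)
            # robust local noise scale (MAD), floored to avoid /0
            noise = 1.4826 * np.median(np.abs(nbr - med)) + 1e-6
            flags[j, i] = abs(u[j, i] - med) > k * noise
    return flags
```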
Early fault detection in automotive ball bearings using the minimum variance cepstrum
NASA Astrophysics Data System (ADS)
Park, Choon-Su; Choi, Young-Chul; Kim, Yang-Hann
2013-07-01
Ball bearings in automotive wheels play an important role in a vehicle: they enable an automobile to run while supporting the vehicle. Once faults are generated, even small ones often grow quickly under normal driving conditions and cause vibration and noise. It is therefore critical to detect faults as early as possible to prevent bearings from generating harsh noise and vibration. How early faults can be detected depends on how well a detection method extracts the fault information from the measured signal. Incipient faults are so small that the fault signal is inherently buried in noise. The minimum variance cepstrum (MVC) has been introduced for observing periodic impulse signals in noisy environments. We focus in particular on a definition of the MVC that goes back to the original definition by Bogert et al., in contrast with the recently prevalent definition of cepstral analysis. In this work, the MVC is therefore obtained by liftering a logarithmic power spectrum, with the lifter bank designed by the minimum variance algorithm. We also show how effective the method is for detecting the periodic fault signal produced by early faults, using automotive ball bearings mounted on a vehicle under running conditions. We were able to detect incipient faults in 4 out of 12 normal bearings that had passed the acceptance test, as well as in bearings that were recalled due to noise and vibration. In addition, we compared the results of the proposed method with those of other well-established early fault detection methods, chosen from four groups of methods classified by the domain of observation. The results demonstrate that the MVC determines bearing fault periods more clearly than the other methods under the given conditions.
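The classical starting point the MVC builds on can be sketched as the plain power cepstrum: the inverse FFT of the log power spectrum, in which a periodic impulse train produces peaks at quefrencies equal to multiples of its period. The minimum-variance lifter bank of the paper is not reproduced here; this only shows the underlying Bogert-style construction.

```python
import numpy as np

def power_cepstrum(x):
    """Classical power cepstrum: inverse FFT of the log power spectrum.

    The paper's MVC additionally designs a minimum-variance lifter
    bank; this sketch shows only the common starting point.
    """
    power = np.abs(np.fft.fft(x)) ** 2
    return np.real(np.fft.ifft(np.log(power + 1e-12)))

# A periodic impulse train (period 50 samples) yields cepstral peaks
# at quefrencies that are multiples of the 50-sample period.
x = np.zeros(1000)
x[::50] = 1.0
c = power_cepstrum(x)
peak = int(np.argmax(c[1:500])) + 1
```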
View-angle-dependent AIRS Cloudiness and Radiance Variance: Analysis and Interpretation
NASA Technical Reports Server (NTRS)
Gong, Jie; Wu, Dong L.
2013-01-01
Upper tropospheric clouds play an important role in the global energy budget and hydrological cycle. Significant view-angle asymmetry has been observed in upper-level tropical clouds derived from eight years of Atmospheric Infrared Sounder (AIRS) 15 um radiances. Here, we find that the asymmetry also exists in the extra-tropics. It is larger during the day than at night, more prominent near elevated terrain, and closely associated with deep convection and wind shear. The cloud radiance variance, a proxy for cloud inhomogeneity, exhibits asymmetry characteristics consistent with those of the AIRS cloudiness. The leading causes of the view-dependent cloudiness asymmetry are the local time difference and small-scale organized cloud structures. The local time difference (1-1.5 hr) of upper-level (UL) clouds between the two AIRS outermost views can create part of the observed asymmetry. On the other hand, small-scale tilted and banded structures of the UL clouds can induce about half of the observed view-angle-dependent differences in the AIRS cloud radiances and their variances. This estimate is inferred from an analogous study using Microwave Humidity Sounder (MHS) radiances observed during the period when simultaneous measurements were made at two different view angles from the NOAA-18 and -19 satellites. The existence of tilted cloud structures and asymmetric 15 um and 6.7 um cloud radiances implies that cloud statistics are view-angle dependent and should be taken into account in radiative transfer calculations, measurement uncertainty evaluations, and cloud climatology investigations. In addition, the momentum forcing in the upper troposphere from tilted clouds is also likely asymmetric, which can affect atmospheric circulation anisotropically.
Phonological processing is uniquely associated with neuro-metabolic concentration.
Bruno, Jennifer Lynn; Lu, Zhong-Lin; Manis, Franklin R
2013-02-15
Reading is a complex process involving recruitment and coordination of a distributed network of brain regions. The present study sought to establish a methodologically sound evidentiary base relating specific reading and phonological skills to neuro-metabolic concentration. Single voxel proton magnetic resonance spectroscopy was performed to measure metabolite concentration in a left hemisphere region around the angular gyrus for 31 young adults with a range of reading and phonological abilities. Correlation data demonstrated a significant negative association between phonological decoding and normalized choline concentration, as well as a trend toward a significant negative association between sight word reading and normalized choline concentration, indicating that lower scores on these measures are associated with higher concentrations of choline. Regression analyses indicated that choline concentration accounted for a unique proportion of variance in the phonological decoding measure after accounting for age, cognitive ability and sight word reading skill. This pattern of results suggests some specificity for the negative relationship between choline concentration and phonological decoding. To our knowledge, this is the first study to provide evidence that choline concentration in the angular region may be related to phonological skills independently of other reading skills, general cognitive ability, and age. These results may have important implications for the study and treatment of reading disability, a disorder which has been related to deficits in phonological decoding and abnormalities in the angular gyrus.
Negative ulnar variance is not a risk factor for Kienböck's disease.
D'Hoore, K; De Smet, L; Verellen, K; Vral, J; Fabry, G
1994-03-01
Ulnar variance was measured in standardized conditions in 125 normal wrists and in 52 patients with Kienböck's disease. No significant difference in ulnar variance between a sex/age-matched control group and a group of patients affected with Kienböck's disease was found. A positive correlation was found between age and ulnar variance. No significant difference was found between men and women. Based on these results, negative ulnar variance does not seem to be an important factor in the etiology of Kienböck's disease.
Allan, David W; Levine, Judah
2016-04-01
Over the past 50 years, variances have been developed for characterizing the instabilities of precision clocks and oscillators. These instabilities are often modeled as nonstationary processes, and the variances have been shown to be well-behaved and to be unbiased, efficient descriptors of these types of processes. This paper presents a historical overview of the development of these variances. The time-domain and frequency-domain formulations are presented and their development is described. The strengths and weaknesses of these characterization metrics are discussed. These variances are also shown to be useful in other applications, such as in telecommunication.
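The most basic of the variances surveyed above, the original Allan variance, can be sketched in a few lines. This is the textbook non-overlapping estimator on fractional-frequency data; production tools typically use overlapping and modified variants.

```python
import numpy as np

def allan_variance(y, tau=1):
    """Non-overlapping Allan variance of fractional-frequency data y,
    averaged over groups of `tau` samples:
        AVAR = 0.5 * <(ybar_{i+1} - ybar_i)^2>

    Textbook sketch of the simplest estimator only.
    """
    n = len(y) // tau
    # average the data into contiguous groups of length tau
    ybar = np.mean(np.reshape(y[:n * tau], (n, tau)), axis=1)
    d = np.diff(ybar)
    return 0.5 * np.mean(d * d)
```

A perfectly stable oscillator (constant y) gives zero, while alternating frequency offsets give a nonzero variance that shrinks as the averaging time tau grows.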
42 CFR 456.524 - Notification of Administrator's action and duration of variance.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan:...
42 CFR 456.524 - Notification of Administrator's action and duration of variance.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan:...
Naragon-Gainey, Kristin; Gallagher, Matthew W.; Brown, Timothy A.
2013-01-01
A large body of research has found robust associations between dimensions of temperament (e.g., neuroticism, extraversion) and the mood and anxiety disorders. However, mood-state distortion (i.e., the tendency for current mood state to bias ratings of temperament) likely confounds these associations, rendering their interpretation and validity unclear. This issue is of particular relevance to clinical populations who experience elevated levels of general distress. The current study used the “trait-state-occasion” latent variable model (Cole, Martin, & Steiger, 2005) to separate the stable components of temperament from transient, situational influences such as current mood state. We examined the predictive power of the time-invariant components of temperament on the course of depression and social phobia in a large, treatment-seeking sample with mood and/or anxiety disorders (N = 826). Participants were assessed three times over the course of one year, using interview and self-report measures; most participants received treatment during this time. Results indicated that both neuroticism/behavioral inhibition (N/BI) and behavioral activation/positive affect (BA/P) consisted largely of stable, time-invariant variance (57% to 78% of total variance). Furthermore, the time-invariant components of N/BI and BA/P were uniquely and incrementally predictive of change in depression and social phobia, adjusting for initial symptom levels. These results suggest that the removal of state variance bolsters the effect of temperament on psychopathology among clinically distressed individuals. Implications for temperament-psychopathology models, psychopathology assessment, and the stability of traits are discussed. PMID:24016004
Gallais, A
1992-01-01
For autotetraploid species, the development of the concept of test value (value in testcross) leads to a simple description of the variance among testcross progenies. When genetic effects are defined directly at the level of the value of the progenies, there is no contribution of tri- and tetragenic interactions. To estimate additive and dominance variances it is only necessary to have the population of progenies structured in half-sib or full-sib families; it is then possible to determine the presence of epistasis using a two-way mating design. When the theory of recurrent selection is applied, dominance variance can be neglected for the prediction of genetic advance in one cycle, as well as for the development of combined selection when progenies are structured in families. The results are similar to those for diploids with two-locus epistasis. The most efficient scheme consists of pair-crossing in off-season generations (for intercrossing) and simultaneous crossing of each plant to the tester. In comparison to the classical scheme, the relative efficiency of such a scheme is 41%. The use of combined selection will further increase this superiority.
Kerala: a unique model of development.
Kannan, K P; Thankappan, K R; Ramankutty, V; Aravindan, K P
1991-12-01
This article summarizes health in terms of morbidity, mortality, and maternal and child health, sex ratios, and population density in Kerala state, India, drawn from a more expanded report. Kerala state is known for a population that is highly literate, including among women, and poor in income, yet well advanced in the demographic transition. There is a declining population growth rate, a high average marriage age, a low fertility rate, and a high degree of population mobility. Among Kerala's unique features are its high female literacy, the favorable position of women in decision making, and a matrilineal inheritance mode. The rights of the poor and underprivileged have been upheld. The largest part of government revenue is spent on education, followed by health. Traditional healing systems such as ayurveda are strong in Kerala, and Christian missionaries have contributed to a caring tradition. Morbidity is high and mortality is low because medical interventions have affected mortality only. The reduction of poverty and environmentally related diseases has not been accomplished in spite of land reform, mass schooling, and generally egalitarian policies. Declines in mortality and in birth rates have led to a more adult and aged population, which increases the prevalence of chronic degenerative diseases. Historically, the death rate in Kerala was always lower (25/1000 in 1930 and 6.4 in 1986). The gains in mortality were made in reducing infant mortality (27/1000), which is 4 times lower than in India as a whole and comparable to Korea, Panama, Yugoslavia, Sri Lanka, and Colombia. Lower female mortality occurs at 0-4 years of age. Life expectancy, which was the same as India's in 1930, is currently 12 years higher than India's. Females have a higher expectation of life. The sex ratio in 1981 was 1032, compared to India's 935. Kerala was almost at replacement level in 1985. The crude birth rate is 21 versus 32 for India. In addition to the decline in death rates of those 5
NASA Astrophysics Data System (ADS)
Chernikova, Dina; Axell, Kåre; Avdic, Senada; Pázsit, Imre; Nordlund, Anders; Allard, Stefan
2015-05-01
Two versions of the neutron-gamma variance to mean (Feynman-alpha method or Feynman-Y function) formula for either gamma detection only or total neutron-gamma detection, respectively, are derived and compared in this paper. The new formulas have particular importance for detectors of either gamma photons or detectors sensitive to both neutron and gamma radiation. If applied to a plastic or liquid scintillation detector, the total neutron-gamma detection Feynman-Y expression corresponds to a situation where no discrimination is made between neutrons and gamma particles. The gamma variance to mean formulas are useful when a detector of only gamma radiation is used or when working with a combined neutron-gamma detector at high count rates. The theoretical derivation is based on the Chapman-Kolmogorov equation with the inclusion of general reactions and corresponding intensities for neutrons and gammas, but with the inclusion of prompt reactions only. A one energy group approximation is considered. The comparison of the two different theories is made by using reaction intensities obtained in MCNPX simulations with a simplified geometry for two scintillation detectors and a 252Cf-source. In addition, the variance to mean ratios, neutron, gamma and total neutron-gamma are evaluated experimentally for a weak 252Cf neutron-gamma source, a 137Cs random gamma source and a 22Na correlated gamma source. Due to the focus being on the possibility of using neutron-gamma variance to mean theories for both reactor and safeguards applications, we limited the present study to the general analytical expressions for Feynman-alpha formulas.
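The quantity at the heart of the Feynman-alpha method is the excess variance-to-mean ratio of detector counts in equal gate widths. The paper derives its neutron-gamma generalizations; the sketch below shows only the basic count-statistics definition.

```python
import numpy as np

def feynman_y(counts):
    """Feynman-Y (excess variance-to-mean) for counts collected in
    equal gate widths: Y = var/mean - 1.

    Uncorrelated (Poisson) counts give Y near 0; correlated multiplets,
    e.g. from fission chains, give Y > 0. Basic definition only, not
    the paper's neutron-gamma formulas.
    """
    c = np.asarray(counts, dtype=float)
    return c.var() / c.mean() - 1.0
```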
Unique sugar metabolic pathways of bifidobacteria.
Fushinobu, Shinya
2010-01-01
Bifidobacteria have many beneficial effects for human health. The gastrointestinal tract, where natural colonization of bifidobacteria occurs, is an environment poor in nutrition and oxygen. Therefore, bifidobacteria have many unique glycosidases, transporters, and metabolic enzymes for sugar fermentation to utilize diverse carbohydrates that are not absorbed by host humans and animals. They have a unique, effective central fermentative pathway called bifid shunt. Recently, a novel metabolic pathway that utilizes both human milk oligosaccharides and host glycoconjugates was found. The galacto-N-biose/lacto-N-biose I metabolic pathway plays a key role in colonization in the infant gastrointestinal tract. These pathways involve many unique enzymes and proteins. This review focuses on their molecular mechanisms, as revealed by biochemical and crystallographic studies.
Waste Isolation Pilot Plant No-Migration Variance Petition. Revision 1, Volume 1
Hunt, Arlen
1990-03-01
The purpose of the WIPP No-Migration Variance Petition is to demonstrate, according to the requirements of RCRA §3004(d) and 40 CFR §268.6, that, to a reasonable degree of certainty, there will be no migration of hazardous constituents from the facility for as long as the wastes remain hazardous. The DOE submitted the petition to the EPA in March 1989. Upon completion of its initial review, the EPA provided to DOE a Notice of Deficiencies (NOD). DOE responded to the EPA's NOD and met with the EPA's reviewers of the petition several times during 1989. In August 1989, EPA requested that DOE submit significant additional information addressing a variety of topics, including waste characterization, ground water hydrology, geology and dissolution features, monitoring programs, the gas generation test program, and other aspects of the project. This additional information was provided to EPA in January 1990, when DOE submitted Revision 1 of the Addendum to the petition. For clarity and ease of review, this document includes all of these submittals, and the information has been updated where appropriate. This document is divided into the following sections: Introduction, 1.0; Facility Description, 2.0; Waste Description, 3.0; Site Characterization, 4.0; Environmental Impact Analysis, 5.0; Prediction and Assessment of Infrequent Events, 6.0; and References, 7.0.
Lancastle, Deborah; Boivin, Jacky
2005-03-01
The aim of this study was to examine the unique and shared predictive power of psychological variables on reproductive physical health. Three months before fertility treatment, 97 women completed measures of dispositional optimism, trait anxiety, and coping. Information about biological response to treatment (e.g., estradiol level) was collected from medical charts after treatment. Structural equation modeling showed that measured psychological variables were all significant indicators of a single latent construct and that this construct was a better predictor of biological response to treatment than was any individual predictor. This research contributes to evidence suggesting that the health benefits of dispositional optimism are due to its shared variance with neuroticism.
Unique forbidden beta decays and neutrino mass
Dvornický, Rastislav; Šimkovic, Fedor
2015-10-28
The measurement of the electron energy spectrum in single β decays close to the endpoint provides a direct determination of the neutrino masses. The most sensitive experiments use β decays with low Q value, e.g. KATRIN (tritium) and MARE (rhenium). We present the theoretical spectral shape of electrons emitted in the first, second, and fourth unique forbidden β decays. Our findings show that the Kurie functions for these unique forbidden β transitions are linear in the limit of massless neutrinos, like the Kurie function of the allowed β decay of tritium.
Transcriptomics exposes the uniqueness of parasitic plants.
Ichihashi, Yasunori; Mutuku, J Musembi; Yoshida, Satoko; Shirasu, Ken
2015-07-01
Parasitic plants have the ability to obtain nutrients directly from other plants, and several species are serious biological threats to agriculture, parasitizing crops of high economic importance. The uniqueness of parasitic plants is characterized by the presence of a multicellular organ called a haustorium, which facilitates plant-plant interactions, and by the shutdown or reduction of their own photosynthesis. Current technical advances in next-generation sequencing and bioinformatics have allowed us to dissect the molecular mechanisms behind the uniqueness of parasitic plants at the genome-wide level. In this review, we summarize recent key findings, mainly in transcriptomics, that give us insights into the future direction of parasitic plant research.
On uniqueness for frictional contact rate problems
NASA Astrophysics Data System (ADS)
Radi, E.; Bigoni, D.; Tralli, A.
1999-02-01
A linear elastic solid having part of the boundary in unilateral frictional contact with a stiffer constraint is considered. Bifurcations of the quasistatic velocity problem are analyzed, making use of methods developed for elastoplasticity. An exclusion principle for bifurcation is proposed which is similar, in essence, to the well-known exclusion principle given by Hill, 1958. Sufficient conditions for uniqueness are given for a broad class of contact constitutive equations. The uniqueness criteria are based on the introduction of linear comparison interfaces, defined both where the contact rate constitutive equations are piecewise incrementally linear and where they are thoroughly nonlinear. Structural examples are proposed which give evidence of the applicability of the exclusion criteria.
Bright, Molly G; Murphy, Kevin
2015-07-01
Noise correction is a critical step towards accurate mapping of resting state BOLD fMRI connectivity. Noise sources related to head motion or physiology are typically modelled by nuisance regressors, and a generalised linear model is applied to regress out the associated signal variance. In this study, we use independent component analysis (ICA) to characterise the data variance typically discarded in this pre-processing stage in a cohort of 12 healthy volunteers. The signal variance removed by 24, 12, 6, or only 3 head motion parameters demonstrated network structure typically associated with functional connectivity, and certain networks were discernable in the variance extracted by as few as 2 physiologic regressors. Simulated nuisance regressors, unrelated to the true data noise, also removed variance with network structure, indicating that any group of regressors that randomly sample variance may remove highly structured "signal" as well as "noise." Furthermore, to support this we demonstrate that random sampling of the original data variance continues to exhibit robust network structure, even when as few as 10% of the original volumes are considered. Finally, we examine the diminishing returns of increasing the number of nuisance regressors used in pre-processing, showing that excessive use of motion regressors may do little better than chance in removing variance within a functional network. It remains an open challenge to understand the balance between the benefits and confounds of noise correction using nuisance regressors.
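The nuisance-regression step described above amounts to an ordinary least-squares fit of the regressors to each voxel time series, subtracting the fitted part. The sketch below is a generic GLM clean-up, not the paper's ICA analysis; the array shapes and names are illustrative.

```python
import numpy as np

def regress_out(data, nuisance):
    """Remove variance explained by nuisance regressors from each
    voxel time series via ordinary least squares.

    data     : (time, voxels) BOLD array
    nuisance : (time, regressors) matrix, e.g. motion parameters
    Returns the residual time series (data minus the fitted part).
    """
    # design matrix with an intercept column
    X = np.column_stack([np.ones(len(nuisance)), nuisance])
    beta, *_ = np.linalg.lstsq(X, data, rcond=None)
    return data - X @ beta
```

If the data are an exact linear combination of the regressors plus a constant, the residual is zero; anything orthogonal to the regressors is left untouched.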
ERIC Educational Resources Information Center
Abry, Tashia; Cash, Anne H.; Bradshaw, Catherine P.
2014-01-01
Generalizability theory (GT) offers a useful framework for estimating the reliability of a measure while accounting for multiple sources of error variance. The purpose of this study was to use GT to examine multiple sources of variance in and the reliability of school-level teacher and high school student behaviors as observed using the tool,…
29 CFR 1905.10 - Variances and other relief under section 6(b)(6)(A).
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 5 2010-07-01 2010-07-01 false Variances and other relief under section 6(b)(6)(A). 1905.10 Section 1905.10 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RULES OF PRACTICE FOR VARIANCES, LIMITATIONS, VARIATIONS, TOLERANCES, AND EXEMPTIONS UNDER THE...
ERIC Educational Resources Information Center
Penfield, Randall D.; Algina, James
2006-01-01
One approach to measuring unsigned differential test functioning is to estimate the variance of the differential item functioning (DIF) effect across the items of the test. This article proposes two estimators of the DIF effect variance for tests containing dichotomous and polytomous items. The proposed estimators are direct extensions of the…
Consistent Small-Sample Variances for Six Gamma-Family Measures of Ordinal Association
ERIC Educational Resources Information Center
Woods, Carol M.
2009-01-01
Gamma-family measures are bivariate ordinal correlation measures that form a family because they all reduce to Goodman and Kruskal's gamma in the absence of ties (1954). For several gamma-family indices, more than one variance estimator has been introduced. In previous research, the "consistent" variance estimator described by Cliff and…
29 CFR 1926.2 - Variances from safety and health standards.
Code of Federal Regulations, 2010 CFR
2010-07-01
...)(A) or 6(d) of the Williams-Steiger Occupational Safety and Health Act of 1970 (29 U.S.C. 65). The... for variances under the Williams-Steiger Occupational Safety and Health Act of 1970, and any requests for variances under Williams-Steiger Occupational Safety and Health Act with respect to...