On the Relations among Regular, Equal Unique Variances, and Image Factor Analysis Models.
ERIC Educational Resources Information Center
Hayashi, Kentaro; Bentler, Peter M.
2000-01-01
Investigated the conditions under which the matrix of factor loadings from the factor analysis model with equal unique variances will give a good approximation to the matrix of factor loadings from the regular factor analysis model. Extends the results to the image factor analysis model. Discusses implications for practice. (SLD)
Development of Statistically Parallel Tests by Analysis of Unique Item Variance.
ERIC Educational Resources Information Center
Ree, Malcolm James
A method for developing statistically parallel tests based on the analysis of unique item variance was developed. A test population of 907 basic airmen trainees was required to estimate the angle at which an object in a photograph was viewed, selecting from eight possibilities. A FORTRAN program known as VARSEL was used to rank all the test items…
Adding a Parameter Increases the Variance of an Estimated Regression Function
ERIC Educational Resources Information Center
Withers, Christopher S.; Nadarajah, Saralees
2011-01-01
The linear regression model is one of the most popular models in statistics. It is also one of the simplest models in statistics. It has received applications in almost every area of science, engineering and medicine. In this article, the authors show that adding a predictor to a linear model increases the variance of the estimated regression…
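The claim can be checked directly: for ordinary least squares, the variance of the fitted value at a point x0 is σ² · x0ᵀ(XᵀX)⁻¹x0, and appending a column to the design matrix can only increase that quadratic form. A minimal numerical sketch (illustrative data; the helper name `prediction_variance_factor` is ours, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)

# Design matrices: intercept + x1, and the same model plus predictor x2.
X_small = np.column_stack([np.ones(n), x1])
X_big = np.column_stack([np.ones(n), x1, x2])

def prediction_variance_factor(X, x0):
    """Return x0' (X'X)^{-1} x0, the factor that multiplies sigma^2 in
    Var(y_hat(x0)) for ordinary least squares."""
    return x0 @ np.linalg.inv(X.T @ X) @ x0

x0_small = np.array([1.0, 0.5])
x0_big = np.array([1.0, 0.5, 0.5])  # same point, extended by the new predictor

v_small = prediction_variance_factor(X_small, x0_small)
v_big = prediction_variance_factor(X_big, x0_big)
# v_small <= v_big: adding the predictor never decreases the variance
# of the fitted value at a given point.
```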
Finger gnosis predicts a unique but small part of variance in initial arithmetic performance.
Wasner, Mirjam; Nuerk, Hans-Christoph; Martignon, Laura; Roesch, Stephanie; Moeller, Korbinian
2016-06-01
Recent studies indicated that finger gnosis (i.e., the ability to perceive and differentiate one's own fingers) is associated reliably with basic numerical competencies. In this study, we aimed at examining whether finger gnosis is also a unique predictor for initial arithmetic competencies at the beginning of first grade, and thus before formal math instruction starts. Therefore, we controlled for influences of domain-specific numerical precursor competencies, domain-general cognitive ability, and natural variables such as gender and age. Results from 321 German first-graders revealed that finger gnosis indeed predicted a unique and relevant but nevertheless only small part of the variance in initial arithmetic performance (∼1%-2%) as compared with influences of general cognitive ability and numerical precursor competencies. Taken together, these results substantiated the notion of a unique association between finger gnosis and arithmetic and further corroborate the theoretical idea of finger-based representations contributing to numerical cognition. However, the only small part of variance explained by finger gnosis seems to limit its relevance for diagnostic purposes. PMID:26895483
Schindler, Suzanne Elizabeth; Fagan, Anne M.
2015-01-01
Our understanding of the pathogenesis of Alzheimer disease (AD) has been greatly influenced by investigation of rare families with autosomal dominant mutations that cause early onset AD. Mutations in the genes coding for amyloid precursor protein (APP), presenilin 1 (PSEN-1), and presenilin 2 (PSEN-2) cause over-production of the amyloid-β peptide (Aβ) leading to early deposition of Aβ in the brain, which in turn is hypothesized to initiate a cascade of processes, resulting in neuronal death, cognitive decline, and eventual dementia. Studies of cerebrospinal fluid (CSF) from individuals with the common form of AD, late-onset AD (LOAD), have revealed that low CSF Aβ42 and high CSF tau are associated with AD brain pathology. Herein, we review the literature on CSF biomarkers in autosomal dominant AD (ADAD), which has contributed to a detailed road map of AD pathogenesis, especially during the preclinical period, prior to the appearance of any cognitive symptoms. Current drug trials are also taking advantage of the unique characteristics of ADAD and utilizing CSF biomarkers to accelerate development of effective therapies for AD. PMID:26175713
MCNP variance reduction overview
Hendricks, J.S.; Booth, T.E.
1985-01-01
The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code.
Estimation of Variance Components Using Computer Packages.
ERIC Educational Resources Information Center
Chastain, Robert L.; Willson, Victor L.
Generalizability theory is based upon analysis of variance (ANOVA) and requires estimation of variance components for the ANOVA design under consideration in order to compute either G (Generalizability) or D (Decision) coefficients. Estimation of variance components has a number of alternative methods available using SAS, BMDP, and ad hoc…
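For the simplest one-facet (persons × items) crossed design, the ANOVA estimates of the variance components follow directly from the mean squares. A hedged sketch of the standard estimators (illustrative data, not tied to SAS or BMDP output):

```python
import numpy as np

def g_study(scores):
    """ANOVA variance-component estimates for a crossed persons x items
    design (one-facet G study). Returns (var_p, var_i, var_res)."""
    scores = np.asarray(scores, dtype=float)
    n_p, n_i = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    item_means = scores.mean(axis=0)

    ss_p = n_i * ((person_means - grand) ** 2).sum()
    ss_i = n_p * ((item_means - grand) ** 2).sum()
    ss_tot = ((scores - grand) ** 2).sum()
    ss_res = ss_tot - ss_p - ss_i

    ms_p = ss_p / (n_p - 1)
    ms_i = ss_i / (n_i - 1)
    ms_res = ss_res / ((n_p - 1) * (n_i - 1))

    var_res = ms_res                    # person x item interaction + error
    var_p = (ms_p - ms_res) / n_i       # universe-score (person) variance
    var_i = (ms_i - ms_res) / n_p       # item variance
    return var_p, var_i, var_res

scores = np.array([[7, 6, 8],
                   [5, 4, 5],
                   [9, 8, 9],
                   [4, 3, 5]])
var_p, var_i, var_res = g_study(scores)
# Generalizability coefficient for the mean over the 3 items.
g_coef = var_p / (var_p + var_res / 3)
```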
Wilkinson, Eduan; Holzmayer, Vera; Jacobs, Graeme B; de Oliveira, Tulio; Brennan, Catherine A; Hackett, John; van Rensburg, Estrelita Janse; Engelbrecht, Susan
2015-04-01
By the end of 2012, more than 6.1 million people were infected with HIV-1 in South Africa. Subtype C was responsible for the majority of these infections and more than 300 near full-length genomes (NFLGs) have been published. Currently very few non-subtype C isolates have been identified and characterized within the country, particularly full genome non-C isolates. Seven patients from the Tygerberg Virology (TV) cohort were previously identified as possible non-C subtypes and were selected for further analyses. RNA was isolated from five individuals (TV047, TV096, TV101, TV218, and TV546) and DNA from TV016 and TV1057. The NFLGs of these samples were amplified in overlapping fragments and sequenced. Online subtyping tools REGA version 3 and jpHMM were used to screen for subtypes and recombinants. Maximum likelihood (ML) phylogenetic analysis (phyML) was used to infer subtypes and SimPlot was used to confirm possible intersubtype recombinants. We identified three subtype B (TV016, TV047, and TV1057) isolates, one subtype A1 (TV096), one subtype G (TV546), one unique AD (TV101), and one unique AC (TV218) recombinant form. This is the first NFLG of subtype G that has been described in South Africa. The subtype B sequences described also increased the NFLG subtype B sequences in Africa from three to six. There is a need for more NFLG sequences, as partial HIV-1 sequences may underrepresent viral recombinant forms. It is also necessary to continue monitoring the evolution and spread of HIV-1 in South Africa, because understanding viral diversity may play an important role in HIV-1 prevention strategies.
NASA Astrophysics Data System (ADS)
Good, Ron; Fletcher, Harold J.
The importance of reporting explained variance (sometimes referred to as magnitude of effects) in ANOVA designs is discussed in this paper. Explained variance is an estimate of the strength of the relationship between treatment (or other factors such as sex, grade level, etc.) and dependent variables of interest to the researcher(s). Three methods that can be used to obtain estimates of explained variance in ANOVA designs are described and applied to 16 studies that were reported in recent volumes of this journal. The results show that, while in most studies the treatment accounts for a relatively small proportion of the variance in dependent variable scores, in some studies the magnitude of the treatment effect is respectable. The authors recommend that researchers in science education report explained variance in addition to the commonly reported tests of significance, since the latter are inadequate as the sole basis for making decisions about the practical importance of factors of interest to science education researchers.
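The simplest such estimate in a one-way design is eta squared, the ratio of the between-group to the total sum of squares. A small worked sketch (hypothetical scores):

```python
import numpy as np

def eta_squared(groups):
    """Proportion of total variance in the scores accounted for by
    group membership (SS_between / SS_total)."""
    all_scores = np.concatenate(groups)
    grand = all_scores.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_total = ((all_scores - grand) ** 2).sum()
    return ss_between / ss_total

treatment = [12.0, 14.0, 11.0, 15.0]
control = [10.0, 9.0, 11.0, 10.0]
print(round(eta_squared([treatment, control]), 3))  # 0.6
```

A significant F test with an eta squared of, say, 0.02 would signal a statistically reliable but practically minor treatment effect, which is exactly the distinction the authors urge researchers to report.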
Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A. E-mail: rix@mpia.de E-mail: janewman@pitt.edu
2011-04-20
Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic variance is
NASA Astrophysics Data System (ADS)
Chabuda, Krzysztof; Leroux, Ian D.; Demkowicz-Dobrzański, Rafał
2016-08-01
The instability of an atomic clock is characterized by the Allan variance, a measure widely used to describe the noise of frequency standards. We provide an explicit method to find the ultimate bound on the Allan variance of an atomic clock in the most general scenario where N atoms are prepared in an arbitrarily entangled state and arbitrary measurement and feedback are allowed, including those exploiting coherences between succeeding interrogation steps. While the method is rigorous and general, it becomes numerically challenging for large N and long averaging times.
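As background for the quantity being bounded: the (non-overlapping) Allan variance at averaging factor m is half the mean squared difference between successive m-sample frequency averages. A minimal sketch on synthetic white frequency noise (illustrative noise level; this is the classical estimator, not the quantum bound computed in the paper):

```python
import numpy as np

def allan_variance(y, m=1):
    """Non-overlapping Allan variance of fractional-frequency samples y
    at averaging factor m: half the mean squared difference between
    successive m-sample frequency averages."""
    y = np.asarray(y, dtype=float)
    n_blocks = len(y) // m
    block_means = y[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    diffs = np.diff(block_means)
    return 0.5 * np.mean(diffs ** 2)

# For white frequency noise the Allan variance at m=1 matches the
# ordinary variance of y in expectation.
rng = np.random.default_rng(0)
y = rng.normal(0.0, 1e-12, 100_000)
avar1 = allan_variance(y, m=1)
avar10 = allan_variance(y, m=10)   # averages down roughly as 1/m
```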
Videotape Project in Child Variance. Final Report.
ERIC Educational Resources Information Center
Morse, William C.; Smith, Judith M.
The design, production, dissemination, and evaluation of a series of videotaped training packages designed to enable teachers, parents, and paraprofessionals to interpret child variance in light of personal and alternative perspectives of behavior are discussed. The goal of each package is to highlight unique contributions of different theoretical…
Variance Anisotropy in Kinetic Plasmas
NASA Astrophysics Data System (ADS)
Parashar, Tulasi N.; Oughton, Sean; Matthaeus, William H.; Wan, Minping
2016-06-01
Solar wind fluctuations admit well-documented anisotropies of the variance matrix, or polarization, related to the mean magnetic field direction. Typically, one finds a ratio of perpendicular variance to parallel variance of the order of 9:1 for the magnetic field. Here we study the question of whether a kinetic plasma spontaneously generates and sustains parallel variances when initiated with only perpendicular variance. We find that parallel variance grows and saturates at about 5% of the perpendicular variance in a few nonlinear times irrespective of the Reynolds number. For sufficiently large systems (Reynolds numbers) the variance approaches values consistent with the solar wind observations.
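The variance anisotropy quoted above can be computed from field data as the variance of fluctuations along versus transverse to the mean field direction. An illustrative sketch on synthetic data (the 3:3:1 standard deviations are invented to give a perpendicular-to-parallel ratio near 18, not solar wind values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic magnetic-field time series: a mean field along z with
# strong perpendicular (x, y) and weak parallel (z) fluctuations.
n = 10000
b = np.column_stack([
    rng.normal(0.0, 3.0, n),        # x fluctuations (perpendicular)
    rng.normal(0.0, 3.0, n),        # y fluctuations (perpendicular)
    5.0 + rng.normal(0.0, 1.0, n),  # z: mean field plus parallel fluctuations
])

b0 = b.mean(axis=0)                  # mean field vector
e_par = b0 / np.linalg.norm(b0)      # unit vector along the mean field
db = b - b0                          # fluctuations about the mean

var_par = np.var(db @ e_par)         # variance along the mean field
var_tot = np.var(db, axis=0).sum()   # trace of the variance matrix
var_perp = var_tot - var_par         # variance transverse to the mean field

ratio = var_perp / var_par           # close to (3^2 + 3^2) / 1^2 = 18 here
```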
Conversations across Meaning Variance
ERIC Educational Resources Information Center
Cordero, Alberto
2013-01-01
Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…
ERIC Educational Resources Information Center
Braun, W. John
2012-01-01
The Analysis of Variance is often taught in introductory statistics courses, but it is not clear that students really understand the method. This is because the derivation of the test statistic and p-value requires a relatively sophisticated mathematical background which may not be well-remembered or understood. Thus, the essential concept behind…
Minimum variance geographic sampling
NASA Technical Reports Server (NTRS)
Terrell, G. R. (Principal Investigator)
1980-01-01
Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.
Nuclear Material Variance Calculation
1995-01-01
MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet that significantly reduces the effort required to make the variance and covariance calculations needed to determine the detection sensitivity of a materials accounting system and loss of special nuclear material (SNM). The user is required to enter information into one of four data tables depending on the type of term in the materials balance (MB) equation. The four data tables correspond to input transfers, output transfers, and two types of inventory terms, one for nondestructive assay (NDA) measurements and one for measurements made by chemical analysis. Each data entry must contain an identification number and a short description, as well as values for the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements during an accounting period. The user must also specify the type of error model (additive or multiplicative) associated with each measurement, and possible correlations between transfer terms. Predefined spreadsheet macros are used to perform the variance and covariance calculations for each term based on the corresponding set of entries. MAVARIC has been used for sensitivity studies of chemical separation facilities, fuel processing and fabrication facilities, and gas centrifuge and laser isotope enrichment facilities.
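The per-term bookkeeping can be illustrated with a toy error-propagation function. This is a hypothetical sketch in the spirit of the description above (independent errors, no correlations between terms), not MAVARIC's actual formulas:

```python
def transfer_term_variance(concentration, bulk_mass, n_meas,
                           sd_additive=0.0, rel_sd_multiplicative=0.0):
    """Variance of one materials-balance transfer term. Each of n_meas
    measurements of SNM mass m = concentration * bulk_mass carries an
    additive error (std sd_additive) and/or a multiplicative error
    (relative std rel_sd_multiplicative); errors assumed independent."""
    m = concentration * bulk_mass
    var_one = sd_additive ** 2 + (rel_sd_multiplicative * m) ** 2
    return n_meas * var_one

# Illustrative numbers: 4 items, 5% concentration, 200 kg bulk mass,
# 1% relative measurement error (multiplicative model).
v = transfer_term_variance(concentration=0.05, bulk_mass=200.0,
                           n_meas=4, rel_sd_multiplicative=0.01)
# m = 10 kg per item; var per item = (0.01 * 10)^2 = 0.01; total = 0.04
```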
Nominal analysis of "variance".
Weiss, David J
2009-08-01
Nominal responses are the natural way for people to report actions or opinions. Because nominal responses do not generate numerical data, they have been underutilized in behavioral research. On those occasions in which nominal responses are elicited, the responses are customarily aggregated over people or trials so that large-sample statistics can be employed. A new analysis is proposed that directly associates differences among responses with particular sources in factorial designs. A pair of nominal responses either matches or does not; when responses do not match, they vary. That analogue to variance is incorporated in the nominal analysis of "variance" (NANOVA) procedure, wherein the proportions of matches associated with sources play the same role as do sums of squares in an ANOVA. The NANOVA table is structured like an ANOVA table. The significance levels of the N ratios formed by comparing proportions are determined by resampling. Fictitious behavioral examples featuring independent groups and repeated measures designs are presented. A Windows program for the analysis is available.
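The core idea, scoring pairs of nominal responses as match or mismatch and judging the match proportion against a resampling distribution, can be sketched for the simplest case of two independent groups. This is our own minimal analogue; the full NANOVA handles factorial sources and repeated measures:

```python
import itertools
import random

def match_proportion(responses):
    """Proportion of matching pairs among all unordered pairs of responses."""
    pairs = list(itertools.combinations(responses, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

def nanova_two_groups(group1, group2, n_resamples=2000, seed=0):
    """Permutation test: do nominal responses match more often within
    groups than expected under random relabeling of group membership?"""
    observed = (match_proportion(group1) + match_proportion(group2)) / 2
    pooled = list(group1) + list(group2)
    rng = random.Random(seed)
    count = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        r1, r2 = pooled[:len(group1)], pooled[len(group1):]
        stat = (match_proportion(r1) + match_proportion(r2)) / 2
        if stat >= observed:
            count += 1
    return observed, count / n_resamples

# Two groups with strongly different modal responses.
g1 = ["yes"] * 9 + ["no"]
g2 = ["no"] * 9 + ["yes"]
obs, p_value = nanova_two_groups(g1, g2)
```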
Systems Engineering Programmatic Estimation Using Technology Variance
NASA Technical Reports Server (NTRS)
Mog, Robert A.
2000-01-01
Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "return" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.
A proxy for variance in dense matching over homogeneous terrain
NASA Astrophysics Data System (ADS)
Altena, Bas; Cockx, Liesbet; Goedemé, Toon
2014-05-01
Automation in photogrammetry and avionics have brought highly autonomous UAV mapping solutions on the market. These systems have great potential for geophysical research, due to their mobility and simplicity of work. Flight planning can be done on site and orientation parameters are estimated automatically. However, one major drawback is still present: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though more robustness is added in the estimation through multi-view geometry, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, its epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated in respect to the intensity signal of the topography (SNR) and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two different case studies were elaborated. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, and resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimations of the topography and was compared with the data from the other UAV flight. In the second study a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. Although the beach had a very low
Cosmology without cosmic variance
Bernstein, Gary M.; Cai, Yan -Chuan
2011-10-01
The growth of structures in the Universe is described by a function G that is predicted by the combination of the expansion history of the Universe and the laws of gravity within it. We examine the improvements in constraints on G that are available from the combination of a large-scale galaxy redshift survey with a weak gravitational lensing survey of background sources. We describe a new combination of such observations that in principle yields a measure of the growth rate that is free of sample variance, i.e. the uncertainty in G can be reduced without bound by increasing the number of redshifts obtained within a finite survey volume. The addition of background weak lensing data to a redshift survey increases information on G by an amount equivalent to a 10-fold increase in the volume of a standard redshift-space distortion measurement - if the lensing signal can be measured to sub-per cent accuracy. This argues that a combined lensing and redshift survey over a common low-redshift volume of the Universe is a more powerful test of general relativity than an isolated redshift survey over larger volume at high redshift, especially as surveys begin to cover most of the available sky.
Budget variance analysis using RVUs.
Berlin, M F; Budzynski, M R
1998-01-01
This article details the use of variance analysis as a management tool to evaluate the financial health of the practice. A common financial tool for administrators has been a simple calculation measuring the difference between actual financials and budgeted financials. Standard cost accounting provides a methodology known as variance analysis to better understand the actual vs. budgeted financial streams. The standard variance analysis has been modified by applying relative value units (RVUs) as standards for the practice. PMID:10387247
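A common form of such an RVU-based decomposition splits the total budget variance into a volume component (the RVU count changed) and a rate component (the cost per RVU changed). This is a generic sketch of that idea, not necessarily the authors' exact formulas:

```python
def rvu_variance(budget_rvus, budget_cost_per_rvu,
                 actual_rvus, actual_cost_per_rvu):
    """Split total budget variance into volume and rate components.
    The two components sum exactly to the total variance."""
    total = actual_rvus * actual_cost_per_rvu - budget_rvus * budget_cost_per_rvu
    volume = (actual_rvus - budget_rvus) * budget_cost_per_rvu
    rate = (actual_cost_per_rvu - budget_cost_per_rvu) * actual_rvus
    return total, volume, rate

# Hypothetical practice numbers.
total, volume, rate = rvu_variance(
    budget_rvus=1000, budget_cost_per_rvu=40.0,
    actual_rvus=1100, actual_cost_per_rvu=42.0)
# volume = 100 * 40 = 4000; rate = 2 * 1100 = 2200; total = 6200
```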
Understanding gender variance in children and adolescents.
Simons, Lisa K; Leibowitz, Scott F; Hidalgo, Marco A
2014-06-01
Gender variance is an umbrella term used to describe gender identity, expression, or behavior that falls outside of culturally defined norms associated with a specific gender. In recent years, growing media coverage has heightened public awareness about gender variance in childhood and adolescence, and an increasing number of referrals to clinics specializing in care for gender-variant youth have been reported in the United States. Gender-variant expression, behavior, and identity may present in childhood and adolescence in a number of ways, and youth with gender variance have unique health needs. For those experiencing gender dysphoria, or distress encountered by the discordance between biological sex and gender identity, puberty is often an exceptionally challenging time. Pediatric primary care providers may be families' first resource for education and support, and they play a critical role in supporting the health of youth with gender variance by screening for psychosocial problems and health risks, referring for gender-specific mental health and medical care, and providing ongoing advocacy and support. PMID:24972420
Variant evolutionary trees under phenotypic variance.
Nishimura, Kinya; Isoda, Yutaka
2004-01-01
Evolutionary branching, a coevolutionary phenomenon in which two or more distinctive traits develop from a single trait in a population, is the subject of recent studies on adaptive dynamics. Previous studies revealed that trait variance is a minimum requirement for evolutionary branching, but that it does not play an important role in the formation of an evolutionary pattern of branching. Here we demonstrate that trait evolution exhibits various evolutionary branching paths, starting from an identical initial trait and arriving at different evolutionary terminus traits, determined only by changing the assumption of trait variance. The key feature of this phenomenon is the topological configuration of equilibria and the initial point in the manifold of dimorphism from which dimorphic branches develop. This suggests that the existing monomorphic or polymorphic set in a population is not a unique, inevitable consequence of an identical initial phenotype.
Sampling Errors of Variance Components.
ERIC Educational Resources Information Center
Sanders, Piet F.
A study on sampling errors of variance components was conducted within the framework of generalizability theory by P. L. Smith (1978). The study used an intuitive approach for solving the problem of how to allocate the number of conditions to different facets in order to produce the most stable estimate of the universe score variance. Optimization…
NASA Astrophysics Data System (ADS)
Anabalón, Andrés; Astefanesei, Dumitru; Choque, David
2016-11-01
We construct exact hairy AdS soliton solutions in Einstein-dilaton gravity theory. We examine their thermodynamic properties and discuss the role of these solutions for the existence of first order phase transitions for hairy black holes. The negative energy density associated to hairy AdS solitons can be interpreted as the Casimir energy that is generated in the dual field theory when the fermions are antiperiodic on the compact coordinate.
The Variance Reaction Time Model
ERIC Educational Resources Information Center
Sikstrom, Sverker
2004-01-01
The variance reaction time model (VRTM) is proposed to account for various recognition data on reaction time, the mirror effect, receiver-operating-characteristic (ROC) curves, etc. The model is based on simple and plausible assumptions within a neural network: VRTM is a two layer neural network where one layer represents items and one layer…
Analysis of Variance: Variably Complex
ERIC Educational Resources Information Center
Drummond, Gordon B.; Vowler, Sarah L.
2012-01-01
These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution of…
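The group-difference question reduces to an F ratio of between-group to within-group mean squares, which is the essential concept the authors argue students often miss. A minimal worked example (hypothetical scores):

```python
import numpy as np

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: ratio of between-group to
    within-group mean squares."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    ms_between = ss_between / (k - 1)   # df = k - 1
    ms_within = ss_within / (n - k)     # df = n - k
    return ms_between / ms_within

g1 = [4.0, 5.0, 6.0]
g2 = [7.0, 8.0, 9.0]
g3 = [10.0, 11.0, 12.0]
print(round(one_way_anova_f([g1, g2, g3]), 2))  # 27.0
```

A large F says at least two group means differ by more than the within-group noise would suggest; the p-value then comes from the F distribution with (k-1, n-k) degrees of freedom.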
Assessment of the genetic variance of late-onset Alzheimer's disease.
Ridge, Perry G; Hoyt, Kaitlyn B; Boehme, Kevin; Mukherjee, Shubhabrata; Crane, Paul K; Haines, Jonathan L; Mayeux, Richard; Farrer, Lindsay A; Pericak-Vance, Margaret A; Schellenberg, Gerard D; Kauwe, John S K
2016-05-01
Alzheimer's disease (AD) is a complex genetic disorder with no effective treatments. More than 20 common markers associated with AD have been identified. Recently, several rare variants that affect risk for AD have been identified in Amyloid Precursor Protein (APP), Triggering Receptor Expressed On Myeloid Cells 2 (TREM2), and Unc-5 Netrin Receptor C (UNC5C). Despite these successes, the genetic architecture of AD remains unsolved. We used Genome-wide Complex Trait Analysis to (1) estimate the phenotypic variance explained by genetics; (2) calculate the genetic variance explained by known AD single nucleotide polymorphisms (SNPs); and (3) identify the genomic locations of variation that explain the remaining genetic variance. In total, 53.24% of phenotypic variance is explained by genetics, but known AD SNPs explain only 30.62% of the genetic variance. Of the unexplained genetic variance, approximately 41% is explained by unknown SNPs in regions adjacent to known AD SNPs, while the remainder lies outside these regions. PMID:27036079
ERIC Educational Resources Information Center
UCLA IDEA, 2012
2012-01-01
Value added measures (VAM) uses changes in student test scores to determine how much "value" an individual teacher has "added" to student growth during the school year. Some policymakers, school districts, and educational advocates have applauded VAM as a straightforward measure of teacher effectiveness: the better a teacher, the better students…
Variance decomposition in stochastic simulators
NASA Astrophysics Data System (ADS)
Le Maître, O. P.; Knio, O. M.; Moraes, A.
2015-06-01
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity, and of the corresponding sensitivities, in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated with simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
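The paper's Poisson-process reformulation is specific to reaction networks, but the underlying variance decomposition can be sketched generically. Below is a hedged toy example, not the paper's algorithm: first-order Sobol indices for f(x1, x2) = x1 + 2*x2 with independent standard-normal inputs, estimated by the pick-freeze Monte Carlo method. Analytically Var(f) = 1 + 4 = 5, so S1 = 0.2 and S2 = 0.8.

```python
# Toy Sobol first-order variance decomposition via pick-freeze (illustrative only).
import random

random.seed(0)
N = 200_000
f = lambda x1, x2: x1 + 2.0 * x2   # toy model; analytic S1 = 0.2, S2 = 0.8

# Two independent input samples.
A = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
B = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]

fA = [f(*a) for a in A]
mu = sum(fA) / N
var = sum((y - mu) ** 2 for y in fA) / N

def first_order(i):
    # Freeze coordinate i from A, resample the other coordinate from B.
    mixed = [f(a[0] if i == 0 else b[0], a[1] if i == 1 else b[1])
             for a, b in zip(A, B)]
    cov = sum(ya * ym for ya, ym in zip(fA, mixed)) / N - mu * mu
    return cov / var   # fraction of output variance due to input i alone

print("S1 ~", round(first_order(0), 2))
print("S2 ~", round(first_order(1), 2))
```

For this additive model the two first-order indices sum to one; interaction terms in the Sobol-Hoeffding decomposition would show up as a shortfall from one.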
Neutrino mass without cosmic variance
NASA Astrophysics Data System (ADS)
LoVerde, Marilena
2016-05-01
Measuring the absolute scale of the neutrino masses is one of the most exciting opportunities available with near-term cosmological data sets. Two quantities that are sensitive to neutrino mass, scale-dependent halo bias b(k) and the linear growth parameter f(k) inferred from redshift-space distortions, can be measured without cosmic variance. Unlike the amplitude of the matter power spectrum, which always has a finite error, the error on b(k) and f(k) continues to decrease as the number density of tracers increases. This paper presents forecasts for statistics of galaxy and lensing fields that are sensitive to neutrino mass via b(k) and f(k). The constraints on neutrino mass from the auto- and cross-power spectra of spectroscopic and photometric galaxy samples are weakened by scale-dependent bias unless a very high density of tracers is available. In the high-density limit, using multiple tracers allows cosmic variance to be beaten, and the forecasted errors on neutrino mass shrink dramatically. In practice, beating the cosmic-variance errors on neutrino mass with b(k) will be a challenge, but this signal is nevertheless a new probe of neutrino effects on structure formation that is interesting in its own right.
Variance Components: Partialled vs. Common.
ERIC Educational Resources Information Center
Curtis, Ervin W.
1985-01-01
A new approach to partialling components is used. Like conventional partialling, this approach orthogonalizes variables by partitioning the scores or observations. Unlike conventional partialling, it yields a common component and two unique components. (Author/GDC)
Variance analysis. Part I, Extending flexible budget variance analysis to acuity.
Finkler, S A
1991-01-01
The author reviews the concepts of flexible budget variance analysis, including the price, quantity, and volume variances generated by that technique. He also introduces the concept of acuity variance and provides direction on how such a variance measure can be calculated. Part II in this two-part series on variance analysis will look at how personal computers can be useful in the variance analysis process. PMID:1870002
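The three-way split that the abstract reviews can be shown with invented numbers. This is a hedged sketch using one common attribution convention (the exact factor weighting varies by textbook), not necessarily Finkler's: the price, quantity, and volume variances sum exactly to the total cost variance.

```python
# Illustrative flexible budget variance arithmetic (all numbers invented;
# the attribution convention is one common choice, not the article's).
budget_price = 30.0        # budgeted cost per nursing hour
actual_price = 32.0        # actual cost per nursing hour
budget_hours_per_pt = 4.0  # budgeted hours per patient (an acuity-style driver)
actual_hours_per_pt = 4.5
budget_patients = 100
actual_patients = 110

# Price variance: price difference x actual usage
price_var = (actual_price - budget_price) * actual_hours_per_pt * actual_patients
# Quantity variance: usage difference x budgeted price x actual volume
quantity_var = (actual_hours_per_pt - budget_hours_per_pt) * budget_price * actual_patients
# Volume variance: volume difference x budgeted usage x budgeted price
volume_var = (actual_patients - budget_patients) * budget_hours_per_pt * budget_price

total_var = price_var + quantity_var + volume_var
actual_cost = actual_price * actual_hours_per_pt * actual_patients
budget_cost = budget_price * budget_hours_per_pt * budget_patients
# The three components reconcile exactly with actual-minus-budget cost.
assert abs(total_var - (actual_cost - budget_cost)) < 1e-9
print(price_var, quantity_var, volume_var)
```

The telescoping structure of the three terms is what makes the reconciliation exact, whichever convention is used for which factors are held at budget versus actual.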
Practice reduces task relevant variance modulation and forms nominal trajectory
Osu, Rieko; Morishige, Ken-ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo
2015-01-01
Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task relevant variance modulation as an indication of online feedback control strategies to cope with motor variability. Meanwhile, it has been discussed that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both spatial and temporal domains to elucidate the relative contribution of these control schemas. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories with the reduction of task relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise both on the nominal trajectory and motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of computation seems to be taken over by the feedforward controller around the nominal trajectory with feedback added only when it becomes necessary. PMID:26639942
Maximization of total genetic variance in breed conservation programmes.
Cervantes, I; Meuwissen, T H E
2011-12-01
The preservation of the maximum genetic diversity in a population is one of the main objectives of a breed conservation programme. We applied the maximum variance total (MVT) method to a single population in order to maximize the total genetic variance. The function maximization was performed by a simulated annealing algorithm. We selected the parents and the mating scheme simultaneously by simply maximizing the total genetic variance (a mate selection problem). The scenario was compared with a scenario of full-sib lines, an MVT scenario with a restriction on the rate of inbreeding, and a minimum coancestry selection scenario. The MVT method produces sublines in a population, yielding a scheme similar to full-sib sublining; this agrees with other authors' finding that the maximum genetic diversity in a population (the lowest overall coancestry) is attained in the long term by subdividing it into as many isolated groups as possible. Applying a restriction on the rate of inbreeding jointly with the MVT method avoids the consequences of inbreeding depression and maintains the effective size at an acceptable minimum. The minimum coancestry selection scenario gave higher effective size values but a lower total genetic variance. Maximizing the total genetic variance ensures more genetic variation for extreme traits, which could be useful in case the population needs to adapt to a new environment or production system.
Warped functional analysis of variance.
Gervini, Daniel; Carter, Patrick A
2014-09-01
This article presents an Analysis of Variance model for functional data that explicitly incorporates phase variability through a time-warping component, allowing for a unified approach to estimation and inference in the presence of amplitude and time variability. The focus is on single-random-factor models, but the approach can be easily generalized to more complex ANOVA models. The behavior of the estimators is studied by simulation, and an application to the analysis of growth curves of flour beetles is presented. Although the model assumes a smooth latent process behind the observed trajectories, smoothness of the observed data is not required; the method can be applied to irregular time grids, which are common in longitudinal studies.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 7 2011-07-01 2011-07-01 false Variances. 1920.2 Section 1920.2 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR (CONTINUED...' COMPENSATION ACT § 1920.2 Variances. (a) Variances from standards in parts 1915 through 1918 of this...
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application..., practices, means, methods, operations, or processes used or proposed to be used by the contractor; and...
Increasing selection response by Bayesian modeling of heterogeneous environmental variances
Technology Transfer Automated Retrieval System (TEKTRAN)
Heterogeneity of environmental variance among genotypes reduces selection response because genotypes with higher variance are more likely to be selected than low-variance genotypes. Modeling heterogeneous variances to obtain weighted means corrected for heterogeneous variances is difficult in likel...
Restricted sample variance reduces generalizability.
Lakes, Kimberley D
2013-06-01
One factor that affects the reliability of observed scores is restriction of range on the construct measured for a particular group of study participants. This study illustrates how researchers can use generalizability theory to evaluate the impact of restriction of range in particular sample characteristics on the generalizability of test scores and to estimate how changes in measurement design could improve the generalizability of the test scores. An observer-rated measure of child self-regulation (Response to Challenge Scale; Lakes, 2011) is used to examine scores for 198 children (Grades K through 5) within the generalizability theory (GT) framework. The generalizability of ratings within relatively developmentally homogeneous samples is examined and illustrates the effect of reduced variance among ratees on generalizability. Forecasts for g coefficients of various D study designs demonstrate how higher generalizability could be achieved by increasing the number of raters or items. In summary, the research presented illustrates the importance of and procedures for evaluating the generalizability of a set of scores in a particular research context. PMID:23205627
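The attenuation the abstract describes can be shown with a toy simulation, not the article's data: two parallel ratings of the same ratees correlate strongly over the full range of the construct, but much more weakly when only a developmentally homogeneous (restricted) band of ratees is kept.

```python
# Toy range-restriction simulation (illustrative only; not the article's data).
import random

random.seed(1)

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# True construct scores plus two independent rating errors.
true_scores = [random.gauss(0, 1) for _ in range(5000)]
rating1 = [t + random.gauss(0, 0.5) for t in true_scores]
rating2 = [t + random.gauss(0, 0.5) for t in true_scores]

full = corr(rating1, rating2)
# Keep only ratees in a narrow band of the construct (a homogeneous sample).
keep = [i for i, t in enumerate(true_scores) if -0.5 < t < 0.5]
restricted = corr([rating1[i] for i in keep], [rating2[i] for i in keep])
print(f"full-range r = {full:.2f}, restricted r = {restricted:.2f}")
```

The rating error is unchanged in the restricted subsample; only the between-ratee variance shrinks, which is exactly why generalizability drops for homogeneous samples.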
Albacete, Javier L.; Kovchegov, Yuri V.; Taliotis, Anastasios
2009-03-23
We calculate the total cross section for the scattering of a quark-antiquark dipole on a large nucleus at high energy for a strongly coupled N = 4 super Yang-Mills theory using the AdS/CFT correspondence. We model the nucleus by the metric of a shock wave in AdS_5. We then calculate the expectation value of the Wilson loop (the dipole) by finding the extrema of the Nambu-Goto action for an open string attached to the quark and antiquark lines of the loop in the background of an AdS_5 shock wave. We find two physically meaningful extremal string configurations. For both solutions we obtain the forward scattering amplitude N for the quark dipole-nucleus scattering. We study the onset of unitarity with increasing center-of-mass energy and transverse size of the dipole: we observe that for both solutions the saturation scale Q_s is independent of energy/Bjorken-x and depends on the atomic number of the nucleus as Q_s ~ A^(1/3). Finally we observe that while one of the solutions we found corresponds to the pomeron intercept of alpha_P = 2 found earlier in the literature, when extended to higher energy or larger dipole sizes it violates the black disk limit. The other solution we found respects the black disk limit and yields the pomeron intercept of alpha_P = 1.5. We thus conjecture that the right pomeron intercept in gauge theories at strong coupling may be alpha_P = 1.5.
Enhancing area of review capabilities: Implementing a variance program
De Leon, F.
1995-12-01
The Railroad Commission of Texas (RRC) has regulated oil-field injection well operations since issuing its first injection permit in 1938. The Environmental Protection Agency (EPA) granted the RRC primary enforcement responsibility for the Class II Underground Injection Control (UIC) Program in April 1982. At that time, the added level of groundwater protection afforded by an Area of Review (AOR) on previously permitted Class II wells was not deemed necessary or cost effective. A proposed EPA rule change will require AORs to be performed on all pre-primacy Class II wells unless a variance can be justified. A variance methodology has been developed by researchers at the University of Missouri-Rolla in conjunction with the American Petroleum Institute (API). This paper will outline the RRC approach to implementing the AOR variance methodology. The RRC's UIC program tracks 49,256 pre-primacy wells. Approximately 25,598 of these wells have active permits and will be subject to the proposed AOR requirements. The potential workload of performing AORs or granting variances for this many wells makes the development of a Geographic Information System (GIS) imperative. The RRC has recently completed a digitized map of the entire state and has spotted 890,000 of an estimated 1.2 million wells. Integrating this digital state map into a GIS will allow the RRC to tie its many data systems together. Once in place, this integrated data system will be used to evaluate AOR variances for pre-primacy wells on a field-wide basis. It will also reduce the regulatory cost of permitting by allowing the RRC staff to perform AORs or grant variances for the approximately 3,000 new and amended permit applications requiring AORs each year.
Infinite variance in fermion quantum Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Shi, Hao; Zhang, Shiwei
2016-03-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
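The failure mode the abstract describes can be demonstrated outside QMC with a hedged toy estimator (unrelated to the paper's algorithms): for U ~ Uniform(0,1), the statistic X = 1/(2*sqrt(U)) has E[X] = 1 but infinite variance, so the naive Monte Carlo error bar is unreliable.

```python
# Toy infinite-variance Monte Carlo estimator (illustrative only).
# E[1/(2*sqrt(U))] = 1 for U ~ Uniform(0,1), but E[X^2] diverges.
import random

random.seed(2)

def run(n):
    xs = [1.0 / (2.0 * random.random() ** 0.5) for _ in range(n)]
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / (n - 1)
    return m, (s2 / n) ** 0.5   # estimate and its naive standard error

for n in (10_000, 100_000, 1_000_000):
    m, se = run(n)
    print(n, round(m, 3), round(se, 4))
# The estimated standard error does not shrink reliably like 1/sqrt(n):
# a single sample with u near 0 can dominate it, so the reported error
# bar understates or misstates the true uncertainty.
```

This is the qualitative problem the authors address: the estimate itself converges, but the empirical error bar computed alongside it is not trustworthy.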
Simulation testing of unbiasedness of variance estimators
Link, W.A.
1993-01-01
In this article I address the evaluation of estimators of variance for parameter estimates. Given an unbiased estimator X of a parameter θ, and an estimator V of the variance of X, how does one test (via simulation) whether V is an unbiased estimator of the variance of X? The derivation of the test statistic illustrates the need for care in substituting consistent estimators for unknown parameters.
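The simulation idea can be sketched in a few lines (details differ from the article): repeatedly draw samples, average the candidate variance estimator V = s²/n, and compare that average with the empirical variance of the sample means, which both target Var(X̄) = σ²/n.

```python
# Simulation check of variance-estimator unbiasedness (sketch, not the
# article's test statistic). Here sigma = 2, n = 20, so Var(mean) = 4/20 = 0.2.
import random

random.seed(3)
n, reps = 20, 20_000
means, v_hats = [], []
for _ in range(reps):
    xs = [random.gauss(10, 2) for _ in range(n)]
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / (n - 1)  # unbiased sample variance
    means.append(m)
    v_hats.append(s2 / n)                          # candidate estimator of Var(mean)

mm = sum(means) / reps
emp_var_of_mean = sum((m - mm) ** 2 for m in means) / (reps - 1)
avg_v = sum(v_hats) / reps
# Both quantities should be close to the true value 0.2.
print(round(emp_var_of_mean, 4), round(avg_v, 4))
```

As the abstract cautions, a formal test must account for the Monte Carlo noise in both quantities rather than eyeballing the agreement.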
Alzheimer's disease: Unique markers for diagnosis & new treatment modalities
Aggarwal, Neelum T.; Shah, Raj C.; Bennett, David A.
2015-01-01
Alzheimer's disease (AD) is a progressive and irreversible neurodegenerative disease. In humans, AD becomes symptomatic only after brain changes occur over years or decades. Three contiguous phases of AD have been proposed: (i) the AD pathophysiologic process, (ii) mild cognitive impairment due to AD, and (iii) AD dementia. Intensive research continues around the world on unique diagnostic markers and interventions associated with each phase of AD. In this review, we summarize the available evidence and new therapeutic approaches that target both amyloid and tau pathology in AD and discuss the biomarkers and pharmaceutical interventions available and in development for each AD phase. PMID:26609028
Perspective projection for variance pose face recognition from camera calibration
NASA Astrophysics Data System (ADS)
Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.
2016-04-01
Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance-pose face features is a challenging problem. We provide a solution to this problem using perspective projection for variance-pose face recognition. Our method infers the intrinsic camera parameters of the image, which enables the projection of the image plane into 3D. After this, face box tracking and centre-of-eyes detection can be performed using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. Training on frontal images and the remaining poses in the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms, thus enabling stable measurement in variance pose for each individual.
Code of Federal Regulations, 2011 CFR
2011-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2010 CFR
2010-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Multireader multicase variance analysis for binary data.
Gallas, Brandon D; Pennello, Gene A; Myers, Kyle J
2007-12-01
Multireader multicase (MRMC) variance analysis has become widely utilized to analyze observer studies for which the summary measure is the area under the receiver operating characteristic (ROC) curve. We extend MRMC variance analysis to binary data and also to generic study designs in which every reader may not interpret every case. A subset of the fundamental moments central to MRMC variance analysis of the area under the ROC curve (AUC) is found to be required. Through multiple simulation configurations, we compare our unbiased variance estimates to naïve estimates across a range of study designs, average percent correct, and numbers of readers and cases.
Code of Federal Regulations, 2014 CFR
2014-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2013 CFR
2013-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2012 CFR
2012-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Creativity and technical innovation: spatial ability's unique role.
Kell, Harrison J; Lubinski, David; Benbow, Camilla P; Steiger, James H
2013-09-01
In the late 1970s, 563 intellectually talented 13-year-olds (identified by the SAT as in the top 0.5% of ability) were assessed on spatial ability. More than 30 years later, the present study evaluated whether spatial ability provided incremental validity (beyond the SAT's mathematical and verbal reasoning subtests) for differentially predicting which of these individuals had patents and three classes of refereed publications. A two-step discriminant-function analysis revealed that the SAT subtests jointly accounted for 10.8% of the variance among these outcomes (p < .01); when spatial ability was added, an additional 7.6% was accounted for--a statistically significant increase (p < .01). The findings indicate that spatial ability has a unique role in the development of creativity, beyond the roles played by the abilities traditionally measured in educational selection, counseling, and industrial-organizational psychology. Spatial ability plays a key and unique role in structuring many important psychological phenomena and should be examined more broadly across the applied and basic psychological sciences.
The return of the variance: intraspecific variability in community ecology.
Violle, Cyrille; Enquist, Brian J; McGill, Brian J; Jiang, Lin; Albert, Cécile H; Hulshof, Catherine; Jung, Vincent; Messier, Julie
2012-04-01
Despite being recognized as a promoter of diversity and a condition for local coexistence decades ago, the importance of intraspecific variance has been neglected over time in community ecology. Recently, there has been a new emphasis on intraspecific variability. Indeed, recent developments in trait-based community ecology have underlined the need to integrate variation at both the intraspecific as well as interspecific level. We introduce new T-statistics ('T' for trait), based on the comparison of intraspecific and interspecific variances of functional traits across organizational levels, to operationally incorporate intraspecific variability into community ecology theory. We show that a focus on the distribution of traits at local and regional scales combined with original analytical tools can provide unique insights into the primary forces structuring communities.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) RADIOLOGICAL HEALTH... and Radiological Health, Food and Drug Administration, may grant a variance from one or...
Variance Design and Air Pollution Control
ERIC Educational Resources Information Center
Ferrar, Terry A.; Brownstein, Alan B.
1975-01-01
Air pollution control authorities were forced to relax air quality standards during the winter of 1972 by granting variances. This paper examines the institutional characteristics of these variance policies from an economic incentive standpoint, sets up desirable structural criteria for institutional design and arrives at policy guidelines for…
On Some Representations of Sample Variance
ERIC Educational Resources Information Center
Joarder, Anwar H.
2002-01-01
The usual formula for variance, which depends on a rounded-off sample mean, lacks precision, especially when computer programs are used for the calculation. The well-known simplification of the total sums of squares does not always give a benefit. Since the variance of two observations is easily calculated without the use of a sample mean, and the…
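The two-observation idea mentioned in the abstract generalizes: the sample variance has a mean-free pairwise representation, s² = Σ_{i<j}(x_i − x_j)² / (n(n−1)). A minimal sketch of this standard identity (illustrative code, not taken from the paper):

```python
from itertools import combinations

def variance_pairwise(xs):
    """Sample variance from pairwise differences -- no sample mean required."""
    n = len(xs)
    return sum((a - b) ** 2 for a, b in combinations(xs, 2)) / (n * (n - 1))

def variance_textbook(xs):
    """Usual two-pass formula that first computes the sample mean."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)
```

Because the pairwise form never subtracts a (possibly rounded) mean, it sidesteps the precision loss the abstract describes, at the cost of O(n²) work.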
Save money by understanding variance and tolerancing.
Stuart, K
2007-01-01
Manufacturing processes are inherently variable, which results in component and assembly variance. Unless process capability, variance and tolerancing are fully understood, incorrect design tolerances may be applied, which will lead to more expensive tooling, inflated production costs, high reject rates, product recalls and excessive warranty costs. A methodology is described for correctly allocating tolerances and performing appropriate analyses.
Nonlinear Epigenetic Variance: Review and Simulations
ERIC Educational Resources Information Center
Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.
2010-01-01
We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2012 CFR
2012-01-01
... OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application... that the contractor has taken to inform the affected workers of the application, which must include... application and specifying where a copy may be examined at the place or places where notices to workers...
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2013 CFR
2013-01-01
... OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application... that the contractor has taken to inform the affected workers of the application, which must include... application and specifying where a copy may be examined at the place or places where notices to workers...
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2011 CFR
2011-01-01
... OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application... that the contractor has taken to inform the affected workers of the application, which must include... application and specifying where a copy may be examined at the place or places where notices to workers...
Portfolio optimization with mean-variance model
NASA Astrophysics Data System (ADS)
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve a target rate of return at the minimum level of risk in their investments. Portfolio optimization is an investment strategy that can be used to minimize portfolio risk while achieving the target rate of return. The mean-variance model is an optimization model that minimizes the portfolio risk, defined as the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the composition of the optimal portfolio differs across stocks. Moreover, investors can obtain the return at the minimum level of risk with the constructed optimal mean-variance portfolio.
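In the simplest (no target-return) case, the minimum-variance weights have a closed form, w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A hedged sketch with synthetic returns for five hypothetical stocks (the study itself uses weekly returns of 20 FBMKLCI stocks):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic weekly returns for 5 hypothetical stocks (illustrative only).
returns = rng.normal(0.002, 0.02, size=(104, 5))

cov = np.cov(returns, rowvar=False)   # sample covariance = portfolio risk input
ones = np.ones(cov.shape[0])
w = np.linalg.solve(cov, ones)        # Sigma^{-1} * 1
w /= w.sum()                          # budget constraint: weights sum to 1
port_var = float(w @ cov @ w)         # minimized portfolio variance
```

Adding a target-return constraint on top of the budget constraint turns this into the full Markowitz mean-variance program; by construction, `port_var` is no larger than the variance of any other fully invested portfolio, e.g. the equal-weight one.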
42 CFR 456.525 - Request for renewal of variance.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time... variance to the Administrator at least 30 days before the variance expires. (b) The renewal request...
Portfolio optimization using median-variance approach
NASA Astrophysics Data System (ADS)
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach successfully caters to both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach is capable of producing a lower risk for each return earned as compared to the mean-variance approach.
Component Processes in Reading: Shared and Unique Variance in Serial and Isolated Naming Speed
ERIC Educational Resources Information Center
Logan, Jessica A. R.; Schatschneider, Christopher
2014-01-01
Reading ability is comprised of several component processes. In particular, the connection between the visual and verbal systems has been demonstrated to play an important role in the reading process. The present study provides a review of the existing literature on the visual verbal connection as measured by two tasks, rapid serial naming and…
Commonality Analysis: A Method of Analyzing Unique and Common Variance Proportions.
ERIC Educational Resources Information Center
Kroff, Michael W.
This paper considers the use of commonality analysis as an effective tool for analyzing relationships between variables in multiple regression or canonical correlational analysis (CCA). The merits of commonality analysis are discussed and the procedure for running commonality analysis is summarized as a four-step process. A heuristic example is…
ERIC Educational Resources Information Center
Brotheridge, Celeste M.; Power, Jacqueline L.
2008-01-01
Purpose: This study seeks to examine the extent to which the use of career center services results in the significant incremental prediction of career outcomes beyond its established predictors. Design/methodology/approach: The authors survey the clients of a public agency's career center and use hierarchical multiple regressions in order to…
AdS6 solutions of type II supergravity
NASA Astrophysics Data System (ADS)
Apruzzi, Fabio; Fazzi, Marco; Passias, Achilleas; Rosa, Dario; Tomasiello, Alessandro
2014-11-01
Very few AdS6 × M4 supersymmetric solutions are known: one in massive IIA, and two IIB solutions dual to it. The IIA solution is known to be unique; in this paper, we use the pure spinor approach to give a classification for IIB supergravity. We reduce the problem to two PDEs on a two-dimensional space Σ. M4 is then a fibration of S2 over Σ; the metric and fluxes are completely determined in terms of the solution to the PDEs. The results seem likely to accommodate near-horizon limits of (p, q)-fivebrane webs studied in the literature as a source of CFT5's. We also show that there are no AdS6 solutions in eleven-dimensional supergravity.
Nonorthogonal Analysis of Variance Programs: An Evaluation.
ERIC Educational Resources Information Center
Hosking, James D.; Hamer, Robert M.
1979-01-01
Six computer programs for four methods of nonorthogonal analysis of variance are compared for capabilities, accuracy, cost, transportability, quality of documentation, associated computational capabilities, and ease of use: OSIRIS; SAS; SPSS; MANOVA; BMDP2V; and MULTIVARIANCE. (CTM)
40 CFR 142.41 - Variance request.
Code of Federal Regulations, 2010 CFR
2010-07-01
... and evidence of the best available treatment technology and techniques. (2) Economic and legal factors... water in the case of an excessive rise in the contaminant level for which the variance is requested....
Code of Federal Regulations, 2011 CFR
2011-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2010 CFR
2010-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
7 CFR 205.290 - Temporary variances.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) ORGANIC FOODS PRODUCTION ACT... notify each production or handling operation it certifies to which the temporary variance applies....
Reducing variance in batch partitioning measurements
Mariner, Paul E.
2010-08-11
The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure K{sub d} values (e.g., ASTM D 4646 and EPA 402-R-99-004A) explain neither how to evaluate measurement uncertainty nor how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjustment of the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
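The half-partitioning rule can be illustrated with a toy Monte Carlo (illustrative assumptions: unit solution:sorbent ratio and constant absolute noise on both concentration measurements; this is not the paper's simulation setup). With Kd proportional to (C0 − Cf)/Cf, the relative spread of Kd is smallest when roughly half the sorbate partitions to the sorbent:

```python
import numpy as np

def kd_relative_spread(f_sorbed, sigma=0.01, n=20000, seed=1):
    """Monte Carlo relative spread of a batch-test Kd estimate when a fraction
    f_sorbed of the sorbate partitions to the sorbent, with constant absolute
    noise sigma on both concentration readings (arbitrary units)."""
    rng = np.random.default_rng(seed)
    c0_true, v_over_m = 1.0, 1.0                 # illustrative values only
    cf_true = c0_true * (1.0 - f_sorbed)         # final solution concentration
    c0 = c0_true + rng.normal(0.0, sigma, n)     # noisy initial reading
    cf = cf_true + rng.normal(0.0, sigma, n)     # noisy final reading
    kd = (c0 - cf) / cf * v_over_m               # batch Kd estimate
    return float(np.std(kd) / np.mean(kd))
```

Evaluating at f_sorbed of 0.1, 0.5, and 0.9 shows the mid-range design yielding the tightest Kd distribution, consistent with the abstract's recommendation.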
Code of Federal Regulations, 2010 CFR
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Variances. 307.22 Section 307.22 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC... Federal, State and local law....
Code of Federal Regulations, 2013 CFR
2013-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2012 CFR
2012-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2014 CFR
2014-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Variance Components in Discrete Force Production Tasks
SKM, Varadhan; Zatsiorsky, Vladimir M.; Latash, Mark L.
2010-01-01
The study addresses the relationships between task parameters and two components of variance, “good” and “bad”, during multi-finger accurate force production. The variance components are defined in the space of commands to the fingers (finger modes) and refer to variance that does (“bad”) and does not (“good”) affect total force. Based on an earlier study of cyclic force production, we hypothesized that speeding up an accurate force production task would be accompanied by a drop in the regression coefficient linking the “bad” variance and force rate such that variance of the total force remains largely unaffected. We also explored changes in parameters of anticipatory synergy adjustments with speeding up of the task. The subjects produced accurate ramps of total force over different times and in different directions (force-up and force-down) while pressing with the four fingers of the right hand on individual force sensors. The two variance components were quantified, and their normalized difference was used as an index of a total force stabilizing synergy. “Good” variance scaled linearly with force magnitude and did not depend on force rate. “Bad” variance scaled linearly with force rate within each task, and the scaling coefficient did not change across tasks with different ramp times. As a result, a drop in force ramp time was associated with an increase in total force variance, unlike the results of the study of cyclic tasks. The synergy index dropped 100-200 ms prior to the first visible signs of force change. The timing and magnitude of these anticipatory synergy adjustments did not depend on the ramp time. Analysis of the data within an earlier model has shown adjustments in the variance of a timing parameter, although these adjustments were not as pronounced as in the earlier study of cyclic force production. Overall, we observed qualitative differences between the discrete and cyclic force production tasks: Speeding up the cyclic
Variational bayesian method of estimating variance components.
Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi
2016-07-01
We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and small population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian method and Gibbs sampling were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances from the variational Bayesian method were lower than those from Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.
Variance estimation for nucleotide substitution models.
Chen, Weishan; Wang, Hsiuying
2015-09-01
The current variance estimators for most evolutionary models were derived by approximating the nucleotide substitution number estimator with a simple first-order Taylor expansion. In this study, we derive three variance estimators for the F81, F84, HKY85 and TN93 nucleotide substitution models. They are obtained using the second-order Taylor expansion of the substitution number estimator, the first-order Taylor expansion of a squared deviation, and the second-order Taylor expansion of a squared deviation, respectively. These variance estimators are compared with the existing variance estimator in a simulation study, which shows that the estimator derived using the second-order Taylor expansion of a squared deviation is more accurate than the other three estimators. In addition, we also compare these estimators with an estimator derived by the bootstrap method. The simulation shows that the performance of this bootstrap estimator is similar to that of the estimator derived by the second-order Taylor expansion of a squared deviation. Since the latter has an explicit form, it is more efficient than the bootstrap estimator.
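The first- and second-order Taylor (delta-method) approximations underlying these estimators can be sketched generically for a substitution number estimator d̂ = g(p̂), assuming p̂ approximately normal (these are the generic forms, not the paper's model-specific formulas):

```latex
% First-order (delta-method) variance approximation:
\operatorname{Var}\bigl(g(\hat{p})\bigr)
  \approx \bigl(g'(p)\bigr)^{2}\,\operatorname{Var}(\hat{p})
% Second-order refinement (normal approximation) adds a curvature term:
\operatorname{Var}\bigl(g(\hat{p})\bigr)
  \approx \bigl(g'(p)\bigr)^{2}\operatorname{Var}(\hat{p})
  + \tfrac{1}{2}\bigl(g''(p)\bigr)^{2}\bigl(\operatorname{Var}(\hat{p})\bigr)^{2}
```

The paper's contribution is working out such expansions for the specific transformations g defining the F81, F84, HKY85 and TN93 distance estimators, and for the squared deviation itself.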
A noise variance estimation approach for CT
NASA Astrophysics Data System (ADS)
Shen, Le; Jin, Xin; Xing, Yuxiang
2012-10-01
The Poisson-like noise model has been widely used for noise suppression and image reconstruction in low-dose computed tomography. Various noise estimation and suppression approaches have been developed and studied to enhance image quality. Among them, the recently proposed generalized Anscombe transform (GAT) has been utilized to stabilize the variance of Poisson-Gaussian noise. In this paper, we present a variance estimation approach using the GAT. After the transform, the projection data are denoised conventionally under the assumption that the noise variance is uniformly equal to 1. The difference between the original and the denoised projections is treated as pure noise, and the global variance σ2 can be estimated from this residual difference. The final denoising step is then performed with the estimated σ2. The proposed approach is verified on a cone-beam CT system and demonstrated to obtain a more accurate estimate of the actual parameter. We also examine an FBP algorithm with the two-step noise suppression in the projection domain using the estimated noise variance. Reconstruction results with simulated and practical projection data suggest that the presented approach could be effective in practical imaging applications.
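The variance-stabilizing behavior that the GAT relies on can be checked numerically in its Poisson-only special case, the classical Anscombe transform 2·sqrt(x + 3/8) (a minimal sketch; the paper's GAT additionally handles the Gaussian noise component):

```python
import numpy as np

def anscombe(x):
    """Classical Anscombe transform: the Poisson-only special case of the
    generalized Anscombe transform (GAT). For Poisson counts x, the
    transformed data have variance close to 1, regardless of the rate."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

rng = np.random.default_rng(0)
raw = {lam: rng.poisson(lam, 200000) for lam in (5, 20, 100)}
stabilized_var = {lam: float(np.var(anscombe(x))) for lam, x in raw.items()}
# Raw variance grows with the rate (~5, ~20, ~100);
# the stabilized variance stays near 1 for every rate.
```

This is what licenses the paper's denoising step "with an assumption that the noise variance is uniformly equal to 1" after the transform.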
NASA Astrophysics Data System (ADS)
Callebaut, Nele; Gubser, Steven S.; Samberg, Andreas; Toldo, Chiara
2015-11-01
We study segmented strings in flat space and in AdS3. In flat space, these well-known classical motions describe strings which at any instant of time are piecewise linear. In AdS3, the worldsheet is composed of faces, each of which is a region bounded by null geodesics in an AdS2 subspace of AdS3. The time evolution can be described by specifying the null geodesic motion of kinks in the string at which two segments are joined. The outcome of collisions of kinks on the worldsheet can be worked out essentially using considerations of causality. We study several examples of closed segmented strings in AdS3 and find an unexpected quasi-periodic behavior. We also work out a WKB analysis of quantum states of yo-yo strings in AdS5 and find a logarithmic term reminiscent of the logarithmic twist of string states on the leading Regge trajectory.
NASA Astrophysics Data System (ADS)
Costa, Miguel S.; Greenspan, Lauren; Oliveira, Miguel; Penedones, João; Santos, Jorge E.
2016-06-01
We consider solutions in Einstein-Maxwell theory with a negative cosmological constant that asymptote to global AdS4 with conformal boundary S^2 × R_t. At the sphere at infinity we turn on a space-dependent electrostatic potential, which does not destroy the asymptotic AdS behaviour. For simplicity we focus on the case of a dipolar electrostatic potential. We find two new geometries: (i) an AdS soliton that includes the full backreaction of the electric field on the AdS geometry; (ii) a polarised neutral black hole that is deformed by the electric field, accumulating opposite charges in each hemisphere. For both geometries we study boundary data such as the charge density and the stress tensor. For the black hole we also study the horizon charge density and area, and further verify a Smarr formula. Then we consider this system at finite temperature and compute the Gibbs free energy for both the AdS soliton and black hole phases. The corresponding phase diagram generalizes the Hawking-Page phase transition. The AdS soliton dominates the low temperature phase and the black hole the high temperature phase, with a critical temperature that decreases as the external electric field increases. Finally, we consider the simple case of a free charged scalar field on S^2 × R_t with conformal coupling. For a field in the SU(N) adjoint representation we compare the phase diagram with the above gravitational system.
Integrating Variances into an Analytical Database
NASA Technical Reports Server (NTRS)
Sanchez, Carlos
2010-01-01
For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make them easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance had been bypassed many times already, so the requirement may not really be needed and should instead be changed to allow the variance's conditions permanently. The project was not restricted to the design and development of the database system; it also involved exporting the data from the database to a different format (e.g., Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.
Wave propagation analysis using the variance matrix.
Sharma, Richa; Ivan, J Solomon; Narayanamurthy, C S
2014-10-01
The propagation of a coherent laser wave-field through a pseudo-random phase plate is studied using the variance matrix estimated from Shack-Hartmann wavefront sensor data. The uncertainty principle is used as a tool in discriminating the data obtained from the Shack-Hartmann wavefront sensor. Quantities of physical interest such as the twist parameter, and the symplectic eigenvalues, are estimated from the wavefront sensor measurements. A distance measure between two variance matrices is introduced and used to estimate the spatial asymmetry of a wave-field in the experiment. The estimated quantities are then used to compare a distorted wave-field with its undistorted counterpart. PMID:25401243
GR uniqueness and deformations
NASA Astrophysics Data System (ADS)
Krasnov, Kirill
2015-10-01
In the metric formulation, gravitons are described by the parity-symmetric S^2_+ ⊗ S^2_- representation of the Lorentz group. General Relativity is then the unique theory of interacting gravitons with second-order field equations. We show that if a chiral S^3_+ ⊗ S_- representation is used instead, the uniqueness is lost, and there is an infinite-parametric family of theories of interacting gravitons with second-order field equations. We use the language of graviton scattering amplitudes, and show how the uniqueness of GR is avoided using simple dimensional analysis. The resulting gravity theories, all distinct from GR, are parity asymmetric but share the GR MHV amplitudes. They have new all-same-helicity graviton scattering amplitudes at every graviton order. The amplitudes with at least one graviton of opposite helicity continue to be determinable by the BCFW recursion.
Code of Federal Regulations, 2010 CFR
2010-01-01
... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Code of Federal Regulations, 2012 CFR
2012-01-01
... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Code of Federal Regulations, 2013 CFR
2013-01-01
... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Code of Federal Regulations, 2014 CFR
2014-01-01
... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Code of Federal Regulations, 2011 CFR
2011-01-01
... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Regression Calibration with Heteroscedastic Error Variance
Spiegelman, Donna; Logan, Roger; Grove, Douglas
2011-01-01
The problem of covariate measurement error with heteroscedastic measurement error variance is considered. Standard regression calibration assumes that the measurement error has a homoscedastic variance. An estimator is proposed to correct regression coefficients for covariate measurement error with heteroscedastic variance. Point and interval estimates are derived. Validation data containing the gold standard must be available. This estimator is a closed-form correction of the uncorrected primary regression coefficients, which may be of logistic or Cox proportional hazards model form, and is closely related to the version of regression calibration developed by Rosner et al. (1990). The primary regression model can include multiple covariates measured without error. The use of these estimators is illustrated in two data sets, one taken from occupational epidemiology (the ACE study) and one taken from nutritional epidemiology (the Nurses’ Health Study). In both cases, although there was evidence of moderate heteroscedasticity, there was little difference in estimation or inference using this new procedure compared to standard regression calibration. It is shown theoretically that unless the relative risk is large or measurement error severe, standard regression calibration approximations will typically be adequate, even with moderate heteroscedasticity in the measurement error model variance. In a detailed simulation study, standard regression calibration performed either as well as or better than the new estimator. When the disease is rare and the errors normally distributed, or when measurement error is moderate, standard regression calibration remains the method of choice. PMID:22848187
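The standard (homoscedastic) regression calibration that this estimator extends can be sketched in its simplest linear form: the naive slope from the error-prone covariate is divided by an attenuation factor estimated from validation data (an illustrative simulation of the textbook idea, not the paper's heteroscedastic estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 50_000, 2.0
x = rng.normal(size=n)             # true covariate (gold standard, validation data)
w = x + rng.normal(size=n)         # surrogate measured with homoscedastic error
y = beta * x + rng.normal(size=n)  # outcome from a linear primary model

def slope(a, b):
    """OLS slope of b regressed on a."""
    a0, b0 = a - a.mean(), b - b.mean()
    return float((a0 * b0).sum() / (a0 * a0).sum())

naive = slope(w, y)       # attenuated toward zero: ~ beta * lambda
lam = slope(w, x)         # attenuation factor from the validation data
corrected = naive / lam   # regression calibration estimate, ~ beta
```

Here Var(x) = Var(error) = 1, so lambda is about 0.5 and the naive slope recovers only about half of beta; the calibrated ratio undoes the attenuation.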
Code of Federal Regulations, 2010 CFR
2010-04-01
... 18 Conservation of Power and Water Resources 2 2010-04-01 2010-04-01 false Variances. 1304.408 Section 1304.408 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY APPROVAL OF... whether a proposed structure or other regulated activity would adversely impact navigation, flood...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 18 Conservation of Power and Water Resources 2 2011-04-01 2011-04-01 false Variances. 1304.408 Section 1304.408 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY APPROVAL OF... whether a proposed structure or other regulated activity would adversely impact navigation, flood...
Multiple Comparison Procedures when Population Variances Differ.
ERIC Educational Resources Information Center
Olejnik, Stephen; Lee, JaeShin
A review of the literature on multiple comparison procedures suggests several alternative approaches for comparing means when population variances differ. These include: (1) the approach of P. A. Games and J. F. Howell (1976); (2) C. W. Dunnett's C confidence interval (1980); and (3) Dunnett's T3 solution (1980). These procedures control the…
Variance Anisotropy of Solar Wind Fluctuations
NASA Astrophysics Data System (ADS)
Oughton, S.; Matthaeus, W. H.; Wan, M.; Osman, K.
2013-12-01
Solar wind observations at MHD scales indicate that the energy associated with velocity and magnetic field fluctuations transverse to the mean magnetic field is typically much larger than that associated with parallel fluctuations [e.g., 1]. This is often referred to as variance anisotropy. Various explanations for it have been suggested, including that the fluctuations are predominantly shear Alfven waves [1] and that turbulent dynamics leads to such states [e.g., 2]. Here we investigate the origin and strength of such variance anisotropies, using spectral method simulations of the compressible (polytropic) 3D MHD equations. We report on results from runs with initial conditions that are either (i) broadband turbulence or (ii) fluctuations polarized in the same sense as shear Alfven waves. The dependence of the variance anisotropy on the plasma beta and Mach number is examined [3], along with the timescale for any variance anisotropy to develop. Implications for solar wind fluctuations will be discussed. References: [1] Belcher, J. W. and Davis Jr., L. (1971), J. Geophys. Res., 76, 3534. [2] Matthaeus, W. H., Ghosh, S., Oughton, S. and Roberts, D. A. (1996), J. Geophys. Res., 101, 7619. [3] Smith, C. W., B. J. Vasquez and K. Hamilton (2006), J. Geophys. Res., 111, A09111.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) or 6(d) of the Williams-Steiger Occupational Safety and Health Act of 1970 (29 U.S.C. 655). The... under the Williams-Steiger Occupational Safety and Health Act of 1970, and any variance from §§ 1910.13... from the standard under both the Longshoremen's and Harbor Workers' Compensation Act and the...
7 CFR 205.290 - Temporary variances.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Temporary variances. 205.290 Section 205.290 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) ORGANIC FOODS PRODUCTION...
Number variance for arithmetic hyperbolic surfaces
NASA Astrophysics Data System (ADS)
Luo, W.; Sarnak, P.
1994-03-01
We prove that the number variance for the spectrum of an arithmetic surface is highly nonrigid in part of the universal range. In fact it is close to having a Poisson behavior. This fact was discovered numerically by Schmit, Bogomolny, Georgeot and Giannoni. It has its origin in the high degeneracy of the length spectrum, first observed by Selberg.
7 CFR 205.290 - Temporary variances.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Temporary variances. 205.290 Section 205.290 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) ORGANIC FOODS PRODUCTION ACT PROVISIONS NATIONAL ORGANIC PROGRAM...
Formative Use of Intuitive Analysis of Variance
ERIC Educational Resources Information Center
Trumpower, David L.
2013-01-01
Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, students' IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In…
Code of Federal Regulations, 2011 CFR
2011-04-01
...' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR SPECIAL RESPONSIBILITIES OF THE EMPLOYMENT SERVICE SYSTEM Housing for Agricultural Workers Purpose and Applicability § 654.402 Variances. (a... employment service complaint procedures set forth at §§ 658.421 (i) and (j), 658.422 and 658.423 of...
78 FR 14122 - Revocation of Permanent Variances
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-04
... OSHA's scaffolds standards for construction (77 FR 46948). Today's notice revoking the variances takes..., construction, and use of scaffolds (61 FR 46026). In the preamble to the final rule, OSHA stated that it was... for tank scaffolds under the general provisions of the final rule (see 61 FR 46033). In this...
ERIC Educational Resources Information Center
Goble, Don
2009-01-01
This article describes the many learning opportunities that broadcast technology students at Ladue Horton Watkins High School in St. Louis, Missouri, experience because of their unique access to technology and methods of learning. Through scaffolding, stepladder techniques, and trial by fire, students learn to produce multiple television programs,…
Estimating Modifying Effect of Age on Genetic and Environmental Variance Components in Twin Models.
He, Liang; Sillanpää, Mikko J; Silventoinen, Karri; Kaprio, Jaakko; Pitkäniemi, Janne
2016-04-01
Twin studies have been adopted for decades to disentangle the relative genetic and environmental contributions for a wide range of traits. However, heritability estimation based on the classical twin models does not take into account dynamic behavior of the variance components over age. Varying variance of the genetic component over age can imply the existence of gene-environment (G×E) interactions that general genome-wide association studies (GWAS) fail to capture, which may lead to the inconsistency of heritability estimates between twin design and GWAS. Existing parametric G×E interaction models for twin studies are limited by assuming a linear or quadratic form of the variance curves with respect to a moderator, which can, however, be overly restrictive in reality. Here we propose spline-based approaches to explore the variance curves of the genetic and environmental components. We choose the additive genetic, common, and unique environmental variance components (ACE) model as the starting point. We treat the component variances as variance functions with respect to age modeled by B-splines or P-splines. We develop an empirical Bayes method to estimate the variance curves together with their confidence bands and provide an R package for public use. Our simulations demonstrate that the proposed methods accurately capture dynamic behavior of the component variances in terms of mean square errors with a data set of >10,000 twin pairs. Using the proposed methods as an alternative and major extension to the classical twin models, our analyses with a large-scale Finnish twin data set (19,510 MZ twins and 27,312 DZ same-sex twins) discover that the variances of the A, C, and E components for body mass index (BMI) change substantially across life span in different patterns and the heritability of BMI drops to ∼50% after middle age. The results further indicate that the decline of heritability is due to increasing unique environmental variance, which provides more
NASA Astrophysics Data System (ADS)
Morales, Jose F.; Samtleben, Henning
2003-06-01
We review recent work on the holographic duals of type II and heterotic matrix string theories described by warped AdS3 supergravities. In particular, we compute the spectra of Kaluza-Klein primaries for type I, II supergravities on warped AdS3 × S7 and match them with the primary operators in the dual two-dimensional gauge theories. The presence of non-trivial warp factors and dilaton profiles requires a modification of the familiar dictionary between masses and 'scaling' dimensions of fields and operators. We present these modifications for the general case of domain wall/QFT correspondences between supergravities on warped AdSd+1 × Sq geometries and super Yang-Mills theories with 16 supercharges.
42 CFR 456.521 - Conditions for granting variance requests.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time... submits concurrently— (1) A request for the variance that documents to his satisfaction that the facility is unable to meet the time requirements for which the variance is requested; and (2) A revised...
Variance Reduction Factor of Nuclear Data for Integral Neutronics Parameters
Chiba, G.; Tsuji, M.; Narabayashi, T.
2015-01-15
We propose a new quantity, a variance reduction factor, to identify nuclear data for which further improvements are required to reduce uncertainties of target integral neutronics parameters. Important energy ranges can also be identified with this variance reduction factor. Variance reduction factors are calculated for several integral neutronics parameters. The usefulness of the variance reduction factors is demonstrated.
Analysis of Variance of Multiply Imputed Data.
van Ginkel, Joost R; Kroonenberg, Pieter M
2014-01-01
As a procedure for handling missing data, multiple imputation consists of estimating the missing data multiple times to create several complete versions of an incomplete data set. All these data sets are analyzed by the same statistical procedure, and the results are pooled for interpretation. So far, no explicit rules for pooling F-tests of (repeated-measures) analysis of variance have been defined. In this paper we outline the appropriate procedure for the results of analysis of variance for multiply imputed data sets. It involves both reformulation of the ANOVA model as a regression model using effect coding of the predictors and applying already existing combination rules for regression models. The proposed procedure is illustrated using three example data sets. The pooled results of these three examples provide plausible F- and p-values.
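The "already existing combination rules" the abstract invokes are Rubin's rules for pooling a per-imputation estimate (e.g., one effect-coded regression coefficient) and its variance across the m completed data sets. A minimal sketch, with made-up numbers rather than any of the paper's example data:

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool a parameter estimated on m imputed data sets (Rubin's rules).

    estimates, variances: length-m sequences of per-imputation point
    estimates and their squared standard errors.
    Returns the pooled estimate and its total variance.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    qbar = estimates.mean()             # pooled point estimate
    ubar = variances.mean()             # within-imputation variance
    b = estimates.var(ddof=1)           # between-imputation variance
    t = ubar + (1 + 1 / m) * b          # total variance
    return qbar, t

# Toy example: one effect-coded contrast estimated on m = 3 imputations
est, var = pool_rubin([0.52, 0.48, 0.55], [0.010, 0.012, 0.011])
```

The pooled F-test in the paper goes further (it pools whole sets of effect-coded coefficients), but each coefficient's pooling follows this pattern.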
Analysis of variance of microarray data.
Ayroles, Julien F; Gibson, Greg
2006-01-01
Analysis of variance (ANOVA) is an approach used to identify differentially expressed genes in complex experimental designs. It is based on testing for the significance of the magnitude of effect of two or more treatments taking into account the variance within and between treatment classes. ANOVA is a highly flexible analytical approach that allows investigators to simultaneously assess the contributions of multiple factors to gene expression variation, including technical (dye, batch) effects and biological (sex, genotype, drug, time) ones, as well as interactions between factors. This chapter provides an overview of the theory of linear mixed modeling and the sequence of steps involved in fitting gene-specific models and discusses essential features of experimental design. Commercial and open-source software for performing ANOVA is widely available.
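A stripped-down version of the gene-specific ANOVA idea (a single treatment factor only; the chapter's mixed-model machinery with dye/batch effects is not reproduced here) might look like:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Toy expression matrix: 100 genes x 9 arrays, 3 treatment classes of 3 arrays
n_genes = 100
groups = [slice(0, 3), slice(3, 6), slice(6, 9)]
expr = rng.normal(size=(n_genes, 9))
expr[:10, 6:9] += 2.0        # genes 0-9 respond to the third treatment

# Gene-specific one-way ANOVA: one F-test per row of the matrix
pvals = np.array([f_oneway(*(row[g] for g in groups)).pvalue
                  for row in expr])
hits = np.where(pvals < 0.01)[0]   # candidate differentially expressed genes
```

In practice the per-gene p-values would also need multiple-testing correction (e.g., FDR control), which this sketch omits.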
PHD filtering with localised target number variance
NASA Astrophysics Data System (ADS)
Delande, Emmanuel; Houssineau, Jérémie; Clark, Daniel
2013-05-01
Mahler's Probability Hypothesis Density (PHD) filter, proposed in 2000, addresses the challenges of the multiple-target detection and tracking problem by propagating a mean density of the targets in any region of the state space. However, when retrieving some local evidence on the target presence becomes a critical component of a larger process - e.g. for sensor management purposes - the local target number is insufficient unless some confidence in the estimate of the number of targets can be provided as well. In this paper, we propose a first implementation of a PHD filter that also includes an estimate of the localised variance in the target number following each update step; we then illustrate the advantage of the PHD filter + variance on simulated data from a multiple-target scenario.
Uses and abuses of analysis of variance.
Evans, S J
1983-01-01
Analysis of variance is a term often quoted to explain the analysis of data in experiments and clinical trials. The relevance of its methodology to clinical trials is shown and an explanation of the principles of the technique is given. The assumptions necessary are examined and the problems caused by their violation are discussed. The dangers of misuse are given with some suggestions for alternative approaches. PMID:6347228
Applications of non-parametric statistics and analysis of variance on sample variances
NASA Technical Reports Server (NTRS)
Myers, R. H.
1981-01-01
Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to offer recommendations as to when each method is applicable, and to compare the methods, when possible, with the usual normal-theory procedures available for the Gaussian analog. It is important to point out the hypotheses being tested, the assumptions being made, and the limitations of the nonparametric procedures. The appropriateness of performing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface it would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.
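One robust alternative to normal-theory comparison of sample variances, relevant to the normality concern raised above, is Levene's test: effectively an ANOVA on absolute deviations from each group's center. A brief sketch contrasting it with the normal-theory Bartlett test, using simulated data:

```python
import numpy as np
from scipy.stats import bartlett, levene

rng = np.random.default_rng(1)

# Three simulated conditions; the third has a clearly larger spread
a = rng.normal(0, 1.0, 50)
b = rng.normal(0, 1.0, 50)
c = rng.normal(0, 2.5, 50)

# Bartlett's test is the classical normal-theory variance comparison;
# Levene's test (ANOVA on |x - group median|) is the robust analogue
stat_b, p_b = bartlett(a, b, c)
stat_l, p_l = levene(a, b, c, center='median')
```

With heavy-tailed data, Bartlett's test can reject far too often even when the variances are equal, while the median-centered Levene test keeps closer to its nominal level; that is the kind of trade-off the survey above weighs.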
Hypothesis exploration with visualization of variance
2014-01-01
Background The Consortium for Neuropsychiatric Phenomics (CNP) at UCLA was an investigation into the biological bases of traits such as memory and response inhibition phenotypes—to explore whether they are linked to syndromes including ADHD, Bipolar disorder, and Schizophrenia. An aim of the consortium was in moving from traditional categorical approaches for psychiatric syndromes towards more quantitative approaches based on large-scale analysis of the space of human variation. It represented an application of phenomics—wide-scale, systematic study of phenotypes—to neuropsychiatry research. Results This paper reports on a system for exploration of hypotheses in data obtained from the LA2K, LA3C, and LA5C studies in CNP. ViVA is a system for exploratory data analysis using novel mathematical models and methods for visualization of variance. An example of these methods is called VISOVA, a combination of visualization and analysis of variance, with the flavor of exploration associated with ANOVA in biomedical hypothesis generation. It permits visual identification of phenotype profiles—patterns of values across phenotypes—that characterize groups. Visualization enables screening and refinement of hypotheses about variance structure of sets of phenotypes. Conclusions The ViVA system was designed for exploration of neuropsychiatric hypotheses by interdisciplinary teams. Automated visualization in ViVA supports ‘natural selection’ on a pool of hypotheses, and permits deeper understanding of the statistical architecture of the data. Large-scale perspective of this kind could lead to better neuropsychiatric diagnostics. PMID:25097666
Mixed emotions: Sensitivity to facial variance in a crowd of faces.
Haberman, Jason; Lee, Pegan; Whitney, David
2015-01-01
The visual system automatically represents summary information from crowds of faces, such as the average expression. This is a useful heuristic insofar as it provides critical information about the state of the world, not simply information about the state of one individual. However, the average alone is not sufficient for making decisions about how to respond to a crowd. The variance or heterogeneity of the crowd--the mixture of emotions--conveys information about the reliability of the average, essential for determining whether the average can be trusted. Despite its importance, the representation of variance within a crowd of faces has yet to be examined. This is addressed here in three experiments. In the first experiment, observers viewed a sample set of faces that varied in emotion, and then adjusted a subsequent set to match the variance of the sample set. To isolate variance as the summary statistic of interest, the average emotion of both sets was random. Results suggested that observers had information regarding crowd variance. The second experiment verified that this was indeed a uniquely high-level phenomenon, as observers were unable to derive the variance of an inverted set of faces as precisely as an upright set of faces. The third experiment replicated and extended the first two experiments using method-of-constant-stimuli. Together, these results show that the visual system is sensitive to emergent information about the emotional heterogeneity, or ambivalence, in crowds of faces. PMID:26676106
Abel, David L.
2011-01-01
Is life physicochemically unique? No. Is life unique? Yes. Life manifests innumerable formalisms that cannot be generated or explained by physicodynamics alone. Life pursues thousands of biofunctional goals, not the least of which is staying alive. Neither physicodynamics, nor evolution, pursue goals. Life is largely directed by linear digital programming and by the Prescriptive Information (PI) instantiated particularly into physicodynamically indeterminate nucleotide sequencing. Epigenomic controls only compound the sophistication of these formalisms. Life employs representationalism through the use of symbol systems. Life manifests autonomy, homeostasis far from equilibrium in the harshest of environments, positive and negative feedback mechanisms, prevention and correction of its own errors, and organization of its components into Sustained Functional Systems (SFS). Chance and necessity—heat agitation and the cause-and-effect determinism of nature’s orderliness—cannot spawn formalisms such as mathematics, language, symbol systems, coding, decoding, logic, organization (not to be confused with mere self-ordering), integration of circuits, computational success, and the pursuit of functionality. All of these characteristics of life are formal, not physical. PMID:25382119
Visual SLAM Using Variance Grid Maps
NASA Technical Reports Server (NTRS)
Howard, Andrew B.; Marks, Tim K.
2011-01-01
An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
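The per-cell elevation variance that Gamma-SLAM's map stores can be maintained incrementally as range measurements arrive. A minimal sketch (not the flight code; the `VarianceGrid` class is hypothetical) using Welford's online algorithm for running mean and variance per grid cell:

```python
import numpy as np

class VarianceGrid:
    """Per-cell running mean/variance of terrain elevation.

    Each observed (cell, elevation) sample updates that cell's running
    statistics via Welford's numerically stable online algorithm.
    """
    def __init__(self, rows, cols):
        self.n = np.zeros((rows, cols))      # samples seen per cell
        self.mean = np.zeros((rows, cols))   # running mean elevation
        self.m2 = np.zeros((rows, cols))     # sum of squared deviations

    def update(self, r, c, z):
        self.n[r, c] += 1
        delta = z - self.mean[r, c]
        self.mean[r, c] += delta / self.n[r, c]
        self.m2[r, c] += delta * (z - self.mean[r, c])

    def variance(self, r, c):
        # population variance of the elevations observed in this cell
        return self.m2[r, c] / self.n[r, c] if self.n[r, c] > 1 else 0.0

grid = VarianceGrid(10, 10)
for z in [1.0, 1.2, 0.8, 1.1]:   # four elevation hits landing in one cell
    grid.update(3, 4, z)
```

Flat, well-observed terrain yields low per-cell variance; rough or vertical structure yields high variance, which is the cue the particle weighting exploits.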
Culverhouse, Robert C.; Saccone, Nancy L.; Stitzel, Jerry A.; Wang, Jen C.; Steinbach, Joseph H.; Goate, Alison M.; Schwantes-An, Tae-Hwi; Grucza, Richard A.; Stevens, Victoria L.; Bierut, Laura J.
2010-01-01
Results from genome-wide association studies of complex traits account for only a modest proportion of the trait variance predicted to be due to genetics. We hypothesize that joint analysis of polymorphisms may account for more variance. We evaluated this hypothesis on a case–control smoking phenotype by examining pairs of nicotinic receptor single-nucleotide polymorphisms (SNPs) using the Restricted Partition Method (RPM) on data from the Collaborative Genetic Study of Nicotine Dependence (COGEND). We found evidence of joint effects that increase explained variance. Four signals identified in COGEND were testable in independent American Cancer Society (ACS) data, and three of the four signals replicated. Our results highlight two important lessons: joint effects that increase the explained variance are not limited to loci displaying substantial main effects, and joint effects need not display a significant interaction term in a logistic regression model. These results suggest that the joint analyses of variants may indeed account for part of the genetic variance left unexplained by single SNP analyses. Methodologies that limit analyses of joint effects to variants that demonstrate association in single SNP analyses, or require a significant interaction term, will likely miss important joint effects. PMID:21079997
Agricultural Education: Value Adding.
ERIC Educational Resources Information Center
Riesenberg, Lou E.; And Others
1989-01-01
This issue develops the theme of "Agricultural Education--Value Adding." The concept value adding has been a staple in the world of agricultural business for describing adding value to a commodity that would profit the producer and the local community. Agricultural education should add value to individuals and society to justify agricultural…
Clarke, Peter; Varghese, Philip; Goldstein, David
2014-12-09
We extend a variance reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. The method is verified through comparisons to Direct Simulation Monte Carlo computations and the computational workload per time step is investigated for the variance reduced method.
Calculating bone-lead measurement variance.
Todd, A C
2000-01-01
The technique of (109)Cd-based X-ray fluorescence (XRF) measurements of lead in bone is well established. A paper by some XRF researchers [Gordon CL, et al. The Reproducibility of (109)Cd-based X-ray Fluorescence Measurements of Bone Lead. Environ Health Perspect 102:690-694 (1994)] presented the currently practiced method for calculating the variance of an in vivo measurement once a calibration line has been established. This paper corrects typographical errors in the method published by those authors; presents a crude estimate of the measurement error that can be acquired without computational peak fitting programs; and draws attention to the measurement error attributable to covariance, an important feature in the construct of the currently accepted method that is flawed under certain circumstances. PMID:10811562
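The covariance term the abstract highlights enters through standard first-order error propagation for the inverse prediction x = (y - b)/m from a fitted calibration line y = m·x + b. A sketch with illustrative numbers (not bone-lead data, and not the paper's corrected formulas themselves):

```python
def inverse_prediction_variance(y, m, b, var_y, var_m, var_b, cov_mb):
    """First-order variance of x = (y - b) / m for a calibration y = m*x + b.

    Standard delta-method propagation. Note the 2*x*cov(m, b) term:
    slope and intercept estimates from one regression are correlated,
    and omitting their covariance is exactly the pitfall flagged above.
    """
    x = (y - b) / m
    var_x = (var_y + var_b + x**2 * var_m + 2 * x * cov_mb) / m**2
    return x, var_x

# Illustrative calibration numbers only
x, var_x = inverse_prediction_variance(
    y=10.0, m=2.0, b=1.0,
    var_y=0.04, var_m=0.01, var_b=0.02, cov_mb=-0.005)
```

Here the negative slope-intercept covariance reduces var_x relative to the naive sum of the other three terms, which is why ignoring it biases the reported measurement uncertainty.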
42 CFR 456.522 - Content of request for variance.
Code of Federal Regulations, 2010 CFR
2010-10-01
... travel time between the remote facility and each facility listed in paragraph (e) of this section; (f..., and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time... perform UR within the time requirements for which the variance is requested and its good faith efforts...
Dynamics of mean-variance-skewness of cumulative crop yield impact temporal yield variance
Technology Transfer Automated Retrieval System (TEKTRAN)
Production risk associated with cropping systems influences farmers’ decisions to adopt a new management practice or a production system. Cumulative yield (CY), temporal yield variance (TYV) and coefficient of variation (CV) were used to assess the risk associated with adopting combinations of new m...
The Parabolic Variance (PVAR): A Wavelet Variance Based on the Least-Square Fit.
Vernotte, Francois; Lenczner, Michel; Bourgeois, Pierre-Yves; Rubiola, Enrico
2016-04-01
This paper introduces the parabolic variance (PVAR), a wavelet variance similar to the Allan variance (AVAR), based on the linear regression (LR) of phase data. The companion article arXiv:1506.05009 [physics.ins-det] details the Ω frequency counter, which implements the LR estimate. The PVAR combines the advantages of AVAR and modified AVAR (MVAR). PVAR is good for long-term analysis because the wavelet spans over 2τ, the same as the AVAR wavelet, and good for short-term analysis because the response to white and flicker PM is 1/τ(3) and 1/τ(2), the same as the MVAR. After setting the theoretical framework, we study the degrees of freedom and the confidence interval for the most common noise types. Then, we focus on the detection of a weak noise process at the transition-or corner-where a faster process rolls off. This new perspective raises the question of which variance detects the weak process with the shortest data record. Our simulations show that PVAR is a fortunate tradeoff. PVAR is superior to MVAR in all cases, exhibits the best ability to divide between fast noise phenomena (up to flicker FM), and is almost as good as AVAR for the detection of random walk and drift. PMID:26571523
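For reference, the plain Allan variance that PVAR is benchmarked against can be computed from phase data in a few lines, using the usual estimator AVAR(τ) = ⟨(x_{i+2m} − 2·x_{i+m} + x_i)²⟩ / (2τ²) with τ = m·τ0. A sketch on synthetic white-FM noise (the 1/τ slope case):

```python
import numpy as np

def avar(phase, tau0, m=1):
    """Overlapping Allan variance from phase samples spaced tau0 apart."""
    tau = m * tau0
    x = np.asarray(phase, dtype=float)
    d2 = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]   # second differences at lag m
    return np.mean(d2**2) / (2 * tau**2)

# White FM noise: fractional-frequency samples, then integrate to phase
rng = np.random.default_rng(2)
freq = rng.normal(0, 1e-9, 100_000)                 # tau0 = 1 s samples
phase = np.concatenate(([0.0], np.cumsum(freq)))    # x(t) in seconds

a1 = avar(phase, 1.0, m=1)    # AVAR at tau = 1 s
a10 = avar(phase, 1.0, m=10)  # AVAR at tau = 10 s
```

For white FM, AVAR falls as 1/τ, so a10 should come out roughly a tenth of a1; PVAR and MVAR differ from AVAR mainly in their steeper (1/τ² and 1/τ³) responses to flicker and white PM.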
NASA Astrophysics Data System (ADS)
Ammon, Martin; Erdmenger, Johanna; Meyer, René; O'Bannon, Andy; Wrase, Timm
2009-11-01
Aharony, Bergman, Jafferis, and Maldacena have proposed that the low-energy description of multiple M2-branes at a C^4/Z_k singularity is a (2+1)-dimensional N = 6 supersymmetric U(Nc) × U(Nc) Chern-Simons matter theory, the ABJM theory. In the large-Nc limit, its holographic dual is supergravity in AdS4 × S7/Z_k. We study various ways to add fields that transform in the fundamental representation of the gauge groups, i.e. flavor fields, to the ABJM theory. We work in a probe limit and perform analyses in both the supergravity and field theory descriptions. In the supergravity description we find a large class of supersymmetric embeddings of probe flavor branes. In the field theory description, we present a general method to determine the couplings of the flavor fields to the fields of the ABJM theory. We then study four examples in detail: codimension-zero N = 3 supersymmetric flavor, described in supergravity by Kaluza-Klein monopoles or D6-branes; codimension-one N = (0,6) supersymmetric chiral flavor, described by D8-branes; codimension-one N = (3,3) supersymmetric non-chiral flavor, described by M5/D4-branes; codimension-two N = 4 supersymmetric flavor, described by M2/D2-branes. Finally we discuss special physical equivalences between brane embeddings in M-theory, and their interpretation in the field theory description.
NASA Astrophysics Data System (ADS)
Adamo, Tim; Skinner, David; Williams, Jack
2016-08-01
We consider the application of twistor theory to five-dimensional anti-de Sitter space. The twistor space of AdS5 is the same as the ambitwistor space of the four-dimensional conformal boundary; the geometry of this correspondence is reviewed for both the bulk and boundary. A Penrose transform allows us to describe free bulk fields, with or without mass, in terms of data on twistor space. Explicit representatives for the bulk-to-boundary propagators of scalars and spinors are constructed, along with twistor action functionals for the free theories. Evaluating these twistor actions on bulk-to-boundary propagators is shown to produce the correct two-point functions.
NASA Astrophysics Data System (ADS)
Bena, Iosif; Heurtier, Lucien; Puhm, Andrea
2016-05-01
It was argued in [1] that the five-dimensional near-horizon extremal Kerr (NHEK) geometry can be embedded in String Theory as the infrared region of an infinite family of non-supersymmetric geometries that have D1, D5, momentum and KK monopole charges. We show that there exists a method to embed these geometries into asymptotically AdS_3 × S^3/Z_N solutions, and hence to obtain infinite families of flows whose infrared is NHEK. This indicates that the CFT dual to the NHEK geometry is the IR fixed point of a Renormalization Group flow from a known local UV CFT and opens the door to its explicit construction.
Some characterizations of unique extremality
NASA Astrophysics Data System (ADS)
Yao, Guowu
2008-07-01
In this paper, it is shown that some necessary characteristic conditions for unique extremality obtained by Zhu and Chen are also sufficient and some sufficient ones by them actually imply that the uniquely extremal Beltrami differentials have a constant modulus. In addition, some local properties of uniquely extremal Beltrami differentials are given.
Shadows, currents, and AdS fields
Metsaev, R. R.
2008-11-15
Conformal totally symmetric arbitrary spin currents and shadow fields in flat space-time of dimension greater than or equal to four are studied. A gauge invariant formulation for such currents and shadow fields is developed. Gauge symmetries are realized by involving the Stueckelberg fields. A realization of global conformal boost symmetries is obtained. Gauge invariant differential constraints for currents and shadow fields are obtained. AdS/CFT correspondence for currents and shadow fields and the respective normalizable and non-normalizable solutions of massless totally symmetric arbitrary spin AdS fields are studied. The bulk fields are considered in a modified de Donder gauge that leads to decoupled equations of motion. We demonstrate that leftover on shell gauge symmetries of bulk fields correspond to gauge symmetries of boundary currents and shadow fields, while the modified de Donder gauge conditions for bulk fields correspond to differential constraints for boundary conformal currents and shadow fields. Breaking conformal symmetries, we find interrelations between the gauge invariant formulation of the currents and shadow fields, and the gauge invariant formulation of massive fields.
Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.
Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S
2016-04-01
Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate if heteroskedasticity had a significant quantitative impact-naked-eye, distance-based detection using paper-based analytical devices (PADs), cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices, and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regressions. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity. PMID:26995641
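The recommended remedy above, weighted regression under a power model of variance, can be sketched as follows; the noise level and variance exponent are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated heteroskedastic calibration: noise sd grows with the signal,
# i.e. a power model of variance with exponent p = 1 (sd proportional to y)
conc = np.linspace(1, 100, 40)                       # analyte concentration
true_signal = 5.0 + 2.0 * conc
signal = true_signal + rng.normal(0, 0.05 * true_signal)

# Weights w_i = 1 / var_i (up to a constant); here the assumed power model
weights = 1.0 / true_signal**2

# Weighted least squares: scale rows of the design and response by sqrt(w)
X = np.column_stack([np.ones_like(conc), conc])
w_sqrt = np.sqrt(weights)
beta, *_ = np.linalg.lstsq(X * w_sqrt[:, None], signal * w_sqrt, rcond=None)
intercept, slope = beta
```

In a real calibration the variance function would be estimated from replicate measurements (and `true_signal` replaced by fitted values, iterating if needed); the point of the Monte Carlo study summarized above is that even a roughly right variance model beats unweighted regression.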
Cyclostationary analysis with logarithmic variance stabilisation
NASA Astrophysics Data System (ADS)
Borghesani, Pietro; Shahriar, Md Rifat
2016-03-01
Second order cyclostationary (CS2) components in vibration or acoustic emission signals are typical symptoms of a wide variety of faults in rotating and alternating mechanical systems. The squared envelope spectrum (SES), obtained via the Hilbert transform of the original signal, underlies the most common indicators used for detection of CS2 components. It has been shown that the SES is equivalent to an autocorrelation of the signal's discrete Fourier transform, and that CS2 components cause high correlations in the frequency domain of the signal, thus resulting in peaks in the SES. Statistical tests have been proposed to determine whether peaks in the SES are likely to belong to the normal variability of the signal or whether they are proper symptoms of CS2 components. Despite the need for automated fault recognition and the theoretical soundness of these tests, this approach to machine diagnostics has been mostly neglected in industrial applications. In fact, in a series of experimental applications, even with proper pre-whitening steps, healthy machines have been found to produce high spectral correlations, resulting in a highly biased SES distribution that can cause a series of false positives. In this paper a new envelope spectrum is defined, with the theoretical intent of rendering the hypothesis test variance-free. The newly proposed indicator is shown to be unbiased in the case of multiple CS2 sources of spectral correlation, thus reducing the risk of false alarms.
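As a rough illustration of the SES computation described above (a generic sketch, not the paper's logarithmic variance-stabilised variant), the analytic signal is obtained via the Hilbert transform, its squared magnitude forms the squared envelope, and the spectrum of that envelope peaks at the modulation frequency; the 5 Hz amplitude modulation below is a hypothetical stand-in for a CS2 fault symptom.

```python
import numpy as np
from scipy.signal import hilbert

def squared_envelope_spectrum(x):
    """SES: magnitude spectrum of the squared magnitude of the analytic signal."""
    analytic = hilbert(x)
    squared_envelope = np.abs(analytic) ** 2
    # Remove the mean so the DC bin does not mask modulation peaks
    return np.abs(np.fft.rfft(squared_envelope - squared_envelope.mean()))

# Hypothetical signal: 120 Hz carrier, amplitude-modulated at 5 Hz (CS2-like)
fs, n = 1000, 4000
t = np.arange(n) / fs
x = (1 + 0.5 * np.cos(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 120 * t)
ses = squared_envelope_spectrum(x)
freqs = np.fft.rfftfreq(n, 1 / fs)
peak_hz = freqs[np.argmax(ses)]  # expected near the 5 Hz modulation rate
```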
Automatic variance analysis of multistage care pathways.
Li, Xiang; Liu, Haifeng; Zhang, Shilei; Mei, Jing; Xie, Guotong; Yu, Yiqin; Li, Jing; Lakshmanan, Geetika T
2014-01-01
A care pathway (CP) is a standardized process that consists of multiple care stages, clinical activities and their relations, aimed at ensuring and enhancing the quality of care. However, actual care may deviate from the planned CP, and analysis of these deviations can help clinicians refine the CP and reduce medical errors. In this paper, we propose a CP variance analysis method to automatically identify the deviations between actual patient traces in electronic medical records (EMR) and a multistage CP. As the care stage information is usually unavailable in EMR, we first align every trace with the CP using a hidden Markov model. From the aligned traces, we report three types of deviations for every care stage: additional activities, absent activities and violated constraints, which are identified by using the techniques of temporal logic and binomial tests. The method has been applied to a CP for the management of congestive heart failure and to real-world EMR data, providing meaningful evidence for the further improvement of care quality. PMID:25160280
Correcting an analysis of variance for clustering.
Hedges, Larry V; Rhoads, Christopher H
2011-02-01
A great deal of educational and social data arises from cluster sampling designs where clusters involve schools, classrooms, or communities. A mistake that is sometimes encountered in the analysis of such data is to ignore the effect of clustering and analyse the data as if it were based on a simple random sample. This typically leads to an overstatement of the precision of results and too liberal conclusions about the statistical significance of mean differences. This paper gives simple corrections to the test statistics that would be computed in an analysis of variance if clustering were (incorrectly) ignored. The corrections are multiplicative factors depending on the total sample size, the cluster size, and the intraclass correlation structure. For example, the corrected F statistic has Fisher's F distribution with reduced degrees of freedom. The corrected statistic reduces to the F statistic computed by ignoring clustering when the intraclass correlations are zero. It reduces to the F statistic computed using cluster means when the intraclass correlations are unity, and it is in between otherwise. A similar adjustment to the usual statistic for testing a linear contrast among group means is described.
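In the common special case of equal cluster sizes and a single intraclass correlation, the multiplicative correction is the familiar design effect 1 + (m - 1)·ρ. The sketch below shows this special case only, not the paper's general multi-factor corrections, but it reproduces the limiting behavior the abstract describes: ρ = 0 leaves the naive statistic unchanged, while ρ = 1 deflates it to the cluster-means analysis.

```python
def design_effect(m, icc):
    """Variance inflation for equal cluster size m and intraclass correlation
    icc, under a compound-symmetry (single-ICC) model."""
    return 1.0 + (m - 1) * icc

def corrected_t_squared(t_naive_sq, m, icc):
    """Deflate a squared t statistic computed ignoring clustering by the
    design effect. icc=0: unchanged; icc=1: equivalent to using cluster means."""
    return t_naive_sq / design_effect(m, icc)
```

For example, a naive t² of 8.0 with clusters of size 5 and ρ = 0.1 shrinks by a factor of 1.4, which is often enough to change a significance conclusion.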
Event Segmentation Ability Uniquely Predicts Event Memory
Sargent, Jesse Q.; Zacks, Jeffrey M.; Hambrick, David Z.; Zacks, Rose T.; Kurby, Christopher A.; Bailey, Heather R.; Eisenberg, Michelle L.; Beck, Taylor M.
2013-01-01
Memory for everyday events plays a central role in tasks of daily living, autobiographical memory, and planning. Event memory depends in part on segmenting ongoing activity into meaningful units. This study examined the relationship between event segmentation and memory in a lifespan sample to answer the following question: Is the ability to segment activity into meaningful events a unique predictor of subsequent memory, or is the relationship between event perception and memory accounted for by general cognitive abilities? Two hundred and eight adults ranging from 20 to 79 years old segmented movies of everyday events and attempted to remember the events afterwards. They also completed psychometric ability tests and tests measuring script knowledge for everyday events. Event segmentation and script knowledge both explained unique variance in event memory above and beyond the psychometric measures, and did so as strongly in older as in younger adults. These results suggest that event segmentation is a basic cognitive mechanism, important for memory across the lifespan. PMID:23942350
Secure ADS-B authentication system and method
NASA Technical Reports Server (NTRS)
Viggiano, Marc J (Inventor); Valovage, Edward M (Inventor); Samuelson, Kenneth B (Inventor); Hall, Dana L (Inventor)
2010-01-01
A secure system for authenticating the identity of ADS-B systems, including: an authenticator, including a unique id generator and a transmitter transmitting the unique id to one or more ADS-B transmitters; one or more ADS-B transmitters, including a receiver receiving the unique id, one or more secure processing stages merging the unique id with the ADS-B transmitter's identification, data and secret key and generating a secure code identification and a transmitter transmitting a response containing the secure code and ADS-B transmitter's data to the authenticator; the authenticator including means for independently determining each ADS-B transmitter's secret key, a receiver receiving each ADS-B transmitter's response, one or more secure processing stages merging the unique id, ADS-B transmitter's identification and data and generating a secure code, and comparison processing comparing the authenticator-generated secure code and the ADS-B transmitter-generated secure code and providing an authentication signal based on the comparison result.
Functional analysis of variance for association studies.
Vsevolozhskaya, Olga A; Zaykin, Dmitri V; Greenwood, Mark C; Wei, Changshuai; Lu, Qing
2014-01-01
While progress has been made in identifying common genetic variants associated with human diseases, for most common complex diseases the identified genetic variants account for only a small proportion of heritability. Challenges remain in finding additional unknown genetic variants predisposing to complex diseases. With the advance of next-generation sequencing technologies, sequencing studies have become commonplace in genetic research. The ongoing exome-sequencing and whole-genome-sequencing studies generate a massive amount of sequencing variants and allow researchers to comprehensively investigate their role in human diseases. The discovery of new disease-associated variants can be enhanced by utilizing powerful and computationally efficient statistical methods. In this paper, we propose a functional analysis of variance (FANOVA) method for testing an association of sequence variants in a genomic region with a qualitative trait. The FANOVA has a number of advantages: (1) it tests for a joint effect of gene variants, including both common and rare; (2) it fully utilizes linkage disequilibrium and genetic position information; and (3) it allows for either protective or risk-increasing causal variants. Through simulations, we show that FANOVA outperforms two popular methods, SKAT and a previously proposed method based on functional linear models (FLM), especially if the sample size of a study is small and/or sequence variants have low to moderate effects. We conduct an empirical study by applying the three methods (FANOVA, SKAT and FLM) to sequencing data from the Dallas Heart Study. While SKAT and FLM detected ANGPTL4 and ANGPTL3, respectively, as associated with obesity, FANOVA was able to identify both genes as associated with obesity. PMID:25244256
Uniqueness Theorem for Black Objects
Rogatko, Marek
2010-06-23
We review the current status of uniqueness theorems for black objects in higher dimensional spacetimes. We first consider a static charged asymptotically flat spacelike hypersurface with compact interior, with both degenerate and non-degenerate components of the event horizon, in n-dimensional spacetime. We then give some remarks concerning partial results in proving uniqueness of stationary axisymmetric multidimensional solutions, and on winding numbers which can uniquely characterize the topology and symmetry structure of black objects.
Estimating the encounter rate variance in distance sampling
Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.
2009-01-01
The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
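Estimators of this kind measure between-transect variation in the per-length encounter rate n_k/l_k. The sketch below is one length-weighted random-line form of this general type; the specific estimators compared in the paper differ in their weightings and poststratification, so treat this as illustrative rather than as the paper's estimator.

```python
def encounter_rate_var(counts, lengths):
    """Length-weighted between-transect estimate of var(n), where n is the
    total count over K lines with counts[k] detections on a line of
    length lengths[k]. Illustrative random-line form."""
    K = len(counts)
    L = sum(lengths)          # total transect length
    n = sum(counts)           # total detections
    rate = n / L              # overall encounter rate
    s = sum(l * (c / l - rate) ** 2 for c, l in zip(counts, lengths))
    return L * s / (K - 1)
```

With equal line lengths this reduces to K times the sample variance of the per-line counts, i.e., treating the K lines as a simple random sample; if counts are exactly proportional to line lengths the estimate is zero.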
NASA Technical Reports Server (NTRS)
Clauson, J.; Heuser, J.
1981-01-01
The Applications Data Service (ADS) is a system based on an electronic data communications network which will permit scientists to share the data stored in data bases at universities and at government and private installations. It is designed to allow users to readily locate and access high quality, timely data from multiple sources. The ADS Pilot program objectives and the current plans for accomplishing those objectives are described.
AdS and Lifshitz scalar hairy black holes in Gauss-Bonnet gravity
NASA Astrophysics Data System (ADS)
Chen, Bin; Fan, Zhong-Ying; Zhu, Lu-Yao
2016-09-01
We consider Gauss-Bonnet (GB) gravity in general dimensions, nonminimally coupled to a scalar field. By choosing a scalar potential of the type V(ϕ) = 2Λ_0 + (1/2)m²ϕ² + γ_4ϕ⁴, we first obtain large classes of scalar hairy black holes with spherical/hyperbolic/planar topologies that are asymptotic to locally anti-de Sitter (AdS) space-times. We derive the first law of black hole thermodynamics using the Wald formalism. In particular, for one class of the solutions, the scalar hair forms a thermodynamic conjugate with the graviton and contributes nontrivially to the thermodynamical first law. We observe that, except for one class of the planar black holes, all these solutions are constructed at the critical point of GB gravity where there exists a unique AdS vacuum. In fact, a Lifshitz vacuum is also allowed at the critical point. We then construct many new classes of neutral and charged Lifshitz black hole solutions for either a minimally or nonminimally coupled scalar and derive the thermodynamical first laws. We also obtain new classes of exact dynamical AdS and Lifshitz solutions which describe radiating white holes. The solutions eventually become AdS or Lifshitz vacua at late retarded times. However, for one class of the solutions, the final state is an AdS space-time with a globally naked singularity.
Variance analysis. Part II, The use of computers.
Finkler, S A
1991-09-01
This is the second in a two-part series on variance analysis. In the first article (JONA, July/August 1991), the author discussed flexible budgeting, including the calculation of price, quantity, volume, and acuity variances. In this second article, the author focuses on the use of computers by nurse managers to aid in the process of calculating, understanding, and justifying variances. PMID:1919788
Multiperiod Mean-Variance Portfolio Optimization via Market Cloning
Ankirchner, Stefan; Dermoune, Azzouz
2011-08-15
The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. To find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones tend to infinity, we are able to solve the original mean-variance problem.
Network Structure and Biased Variance Estimation in Respondent Driven Sampling
Verdery, Ashton M.; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J.
2015-01-01
This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...
Estimation of Variance Components of Quantitative Traits in Inbred Populations
Abney, Mark; McPeek, Mary Sara; Ober, Carole
2000-01-01
Use of variance-component estimation for mapping of quantitative-trait loci in humans is a subject of great current interest. When only trait values, not genotypic information, are considered, variance-component estimation can also be used to estimate heritability of a quantitative trait. Inbred pedigrees present special challenges for variance-component estimation. First, there are more variance components to be estimated in the inbred case, even for a relatively simple model including additive, dominance, and environmental effects. Second, more identity coefficients need to be calculated from an inbred pedigree in order to perform the estimation, and these are computationally more difficult to obtain in the inbred than in the outbred case. As a result, inbreeding effects have generally been ignored in practice. We describe here the calculation of identity coefficients and estimation of variance components of quantitative traits in large inbred pedigrees, using the example of HDL in the Hutterites. We use a multivariate normal model for the genetic effects, extending the central-limit theorem of Lange to allow for both inbreeding and dominance under the assumptions of our variance-component model. We use simulated examples to give an indication of under what conditions one has the power to detect the additional variance components and to examine their impact on variance-component estimation. We discuss the implications for mapping and heritability estimation by use of variance components in inbred populations. PMID:10677322
The phenotypic variance gradient – a novel concept
Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton
2014-01-01
Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely “a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added”. This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a “phenotypic variance gradient”, are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization. PMID:25540685
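The proposed plot of log(variance) versus log(mean) is straightforward to compute. The sketch below, with hypothetical trait values per environment, returns the points and the least-squares scaling slope; a natural reference line is slope 2 (standard deviation proportional to the mean, i.e., constant coefficient of variation), though the authors' specific reference line may differ.

```python
import math

def log_mean_log_var(samples_by_env):
    """For each environment, return (log10 mean, log10 variance) of trait values."""
    points = []
    for vals in samples_by_env:
        n = len(vals)
        m = sum(vals) / n
        v = sum((x - m) ** 2 for x in vals) / (n - 1)  # sample variance
        points.append((math.log10(m), math.log10(v)))
    return points

def scaling_slope(points):
    """Least-squares slope of log(variance) on log(mean): how phenotypic
    variance scales with the mean across the environmental gradient."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den
```

A slope above 2 suggests variance growing faster than the scaling effect alone would predict (e.g., decanalization at gradient extremes); a slope below 2 suggests the opposite.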
Exploring Unique Roles for Psychologists
ERIC Educational Resources Information Center
Ahmed, Mohiuddin; Boisvert, Charles M.
2005-01-01
This paper presents comments on "Psychological Treatments" by D. H. Barlow. Barlow highlighted unique roles that psychologists can play in mental health service delivery by providing psychological treatments--treatments that psychologists would be uniquely qualified to design and deliver. In support of Barlow's position, the authors draw from…
ERIC Educational Resources Information Center
Shipman, Barbara A.
2013-01-01
This article analyzes four questions on the meaning of uniqueness that have contrasting answers in common language versus mathematical language. The investigations stem from a scenario in which students interpreted uniqueness according to a definition from standard English, that is, different from the mathematical meaning, in defining an injective…
Marini, Federico; de Beer, Dalene; Joubert, Elizabeth; Walczak, Beata
2015-07-31
Direct application of popular approaches, e.g., Principal Component Analysis (PCA) or Partial Least Squares (PLS), to chromatographic data originating from a well-designed experimental study including more than one factor is not recommended. In the case of a well-designed experiment involving two or more factors (crossed or nested), the data are usually decomposed into the contributions associated with the studied factors (and with their interactions), and the individual effect matrices are then analyzed using, e.g., PCA, as in the case of ASCA (analysis of variance combined with simultaneous component analysis). As an alternative to the ASCA method, we propose the application of PLS followed by target projection (TP), which allows a one-factor representation of the model for each column in the design dummy matrix. PLS is applied after proper deflation of the experimental matrix, i.e., to what are called the residuals under the reduced ANOVA model. The proposed approach (ANOVA-TP) is well suited for the study of designed chromatographic data of complex samples. It allows testing the statistical significance of the studied effects and 'biomarker' identification, and it enables straightforward visualization and accurate estimation of between- and within-class variance. The proposed approach has been successfully applied to a case study aimed at evaluating the effect of pasteurization on the concentrations of various phenolic constituents of rooibos tea of different quality grades, and its outcomes have been compared to those of ASCA.
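The ANOVA decomposition step shared by ASCA and ANOVA-TP can be sketched for a balanced two-factor crossed design: the data matrix is split into a grand-mean matrix, one main-effect matrix per factor, and residuals, after which a component method (PCA per effect matrix, or PLS on the deflated residual matrix) is applied. This is a minimal sketch under a balanced, main-effects-only model, not the authors' implementation.

```python
import numpy as np

def anova_effect_matrices(X, factor_a, factor_b):
    """Decompose X (samples x variables) as X = M + A + B + E for two
    crossed factors: grand mean M, main-effect matrices A and B, residuals E."""
    X = np.asarray(X, dtype=float)
    M = np.tile(X.mean(axis=0), (X.shape[0], 1))   # grand-mean matrix
    Xc = X - M                                      # centered data
    A = np.zeros_like(Xc)
    for lev in set(factor_a):
        idx = [i for i, f in enumerate(factor_a) if f == lev]
        A[idx] = Xc[idx].mean(axis=0)               # level mean of factor A
    B = np.zeros_like(Xc)
    for lev in set(factor_b):
        idx = [i for i, f in enumerate(factor_b) if f == lev]
        B[idx] = Xc[idx].mean(axis=0)               # level mean of factor B
    E = Xc - A - B                                  # residuals (deflated matrix)
    return M, A, B, E
```

In ASCA each of A and B would be passed to PCA; in the ANOVA-TP variant described above, PLS followed by target projection is instead applied per design column after this deflation.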
ERIC Educational Resources Information Center
Castellanos-Ryan, Natalie; Conrod, Patricia J.
2011-01-01
Externalising behaviours such as substance misuse (SM) and conduct disorder (CD) symptoms highly co-occur in adolescence. While disinhibited personality traits have been consistently linked to externalising behaviours, there is evidence that these traits may relate differentially to SM and CD. The current study aimed to assess whether this was the…
29 CFR 1904.38 - Variances from the recordkeeping rule.
Code of Federal Regulations, 2010 CFR
2010-07-01
... process your variance petition. (i) The Assistant Secretary will offer your employees and their authorized... the facts or conduct that may warrant revocation of your variance; and (ii) Provide you, your employees, and authorized employee representatives with an opportunity to participate in the...
Characterizing the evolution of genetic variance using genetic covariance tensors.
Hine, Emma; Chenoweth, Stephen F; Rundle, Howard D; Blows, Mark W
2009-06-12
Determining how genetic variance changes under selection in natural populations has proved to be a very resilient problem in evolutionary genetics. In the same way that understanding the availability of genetic variance within populations requires the simultaneous consideration of genetic variance in sets of functionally related traits, determining how genetic variance changes under selection in natural populations will require ascertaining how genetic variance-covariance (G) matrices evolve. Here, we develop a geometric framework using higher-order tensors, which enables the empirical characterization of how G matrices have diverged among populations. We then show how divergence among populations in genetic covariance structure can be associated with divergence in selection acting on those traits, using key equations from evolutionary theory. Using estimates of G matrices of eight male sexually selected traits from nine geographical populations of Drosophila serrata, we show that much of the divergence in genetic variance occurred in a single trait combination, a conclusion that could not have been reached by examining variation among the individual elements of the nine G matrices. Divergence in G was primarily in the direction of the major axes of genetic variance within populations, suggesting that genetic drift may be a major cause of divergence in genetic variance among these populations.
An Analysis of Variance Framework for Matrix Sampling.
ERIC Educational Resources Information Center
Sirotnik, Kenneth
Significant cost savings can be achieved with the use of matrix sampling in estimating population parameters from psychometric data. The statistical design is intuitively simple, using the framework of the two-way classification analysis of variance technique. For example, the mean and variance are derived from the performance of a certain grade…
A Study of Variance Estimation Methods. Working Paper Series.
ERIC Educational Resources Information Center
Zhang, Fan; Weng, Stanley; Salvucci, Sameena; Hu, Ming-xiu
This working paper contains reports of five studies of variance estimation methods. The first, An Empirical Study of Poststratified Estimator, by Fan Zhang uses data from the National Household Education Survey to illustrate use of poststratified estimation. The second paper, BRR Variance Estimation Using BPLX Hadamard Procedure, by Stanley Weng…
Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances
ERIC Educational Resources Information Center
Jan, Show-Li; Shieh, Gwowen
2014-01-01
The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…
Conceptual Complexity and the Bias/Variance Tradeoff
ERIC Educational Resources Information Center
Briscoe, Erica; Feldman, Jacob
2011-01-01
In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the "bias/variance tradeoff". The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any…
29 CFR 1905.5 - Effect of variances.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 5 2010-07-01 2010-07-01 false Effect of variances. 1905.5 Section 1905.5 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RULES OF PRACTICE FOR VARIANCES, LIMITATIONS, VARIATIONS, TOLERANCES, AND EXEMPTIONS UNDER THE...
Code of Federal Regulations, 2010 CFR
2010-07-01
... same circumstances in which variances may be granted under sections 6(b)(6)(A) or 6(d) of the Williams... the Williams-Steiger Occupational Safety and Health Act of 1970, and any variance from a standard... the Williams-Steiger Occupational Safety and Health Act of 1970. In accordance with the...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 36 Parks, Forests, and Public Property 1 2012-07-01 2012-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 36 Parks, Forests, and Public Property 1 2013-07-01 2013-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 36 Parks, Forests, and Public Property 1 2014-07-01 2014-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
Evaluation of Mean and Variance Integrals without Integration
ERIC Educational Resources Information Center
Joarder, A. H.; Omar, M. H.
2007-01-01
The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since the usual derivations involve integration by parts, many students do not feel comfortable with them. In this note, a technique is demonstrated for deriving mean and variance through differential…
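In the same spirit of avoiding integration by parts, the moments of the exponential distribution can be read off by differentiating its closed-form moment generating function M(t) = λ/(λ − t). A minimal numerical stand-in for that differentiation technique (the rate value and step size are arbitrary example choices):

```python
import math

lam = 2.0                      # rate of the exponential distribution (example value)
M = lambda t: lam / (lam - t)  # closed-form MGF, valid for t < lam

# Central finite differences approximate the derivatives of M at t = 0.
h = 1e-5
mean = (M(h) - M(-h)) / (2 * h)                # M'(0)  = E[X]    = 1/lam
second = (M(h) - 2 * M(0) + M(-h)) / h ** 2    # M''(0) = E[X^2]  = 2/lam^2
variance = second - mean ** 2                  # Var[X] = 1/lam^2
```

Here differentiation (numerical, for brevity; symbolic in the note's setting) replaces the integration-by-parts computation of E[X] and E[X²] entirely.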
Productive Failure in Learning the Concept of Variance
ERIC Educational Resources Information Center
Kapur, Manu
2012-01-01
In a study with ninth-grade mathematics students on learning the concept of variance, students experienced either direct instruction (DI) or productive failure (PF), wherein they were first asked to generate a quantitative index for variance without any guidance before receiving DI on the concept. Whereas DI students relied only on the canonical…
A Computer Program to Determine Reliability Using Analysis of Variance
ERIC Educational Resources Information Center
Burns, Edward
1976-01-01
A computer program, written in Fortran IV, is described which assesses reliability by using analysis of variance. It produces a complete analysis of variance table in addition to reliability coefficients for unadjusted and adjusted data as well as the intraclass correlation for m subjects and n items. (Author)
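The Fortran program itself is not reproduced here, but the computation it describes, reliability from a two-way ANOVA of m subjects by n items (Hoyt's method, algebraically identical to Cronbach's alpha), is compact; a minimal sketch with made-up data:

```python
import numpy as np

def hoyt_reliability(scores):
    """Reliability from a two-way ANOVA of an (m subjects x n items) score matrix.

    Hoyt's method: reliability = (MS_subjects - MS_residual) / MS_subjects.
    """
    scores = np.asarray(scores, dtype=float)
    m, n = scores.shape
    grand = scores.mean()
    ss_subjects = n * ((scores.mean(axis=1) - grand) ** 2).sum()
    ss_items = m * ((scores.mean(axis=0) - grand) ** 2).sum()
    ss_residual = ((scores - grand) ** 2).sum() - ss_subjects - ss_items
    ms_subjects = ss_subjects / (m - 1)
    ms_residual = ss_residual / ((m - 1) * (n - 1))
    return (ms_subjects - ms_residual) / ms_subjects

# Toy data: 20 subjects x 5 items, with a common per-subject ability effect.
rng = np.random.default_rng(1)
data = rng.normal(size=(20, 5)) + rng.normal(size=(20, 1))
reliability = hoyt_reliability(data)
```

The same mean squares yield the full ANOVA table the abstract mentions; the ratio above is the intraclass-correlation-based reliability for the n-item composite.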
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 24 2013-07-01 2013-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....
Relating the Hadamard Variance to MCS Kalman Filter Clock Estimation
NASA Technical Reports Server (NTRS)
Hutsell, Steven T.
1996-01-01
The Global Positioning System (GPS) Master Control Station (MCS) currently makes significant use of the Allan Variance. This two-sample variance equation has proven excellent as a handy, understandable tool, both for time domain analysis of GPS cesium frequency standards, and for fine tuning the MCS's state estimation of these atomic clocks. The Allan Variance does not explicitly converge for the noise types of alpha less than or equal to minus 3 and can be greatly affected by frequency drift. Because GPS rubidium frequency standards exhibit non-trivial aging and aging noise characteristics, the basic Allan Variance analysis must be augmented in order to (a) compensate for a dynamic frequency drift, and (b) characterize two additional noise types, specifically alpha = minus 3, and alpha = minus 4. As the GPS program progresses, we will utilize a larger percentage of rubidium frequency standards than ever before. Hence, GPS rubidium clock characterization will require more attention than ever before. The three-sample variance, commonly referred to as a renormalized Hadamard Variance, is unaffected by linear frequency drift, converges for alpha greater than minus 5, and thus has utility for modeling noise in GPS rubidium frequency standards. This paper demonstrates the potential of Hadamard Variance analysis in GPS operations, and presents an equation that relates the Hadamard Variance to the MCS's Kalman filter process noises.
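The drift-immunity argument is easy to demonstrate. The sketch below (illustrative units, not MCS code) computes the two-sample (Allan) and three-sample (Hadamard) variances at the basic sampling interval for simulated fractional-frequency data; the second differences in the Hadamard form annihilate a linear drift that inflates the Allan form:

```python
import numpy as np

def allan_variance(y):
    """Two-sample (Allan) variance of fractional-frequency data at the basic interval."""
    return 0.5 * np.mean(np.diff(y) ** 2)

def hadamard_variance(y):
    """Three-sample (Hadamard) variance: second differences cancel linear drift."""
    return np.mean(np.diff(y, n=2) ** 2) / 6.0

rng = np.random.default_rng(42)
white = rng.normal(0.0, 1.0, 10000)   # white frequency noise
drift = 2.0 * np.arange(10000)        # linear frequency drift (aging), exaggerated

# Drift inflates the Allan variance but leaves the Hadamard variance unchanged.
allan_ratio = allan_variance(white + drift) / allan_variance(white)
hadamard_ratio = hadamard_variance(white + drift) / hadamard_variance(white)
```

The normalizations (1/2 and 1/6) make both estimators unbiased for white frequency noise, which is why the ratio of the Hadamard values stays at one while the Allan ratio grows with the drift rate.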
A Variance Explanation Paradox: When a Little Is a Lot.
ERIC Educational Resources Information Center
Abelson, Robert P.
1985-01-01
Argues that percent variance explanation is a misleading index of the influence of systematic factors in cases where there are processes by which individually tiny influences cumulate to produce meaningful outcomes. An example is the computation of percentage of variance in batting performance among major league baseball players. (Author/CB)
On the Endogeneity of the Mean-Variance Efficient Frontier.
ERIC Educational Resources Information Center
Somerville, R. A.; O'Connell, Paul G. J.
2002-01-01
Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…
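The frontier's dependence on the estimated inputs, the endogeneity at issue, can be seen computationally. A minimal sketch with hypothetical three-asset inputs, using the standard closed form for the unconstrained minimum-variance frontier (A = 1'Σ⁻¹μ, B = μ'Σ⁻¹μ, C = 1'Σ⁻¹1, D = BC − A²):

```python
import numpy as np

def frontier_variance(mu, Sigma, mu_p):
    """Minimum portfolio variance at target mean mu_p (no short-sale constraints)."""
    inv = np.linalg.inv(Sigma)
    ones = np.ones_like(mu)
    A = ones @ inv @ mu
    B = mu @ inv @ mu
    C = ones @ inv @ ones
    D = B * C - A ** 2
    return (C * mu_p ** 2 - 2 * A * mu_p + B) / D

mu = np.array([0.05, 0.08, 0.12])        # hypothetical expected returns
Sigma = np.array([[0.04, 0.01, 0.00],    # hypothetical covariance matrix
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])

base = frontier_variance(mu, Sigma, 0.09)

# Endogeneity in miniature: raising one asset's expected return by a single
# percentage point moves the entire frontier.
mu_shift = mu + np.array([0.0, 0.01, 0.0])
shifted = frontier_variance(mu_shift, Sigma, 0.09)
```

Because μ and Σ enter every frontier point, any change to a single parameter shifts the whole curve (and with it the betas of the individual assets), which is the point the abstract says textbooks tend to obscure.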
Assured Information Sharing for Ad-Hoc Collaboration
ERIC Educational Resources Information Center
Jin, Jing
2009-01-01
Collaborative information sharing tends to be highly dynamic and often ad hoc among organizations. The dynamic nature and sharing patterns of ad-hoc collaboration impose a need for a comprehensive and flexible approach to reflecting and coping with the unique access control requirements associated with the environment. This dissertation…
Utility functions predict variance and skewness risk preferences in monkeys.
Genest, Wilfried; Stauffer, William R; Schultz, Wolfram
2016-07-26
Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals' preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals' preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys' choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences. PMID:27402743
Variance After-Effects Distort Risk Perception in Humans.
Payzan-LeNestour, Elise; Balleine, Bernard W; Berrada, Tony; Pearson, Joel
2016-06-01
In many contexts, decision-making requires an accurate representation of outcome variance, otherwise known as "risk" in economics. Conventional economic theory assumes this representation to be perfect, thereby focusing on risk preferences rather than risk perception per se [1-3] (but see [4]). However, humans often misrepresent their physical environment. Perhaps the most striking of such misrepresentations are the many well-known sensory after-effects, which most commonly involve visual properties, such as color, contrast, size, and motion. For example, viewing downward motion of a waterfall induces the anomalous biased experience of upward motion during subsequent viewing of static rocks to the side [5]. Given that after-effects are pervasive, occurring across a wide range of time horizons [6] and stimulus dimensions (including properties such as face perception [7, 8], gender [9], and numerosity [10]), and that some evidence exists that neurons show adaptation to variance in the sole visual feature of motion [11], we were interested in assessing whether after-effects distort variance perception in humans. We found that perceived variance is decreased after prolonged exposure to high variance and increased after exposure to low variance within a number of different visual representations of variance. We demonstrate these after-effects occur across very different visual representations of variance, suggesting that these effects are not sensory, but operate at a high (cognitive) level of information processing. These results suggest, therefore, that variance constitutes an independent cognitive property and that prolonged exposure to extreme variance distorts risk perception, a fundamental challenge for economic theory and practice. PMID:27161500
Uniqueness of the momentum map
NASA Astrophysics Data System (ADS)
Esposito, Chiara; Nest, Ryszard
2016-08-01
We give a detailed discussion of existence and uniqueness of the momentum map associated to Poisson Lie actions, which was defined by Lu. We introduce a weaker notion of momentum map, called the infinitesimal momentum map, which is defined on one-forms, and we analyze its integrability to Lu's momentum map. Finally, the uniqueness of Lu's momentum map is studied by describing, explicitly, the tangent space to the space of momentum maps.
ERIC Educational Resources Information Center
Richards, Andrew
2015-01-01
Two quantitative measures of school performance are currently used, the average points score (APS) at Key Stage 2 and value-added (VA), which measures the rate of academic improvement between Key Stage 1 and 2. These figures are used by parents and the Office for Standards in Education to make judgements and comparisons. However, simple…
NASA Astrophysics Data System (ADS)
Accomazzi, Alberto; Henneken, E.; Grant, C. S.; Kurtz, M. J.; Di Milia, G.; Luker, J.; Thompson, D. M.; Bohlen, E.; Murray, S. S.
2011-05-01
ADS Labs is a platform that ADS is introducing in order to test and receive feedback from the community on new technologies and prototype services. Currently, ADS Labs features a new interface for abstract searches, faceted filtering of results, visualization of co-authorship networks, article-level recommendations, and a full-text search service. The streamlined abstract search interface provides a simple, one-box search with options for ranking results based on paper relevancy, freshness, number of citations, and downloads. In addition, it provides advanced rankings based on collaborative filtering techniques. The faceted filtering interface allows users to narrow search results based on a particular property or set of properties ("facets"), allowing users to manage large lists and explore the relationship between them. For any set or sub-set of records, the co-authorship network can be visualized in an interactive way, offering a view of the distribution of contributors and their inter-relationships. This provides an immediate way to detect groups and collaborations involved in a particular research field. For a majority of papers in Astronomy, our new interface will provide a list of related articles of potential interest. The recommendations are based on a number of factors, including text similarity, citations, and co-readership information. The new full-text search interface allows users to find all instances of particular words or phrases in the body of the articles in our full-text archive. This includes all of the scanned literature in ADS as well as a select portion of the current astronomical literature, including ApJ, ApJS, AJ, MNRAS, PASP, A&A, and soon additional content from Springer journals. Full-text search results include a list of the matching papers as well as a list of "snippets" of text highlighting the context in which the search terms were found. ADS Labs is available at http://adslabs.org
Code of Federal Regulations, 2011 CFR
2011-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260.33 Section 260.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES... from classification as a solid waste, for variances to be classified as a boiler, or for...
Code of Federal Regulations, 2010 CFR
2010-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260.33 Section 260.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES... from classification as a solid waste, for variances to be classified as a boiler, or for...
Code of Federal Regulations, 2014 CFR
2014-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...
Code of Federal Regulations, 2012 CFR
2012-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...
Code of Federal Regulations, 2013 CFR
2013-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...
The Placenta Harbors a Unique Microbiome
Aagaard, Kjersti; Ma, Jun; Antony, Kathleen M.; Ganu, Radhika; Petrosino, Joseph; Versalovic, James
2016-01-01
Humans and their microbiomes have coevolved as a physiologic community composed of distinct body site niches with metabolic and antigenic diversity. The placental microbiome has not been robustly interrogated, despite recent demonstrations of intracellular bacteria with diverse metabolic and immune regulatory functions. A population-based cohort of placental specimens collected under sterile conditions from 320 subjects with extensive clinical data was established for comparative 16S ribosomal DNA–based and whole-genome shotgun (WGS) metagenomic studies. Identified taxa and their gene carriage patterns were compared to other human body site niches, including the oral, skin, airway (nasal), vaginal, and gut microbiomes from nonpregnant controls. We characterized a unique placental microbiome niche, composed of nonpathogenic commensal microbiota from the Firmicutes, Tenericutes, Proteobacteria, Bacteroidetes, and Fusobacteria phyla. In aggregate, the placental microbiome profiles were most akin (Bray-Curtis dissimilarity <0.3) to the human oral microbiome. 16S-based operational taxonomic unit analyses revealed associations of the placental microbiome with a remote history of antenatal infection (permutational multivariate analysis of variance, P = 0.006), such as urinary tract infection in the first trimester, as well as with preterm birth <37 weeks (P = 0.001). PMID:24848255
Wavelet variance analysis for random fields on a regular lattice.
Mondal, Debashis; Percival, Donald B
2012-02-01
There has been considerable recent interest in using wavelets to analyze time series and images that can be regarded as realizations of certain 1-D and 2-D stochastic processes on a regular lattice. Wavelets give rise to the concept of the wavelet variance (or wavelet power spectrum), which decomposes the variance of a stochastic process on a scale-by-scale basis. The wavelet variance has been applied to a variety of time series, and a statistical theory for estimators of this variance has been developed. While there have been applications of the wavelet variance in the 2-D context (in particular, in works by Unser in 1995 on wavelet-based texture analysis for images and by Lark and Webster in 2004 on analysis of soil properties), a formal statistical theory for such analysis has been lacking. In this paper, we develop the statistical theory by generalizing and extending some of the approaches developed for time series, thus leading to a large-sample theory for estimators of 2-D wavelet variances. We apply our theory to simulated data from Gaussian random fields with exponential covariances and from fractional Brownian surfaces. We demonstrate that the wavelet variance is potentially useful for texture discrimination. We also use our methodology to analyze images of four types of clouds observed over the southeast Pacific Ocean.
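A formal lattice estimator with the paper's large-sample theory is beyond a few lines, but the core idea, decomposing variance scale by scale, can be sketched in 1-D with Haar-style block differences (an illustrative simplification, not the authors' estimator):

```python
import numpy as np

def haar_wavelet_variance(x, scale):
    """Haar-style wavelet variance at a given scale (non-overlapping sketch).

    Averages x over adjacent blocks of length `scale`, then takes half the
    mean squared difference of paired neighbouring block averages.
    """
    n = (len(x) // (2 * scale)) * 2 * scale
    means = x[:n].reshape(-1, scale).mean(axis=1)
    d = means[1::2] - means[0::2]      # difference of paired block averages
    return 0.5 * np.mean(d ** 2)

rng = np.random.default_rng(7)
white = rng.normal(0.0, 1.0, 2 ** 14)

# For unit-variance white noise this statistic falls off as 1/scale,
# so the scale profile acts as a signature of the underlying process.
v1 = haar_wavelet_variance(white, 1)
v4 = haar_wavelet_variance(white, 4)
```

Processes with long-range dependence (e.g., fractional Brownian surfaces in the paper) produce a different decay across scales, which is what makes the wavelet variance useful for texture discrimination.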
Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation
NASA Technical Reports Server (NTRS)
Wu, Dong L.; Eckermann, Stephen D.
2008-01-01
The gravity wave (GW)-resolving capabilities of 118-GHz saturated thermal radiances acquired throughout the stratosphere by the Microwave Limb Sounder (MLS) on the Aura satellite are investigated and initial results presented. Because the saturated (optically thick) radiances resolve GW perturbations from a given altitude at different horizontal locations, variances are evaluated at 12 pressure altitudes between 21 and 51 km using the 40 saturated radiances found at the bottom of each limb scan. Forward modeling simulations show that these variances are controlled mostly by GWs with vertical wavelengths λz ≳ 5 km and horizontal along-track wavelengths of λy ~ 100-200 km. The tilted cigar-shaped three-dimensional weighting functions yield highly selective responses to GWs of high intrinsic frequency that propagate toward the instrument. The latter property is used to infer the net meridional component of GW propagation by differencing the variances acquired from ascending (A) and descending (D) orbits. Because of improved vertical resolution and sensitivity, Aura MLS GW variances are 5-8 times larger than those from the Upper Atmosphere Research Satellite (UARS) MLS. Like UARS MLS variances, monthly-mean Aura MLS variances in January and July 2005 are enhanced when local background wind speeds are large, due largely to GW visibility effects. Zonal asymmetries in variance maps reveal enhanced GW activity at high latitudes due to forcing by flow over major mountain ranges and at tropical and subtropical latitudes due to enhanced deep convective generation as inferred from contemporaneous MLS cloud-ice data. At 21-28-km altitude (heights not measured by the UARS MLS), GW variance in the tropics is systematically enhanced and shows clear variations with the phase of the quasi-biennial oscillation, in general agreement with GW temperature variances derived from radiosonde, rocketsonde, and limb-scan vertical profiles.
Comparison of multiplicative heterogeneous variance adjustment models for genetic evaluations.
Márkus, Sz; Mäntysaari, E A; Strandén, I; Eriksson, J-Å; Lidauer, M H
2014-06-01
Two heterogeneous variance adjustment methods and two variance models were compared in a simulation study. The method used for heterogeneous variance adjustment in the Nordic test-day model, which is a multiplicative method based on Meuwissen (J. Dairy Sci., 79, 1996, 310), was compared with a restricted multiplicative method where the fixed effects were not scaled. Both methods were tested with two different variance models, one with a herd-year and the other with a herd-year-month random effect. The simulation study was built on two field data sets from Swedish Red dairy cattle herds. For both data sets, 200 herds with test-day observations over a 12-year period were sampled. For one data set, herds were sampled randomly, while for the other, each herd was required to have at least 10 first-calving cows per year. The simulations supported the applicability of both methods and models, but the multiplicative mixed model was more sensitive in the case of small strata sizes. Estimation of variance components for the variance models resulted in different parameter estimates, depending on the applied heterogeneous variance adjustment method and variance model combination. Our analyses showed that the assumption of a first-order autoregressive correlation structure between random-effect levels is reasonable when within-herd heterogeneity is modelled by year classes, but less appropriate for within-herd heterogeneity by month classes. Of the studied alternatives, the multiplicative method and a variance model with a random herd-year effect were found most suitable for the Nordic test-day model for dairy cattle evaluation.
Mesoscale Gravity Wave Variances from AMSU-A Radiances
NASA Technical Reports Server (NTRS)
Wu, Dong L.
2004-01-01
A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with the minimum detectable value as small as 0.1 K². Preliminary analyses with AMSU-A data show GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.
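The noise-removal step is, at its core, subtraction of the known instrument noise variance from the measured radiance variance; a toy illustration with synthetic numbers (the 0.4 K wave amplitude and 0.5 K noise level are invented for the example, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

n = 4000
signal = 0.4 * np.sin(np.linspace(0.0, 60.0 * np.pi, n))  # "wave", variance ~0.08 K^2
noise_sigma = 0.5                                         # known instrument noise (K)
radiance = signal + rng.normal(0.0, noise_sigma, n)

# The measured variance is signal variance plus noise variance; subtracting
# the calibrated noise term recovers a geophysical variance well below the
# raw noise floor of ~0.25 K^2.
gw_variance = radiance.var() - noise_sigma ** 2
```

Detecting variances an order of magnitude below the instrument noise, as in the 0.1 K² figure above, depends on the noise variance being characterized accurately enough for this subtraction to be trusted.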
Non-Uniqueness of Atmospheric Modeling
NASA Astrophysics Data System (ADS)
Judge, Philip G.; McIntosh, Scott W.
1999-12-01
We focus on the deceptively simple question: how can we use the emitted photons to extract meaningful information on the transition region and corona? Using examples, we conclude that the only safe way to proceed is through forward models. In this way, inherent non-uniqueness is handled by adding information through explicit physical assumptions and restrictions made in the modeling procedure. The alternative "inverse" approaches, including (as a restricted subset) many standard "spectral diagnostic techniques", rely on more subjective choices that have, as yet, no clear theoretical support. Emphasis is on the solar transition region, though the corona is necessarily discussed, with implications for more general problems concerning the use of photons to diagnose plasma conditions.
Vance Tartar: a unique biologist.
Frankel, J; Whiteley, A H
1993-01-01
Vance Tartar (1911-1991) has made major discoveries concerning morphogenesis, patterning, and nucleocytoplasmic relations in the giant ciliate Stentor coeruleus, mostly by means of hand-grafting using glass microneedles. This article provides a chronological account of the major events of Vance Tartar's life, a brief description of some of his major scientific achievements, and a discussion of his distinctive personality and multifaceted interests. It concludes with a consideration of how his unique style of life and work contributed to his equally unique scientific contributions. PMID:8457795
The liberal illusion of uniqueness.
Stern, Chadly; West, Tessa V; Schmitt, Peter G
2014-01-01
In two studies, we demonstrated that liberals underestimate their similarity to other liberals (i.e., display truly false uniqueness), whereas moderates and conservatives overestimate their similarity to other moderates and conservatives (i.e., display truly false consensus; Studies 1 and 2). We further demonstrated that a fundamental difference between liberals and conservatives in the motivation to feel unique explains this ideological distinction in the accuracy of estimating similarity (Study 2). Implications of the accuracy of consensus estimates for mobilizing liberal and conservative political movements are discussed. PMID:24247730
NASA Astrophysics Data System (ADS)
Goodman, Alyssa
We will create the first interactive sky map of astronomers' understanding of the Universe over time. We will accomplish this goal by turning the NASA Astrophysics Data System (ADS), widely known for its unrivaled value as a literature resource, into a data resource. GIS and GPS systems have made it commonplace to see and explore information about goings-on on Earth in the context of maps and timelines. Our proposal shows an example of a program that lets a user explore which countries have been mentioned in the New York Times, on what dates, and in what kinds of articles. By analogy, the goal of our project is to enable this kind of exploration, on the sky, for the full corpus of astrophysical literature available through ADS. Our group's expertise and collaborations uniquely position us to create this interactive sky map of the literature, which we call the "ADS All-Sky Survey." To create this survey, here are the principal steps we need to follow. First, by analogy to "geotagging," we will "astrotag" the ADS literature. Many "astrotags" effectively already exist, thanks to curation efforts at both CDS and NED. These efforts have created links to "source" positions on the sky associated with each of the millions of articles in the ADS. Our collaboration with ADS and CDS will let us automatically extract astrotags for all existing and future ADS holdings. The new ADS Labs, which our group helps to develop, includes the ability for researchers to filter article search results using a variety of "facets" (e.g. sources, keywords, authors, observatories, etc.). Using only extracted astrotags and facets, we can create functionality like what is described in the Times example above: we can offer a map of the density of positions' "mentions" on the sky, filterable by the properties of those mentions. Using this map, researchers will be able to interactively and visually discover what regions have been studied for what reasons, at what times, and by whom. Second, where
Two Virasoro symmetries in stringy warped AdS3
NASA Astrophysics Data System (ADS)
Compère, Geoffrey; Guica, Monica; Rodriguez, Maria J.
2014-12-01
We study three-dimensional consistent truncations of type IIB supergravity which admit warped AdS3 solutions. These theories contain subsectors that have no bulk dynamics. We show that the symplectic form for these theories, when restricted to the non-dynamical subsectors, equals the symplectic form for pure Einstein gravity in AdS3. Consequently, for each consistent choice of boundary conditions in AdS3, we can define a consistent phase space in warped AdS3 with identical conserved charges. This way, we easily obtain a Virasoro × Virasoro asymptotic symmetry algebra in warped AdS3; two different types of Virasoro × Kac-Moody symmetries are also consistent alternatives.
Variance Function Partially Linear Single-Index Models
LIAN, HENG; LIANG, HUA; CARROLL, RAYMOND J.
2014-01-01
We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function. PMID:25642139
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified in... interest, and (b) Information is promptly made a matter of public record delineating the nature of...
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2011 CFR
2011-07-01
...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2010 CFR
2010-07-01
...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2013 CFR
2013-07-01
...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2012 CFR
2012-07-01
...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2014 CFR
2014-07-01
...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...
A multicomb variance reduction scheme for Monte Carlo semiconductor simulators
Gray, M.G.; Booth, T.E.; Kwan, T.J.T.; Snell, C.M.
1998-04-01
The authors adapt a multicomb variance reduction technique used in neutral particle transport to Monte Carlo microelectronic device modeling. They implement the method in a two-dimensional (2-D) MOSFET device simulator and demonstrate its effectiveness in the study of hot electron effects. The simulations show that the statistical variance of hot electrons is significantly reduced with minimal computational cost. The method is efficient, versatile, and easy to implement in existing device simulators.
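The comb idea can be sketched generically (this is not the authors' 2-D MOSFET implementation): a population of weighted Monte Carlo particles is replaced by m equally weighted particles, with total weight preserved exactly and the expected number of copies of each particle proportional to its weight.

```python
import numpy as np

def comb_resample(weights, m, rng):
    """Single 'comb' pass: replace n weighted particles with m particles
    of equal weight W/m, where W is the total weight. Each comb tooth
    falls in one particle's cumulative-weight interval, so the expected
    number of copies of particle i is m * w_i / W (unbiased)."""
    w = np.asarray(weights, dtype=float)
    W = w.sum()
    cum = np.cumsum(w)
    # Evenly spaced teeth with one shared random offset.
    teeth = (np.arange(m) + rng.random()) / m * W
    idx = np.searchsorted(cum, teeth, side="right")
    return idx, np.full(m, W / m)

rng = np.random.default_rng(0)
idx, new_w = comb_resample([0.1, 2.0, 0.4, 1.5], m=8, rng=rng)
print(np.isclose(new_w.sum(), 4.0))  # True: total weight is conserved
```

A multicomb scheme applies several such combs and averages, which is what reduces the statistical variance of rare-population tallies such as hot-electron counts.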
Milton: A New, Unique Pallasite
NASA Technical Reports Server (NTRS)
Jones, R. H.; Wasson, J. T.; Larson, T.; Sharp, Z. D.
2003-01-01
The Milton pallasite was found in Missouri, U.S.A. in October, 2000. It consists of a single stone that originally weighed approximately 2040 g. The chemistry of the olivine and metal phases, plus the oxygen isotope ratios of the olivines, differ significantly from other pallasites, making Milton unique. Unfortunately, the meteorite is heavily fractured and weathered.
Rufus Choate: A Unique Orator.
ERIC Educational Resources Information Center
Markham, Reed
Rufus Choate, a Massachusetts lawyer and orator, has been described as a "unique and romantic phenomenon" in America's history. Born in 1799 in Essex, Massachusetts, Choate graduated from Dartmouth College and attended Harvard Law School. Choate's goal was to be the top in his profession. Daniel Webster was Choate's hero. Choate became well…
Uniquely identifying wheat plant structures
Technology Transfer Automated Retrieval System (TEKTRAN)
Uniquely naming wheat (Triticum aestivum L. em Thell) plant parts is useful for communicating plant development research and the effects of environmental stresses on normal wheat development. Over the past 30+ years, several naming systems have been proposed for wheat shoot, leaf, spike, spikelet, ...
The evolution and consequences of sex-specific reproductive variance.
Mullon, Charles; Reuter, Max; Lehmann, Laurent
2014-01-01
Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. While previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction.
Variance estimation for systematic designs in spatial surveys.
Fewster, R M
2011-12-01
In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation. PMID:21534940
On variance estimate for covariate adjustment by propensity score analysis.
Zou, Baiming; Zou, Fei; Shuster, Jonathan J; Tighe, Patrick J; Koch, Gary G; Zhou, Haibo
2016-09-10
Propensity score (PS) methods have been used extensively to adjust for confounding factors in the statistical analysis of observational data in comparative effectiveness research. There are four major PS-based adjustment approaches: PS matching, PS stratification, covariate adjustment by PS, and PS-based inverse probability weighting. Though covariate adjustment by PS is one of the most frequently used PS-based methods in clinical research, the conventional variance estimation of the treatment effects estimate under covariate adjustment by PS is biased. As Stampf et al. have shown, this bias in variance estimation is likely to lead to invalid statistical inference and could result in erroneous public health conclusions (e.g., food and drug safety and adverse events surveillance). To address this issue, we propose a two-stage analytic procedure to develop a valid variance estimator for the covariate adjustment by PS analysis strategy. We also carry out a simple empirical bootstrap resampling scheme. Both proposed procedures are implemented in an R function for public use. Extensive simulation results demonstrate the bias in the conventional variance estimator and show that both proposed variance estimators offer valid estimates for the true variance, and they are robust to complex confounding structures. The proposed methods are illustrated for a post-surgery pain study. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26999553
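A minimal sketch of the empirical bootstrap idea on simulated data (this is not the authors' proposed two-stage estimator): the propensity score is re-estimated inside every resample, so the reported variance reflects the PS-estimation step that the conventional variance formula ignores.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                        # confounder
p = 1 / (1 + np.exp(-0.8 * x))                # true propensity
t = rng.binomial(1, p)                        # treatment indicator
y = 2.0 * t + 1.5 * x + rng.normal(size=n)    # outcome; true effect is 2.0

def fit_logistic(x, t, iters=25):
    """Minimal IRLS logistic regression of t on (1, x)."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.zeros(2)
    for _ in range(iters):
        mu = 1 / (1 + np.exp(-X @ b))
        W = mu * (1 - mu)
        b += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (t - mu))
    return X @ b  # estimated PS on the logit scale

def effect_adjusting_for_ps(x, t, y):
    """OLS of y on treatment plus the *estimated* PS (covariate adjustment)."""
    ps = fit_logistic(x, t)
    X = np.column_stack([np.ones_like(x), t, ps])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Empirical bootstrap: re-estimate the PS within each resample.
B = 200
boot = np.empty(B)
for b in range(B):
    i = rng.integers(0, n, n)
    boot[b] = effect_adjusting_for_ps(x[i], t[i], y[i])
se = boot.std(ddof=1)
print(effect_adjusting_for_ps(x, t, y), se)
```

The bootstrap standard error `se` is the quantity the conventional model-based formula understates when the PS is treated as known.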
Variance and covariance estimates for weaning weight of Senepol cattle.
Wright, D W; Johnson, Z B; Brown, C J; Wildeus, S
1991-10-01
Variance and covariance components were estimated for weaning weight from Senepol field data for use in the reduced animal model for a maternally influenced trait. The 4,634 weaning records were used to evaluate 113 sires and 1,406 dams on the island of St. Croix. Estimates of direct additive genetic variance (σ²A), maternal additive genetic variance (σ²M), covariance between direct and maternal additive genetic effects (σAM), permanent maternal environmental variance (σ²PE), and residual variance (σ²ε) were calculated by equating variances estimated from a sire-dam model and a sire-maternal grandsire model, with and without the inverse of the numerator relationship matrix (A⁻¹), to their expectations. Estimates were σ²A, 139.05 and 138.14 kg²; σ²M, 307.04 and 288.90 kg²; σAM, -117.57 and -103.76 kg²; σ²PE, -258.35 and -243.40 kg²; and σ²ε, 588.18 and 577.72 kg² with and without A⁻¹, respectively. Heritability estimates for direct additive (h²A) were .211 and .210 with and without A⁻¹, respectively. Heritability estimates for maternal additive (h²M) were .47 and .44 with and without A⁻¹, respectively. Correlations between direct and maternal (rAM) effects were -.57 and -.52 with and without A⁻¹, respectively. PMID:1778806
Analytic variance estimates of Swank and Fano factors
Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank
2014-07-15
Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
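The two metrics themselves are simple moment ratios, so they can be accumulated on the fly during a simulation. A sketch with simulated Poisson outputs standing in for detector pulse heights (the analytic variance estimators of the paper are not reproduced here):

```python
import numpy as np

def swank_factor(x):
    """Swank factor I = m1^2 / (m0 * m2) from sampled detector outputs;
    with normalized moments m_k = E[x^k] and m0 = 1, I = m1^2 / m2.
    I = 1 only when the output distribution has zero spread."""
    m1, m2 = np.mean(x), np.mean(x**2)
    return m1**2 / m2

def fano_factor(x):
    """Fano factor F = variance / mean of the quantum counts."""
    return np.var(x, ddof=1) / np.mean(x)

rng = np.random.default_rng(2)
counts = rng.poisson(50.0, size=200_000)
print(fano_factor(counts))        # ~1.0 for Poisson statistics
print(swank_factor(counts) < 1.0) # True: any spread gives I < 1
```

Because both metrics reduce to running moments, their uncertainty can likewise be tracked during a Monte Carlo run and used as a stopping criterion, which is the use case the abstract targets.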
Evans, Nick
2016-09-12
Essential facts Leading Change, Adding Value is NHS England's new nursing and midwifery framework. It is designed to build on Compassion in Practice (CiP), which was published 3 years ago and set out the 6Cs: compassion, care, commitment, courage, competence and communication. CiP established the values at the heart of nursing and midwifery, while the new framework sets out how staff can help transform the health and care sectors to meet the aims of the NHS England's Five Year Forward View. PMID:27615573
Vanderheyden, Yoachim; Broeckhoven, Ken; Desmet, Gert
2014-10-17
Different automatic peak integration methods have been reviewed and compared for their ability to accurately determine the variance of the very narrow and very fast eluting peaks encountered when measuring the instrument band broadening of today's low dispersion liquid chromatography instruments. Using fully maximized injection concentrations to work at the highest possible signal-to-noise ratios (SNRs), the best results were obtained with the so-called variance profile analysis method. This is an extension (supplemented with a user-independent read-out algorithm) of a recently proposed method that calculates the peak variance value for any possible value of the peak end time, providing a curve containing all the possible variance values and theoretically levelling off to the (best possible estimate of the) true variance. Despite the use of maximal injection concentrations (leading to SNRs over 10,000), the peak variance errors were of the order of 10-20%, mostly depending on the peak tail characteristics. The accuracy could, however, be significantly increased (to an error level below 0.5-2%) by averaging over 10-15 subsequent measurements, or by first adding the peak profiles of 10-15 subsequent runs and then analyzing this summed peak. There also appears to be an optimal detector intermediate frequency, with the higher frequencies suffering from their poorer signal-to-noise ratio and the lower detector frequencies suffering from a limited number of data points. When the SNR drops below 1000, an accurate determination of the true variance of extra-column peaks of modern instruments no longer seems to be possible.
McGuigan, Katrina; Blows, Mark W
2010-07-01
Genetic covariation among multiple traits will bias the direction of evolution. Although a trait's phenotypic context is crucial for understanding evolutionary constraints, the evolutionary potential of one (focal) trait, rather than the whole phenotype, is often of interest. The extent to which a focal trait can evolve independently depends on how much of the genetic variance in that trait is unique. Here, we present a hypothesis-testing framework for estimating the genetic variance in a focal trait that is independent of variance in other traits. We illustrate our analytical approach using two Drosophila bunnanda trait sets: a contact pheromone system comprised of cuticular hydrocarbons (CHCs), and wing shape, characterized by relative warps of vein position coordinates. Only 9% of the additive genetic variation in CHCs was trait specific, suggesting individual traits are unlikely to evolve independently. In contrast, most (72%) of the additive genetic variance in wing shape was trait specific, suggesting relative warp representations of wing shape could evolve independently. The identification of genetic variance in focal traits that is independent of other traits provides a way of studying the evolvability of individual traits within the broader context of the multivariate phenotype.
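The "trait-specific" variance described here is a conditional variance of the genetic covariance matrix G, i.e., a Schur complement. A sketch with an invented 3-trait G (not the D. bunnanda estimates):

```python
import numpy as np

def unique_genetic_variance(G, focal=0):
    """Additive genetic variance in the focal trait that is independent
    of the other traits: the conditional variance
    G_ff - G_fo @ inv(G_oo) @ G_of (a Schur complement of G)."""
    idx = [i for i in range(G.shape[0]) if i != focal]
    G_ff = G[focal, focal]
    G_fo = G[focal, idx]
    G_oo = G[np.ix_(idx, idx)]
    return G_ff - G_fo @ np.linalg.solve(G_oo, G_fo)

# Illustrative 3-trait genetic covariance matrix (made-up numbers).
G = np.array([[1.0, 0.8, 0.6],
              [0.8, 1.0, 0.5],
              [0.6, 0.5, 1.0]])
uv = unique_genetic_variance(G, focal=0)
print(uv, uv / G[0, 0])  # unique variance and proportion of total
```

A small proportion (as with the CHCs, 9%) means the focal trait is genetically entangled with the rest of the phenotype; a large one (as with wing shape, 72%) means it can respond to selection largely on its own.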
Variances in the etiology of drug use among ethnic groups of adolescents.
Beauvais, Frederick; Oetting, E. R.
2002-01-01
OBJECTIVE: This article reviews drug use trends among ethnic groups of adolescents. It identifies similarities and differences in general, and culturally specific variables in particular, that may account for the differences in drug use rates and the consequences of drug use. METHODS: The authors review trends in drug use among minority and nonminority adolescents over the past 25 years and propose an explanatory model for understanding the factors that affect adolescent drug use. Sources of variance examined include factors common to all adolescents, factors unique to certain ethnic groups, temporal influences, location and demographic variables, developmental and socialization factors, and individual characteristics. RESULTS: Most of the variance in adolescent drug use is due to factors that are common across ethnic groups. CONCLUSION: This finding should not overshadow the importance of addressing ethnocultural issues in designing prevention or treatment interventions, however. Although the major factors leading to drug use may be common across ethnic groups, unique elements within a culture can be used effectively in interventions. Interventions also need to address culturally specific issues in order to gain acceptance within a community. PMID:12435823
Nonlinear realization of local symmetries of AdS space
Clark, T.E.; Love, S.T.; Nitta, Muneto; Veldhuis, T. ter
2005-10-15
Coset methods are used to construct the action describing the dynamics associated with the spontaneous breaking of the local symmetries of AdS_{d+1} space due to the embedding of an AdS_d brane. The resulting action is an SO(2,d) invariant AdS form of the Einstein-Hilbert action, which in addition to the AdS_d gravitational vielbein, also includes a massive vector field localized on the brane. Its long wavelength dynamics is the same as a massive Abelian vector field coupled to gravity in AdS_d space.
Not Available
1985-06-01
Consafe is now using a computer-aided design and drafting system to adapt its multipurpose support vessels (MSVs) to specific user requirements. The vessels are based on the concept of standard container modules adapted into living quarters, workshops, service units, and offices, with each project demanding a unique mix. There is also a need for a constant refurbishment program, as service conditions take their toll on the modules. The computer-aided design system is described.
NASA Astrophysics Data System (ADS)
Borsato, Riccardo; Ohlsson Sax, Olof; Sfondrini, Alessandro; Stefański, Bogdan, Jr.; Torrielli, Alessandro
2013-09-01
We determine the all-loop dressing phases of the AdS3/CFT2 integrable system related to type IIB string theory on AdS3×S3×T4 by solving the recently found crossing relations and studying their singularity structure. The two resulting phases present a novel structure with respect to the ones appearing in AdS5/CFT4 and AdS4/CFT3. In the strongly coupled regime, their leading order reduces to the universal Arutyunov-Frolov-Staudacher phase as expected. We also compute their subleading order and compare it with recent one-loop perturbative results and comment on their weak-coupling expansion.
Bubbling geometries for AdS2×S2
NASA Astrophysics Data System (ADS)
Lunin, Oleg
2015-10-01
We construct BPS geometries describing normalizable excitations of AdS2×S2. All regular horizon-free solutions are parameterized by two harmonic functions in R³ with sources along closed curves. This local structure is reminiscent of the "bubbling solutions" for the other AdSp×Sq cases; however, due to peculiar asymptotic properties of AdS2, one copy of R³ does not cover the entire space, and we discuss the procedure for analytic continuation, which leads to a nontrivial topological structure of the new geometries. We also study supersymmetric brane probes on the new geometries, which represent the AdS2×S2 counterparts of the giant gravitons.
Detecting pulsars with interstellar scintillation in variance images
NASA Astrophysics Data System (ADS)
Dai, S.; Johnston, S.; Bell, M. E.; Coles, W. A.; Hobbs, G.; Ekers, R. D.; Lenc, E.
2016-11-01
Pulsars are the only cosmic radio sources known to be sufficiently compact to show diffractive interstellar scintillations. Images of the variance of radio signals in both time and frequency can be used to detect pulsars in large-scale continuum surveys using the next generation of synthesis radio telescopes. This technique allows a search over the full field of view while avoiding the need for expensive pixel-by-pixel high time resolution searches. We investigate the sensitivity of detecting pulsars in variance images. We show that variance images are most sensitive to pulsars whose scintillation time-scales and bandwidths are close to the subintegration time and channel bandwidth. Therefore, in order to maximize the detection of pulsars for a given radio continuum survey, it is essential to retain a high time and frequency resolution, allowing us to make variance images sensitive to pulsars with different scintillation properties. We demonstrate the technique with Murchison Widefield Array data and show that variance images can indeed lead to the detection of pulsars by distinguishing them from other radio sources.
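The underlying signal can be sketched with simulated dynamic spectra: a fully modulated (scintillating) source inflates the time-frequency variance of its pixel relative to a steady continuum source of the same mean flux. The exponential-gain model below is a crude stand-in for diffractive scintillation, not the paper's detection pipeline:

```python
import numpy as np

rng = np.random.default_rng(3)
n_t, n_f = 64, 128   # subintegrations x frequency channels

def variance_image_pixel(mean_flux, modulated, noise=1.0):
    """Variance of one pixel's dynamic spectrum. Diffractive scintillation
    gives exponentially distributed intensities (100% modulation); a steady
    continuum source only contributes radiometer noise."""
    if modulated:
        gain = rng.exponential(1.0, size=(n_t, n_f))
    else:
        gain = np.ones((n_t, n_f))
    dyn = mean_flux * gain + rng.normal(0.0, noise, size=(n_t, n_f))
    return dyn.var(ddof=1)

v_pulsar = variance_image_pixel(5.0, modulated=True)
v_steady = variance_image_pixel(5.0, modulated=False)
print(v_pulsar > 5 * v_steady)  # True: pulsar stands out in variance
```

If the subintegrations or channels were much longer than the scintillation time-scale or bandwidth, the modulation would average out within each sample and this contrast would shrink, which is the resolution argument the abstract makes.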
Analysis of Variance Components for Genetic Markers with Unphased Genotypes.
Wang, Tao
2016-01-01
An ANOVA type general multi-allele (GMA) model was proposed in Wang (2014) on analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In the one-locus and two-locus cases, we first derive the least square estimates (LSE) of model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA-based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition on the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from the GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions, at least for independent alleles. As a result, the GMA model could be more beneficial than the GLM for detecting allelic interactions.
Increased spatial variance accompanies reorganization of two continental shelf ecosystems.
Litzow, Michael A; Urban, J Daniel; Laurel, Benjamin J
2008-09-01
Phase transitions between alternate stable states in marine ecosystems lead to disruptive changes in ecosystem services, especially fisheries productivity. We used trawl survey data spanning phase transitions in the North Pacific (Gulf of Alaska) and the North Atlantic (Scotian Shelf) to test for increases in ecosystem variability that might provide early warning of such transitions. In both time series, elevated spatial variability in a measure of community composition (ratio of cod [Gadus sp.] abundance to prey abundance) accompanied transitions between ecosystem states, and variability was negatively correlated with distance from the ecosystem transition point. In the Gulf of Alaska, where the phase transition was apparently the result of a sudden perturbation (climate regime shift), variance increased one year before the transition in mean state occurred. On the Scotian Shelf, where ecosystem reorganization was the result of persistent overfishing, a significant increase in variance occurred three years before the transition in mean state was detected. However, we could not reject the alternate explanation that increased variance may also have simply been inherent to the final stable state in that ecosystem. Increased variance has been previously observed around transition points in models, but rarely in real ecosystems, and our results demonstrate the possible management value in tracking the variance of key parameters in exploited ecosystems.
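The early-warning idea, tracking the variance of a key parameter as a system approaches a transition, can be sketched with a toy AR(1) series whose autocorrelation ramps up toward the transition (critical slowing down); the model is illustrative, not the trawl-survey analysis:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 3000
# Autocorrelation rises toward 1 as the hypothetical transition nears.
phi = np.linspace(0.2, 0.97, T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi[t] * x[t - 1] + rng.normal()

def rolling_variance(x, window):
    """Variance in a sliding window, the 'tracked' indicator."""
    return np.array([x[i - window:i].var(ddof=1)
                     for i in range(window, len(x) + 1)])

rv = rolling_variance(x, window=200)
print(rv[:500].mean(), rv[-500:].mean())  # late variance greatly inflated
```

In a management setting the analogue of `x` would be a community-composition index such as the cod-to-prey ratio, monitored over successive surveys.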
Bonnier, Catherine; Bender, Timothy P
2015-01-01
We are reporting the unexpected reaction between bromo-boron subphthalocyanine (Br-BsubPc) and THF, 1,4-dioxane or γ-butyrolactone that results in the ring opening of the solvent and its addition into the BsubPc moiety. Under heating, the endocyclic C-O bond of the solvent is cleaved and the corresponding bromoalkoxy-BsubPc derivative is obtained. These novel alkoxy-BsubPc derivatives have remaining alkyl-bromides suitable for further functionalization. The alkoxy-BsubPcs maintain the characteristic strongly absorption in visible spectrum and their fluorescence quantum yields.
[Value-Added--Adding Economic Value in the Food Industry].
ERIC Educational Resources Information Center
Welch, Mary A., Ed.
1989-01-01
This booklet focuses on the economic concept of "value added" to goods and services. A student activity worksheet illustrates how the steps involved in processing food are examples of the concept of value added. The booklet further links food processing to the idea of value added to the Gross National Product (GNP). Discussion questions, a student…
Impact of Damping Uncertainty on SEA Model Response Variance
NASA Technical Reports Server (NTRS)
Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand
2010-01-01
Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However, these techniques do not account for uncertainties in the system properties. In the present paper, uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.
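The combination of ensemble and damping variance can be illustrated with a toy Monte Carlo around the basic SEA power balance E = P_in/(ωη); the distributions and numbers below are illustrative, not the test-article values:

```python
import numpy as np

rng = np.random.default_rng(5)
P_in, omega, n = 1.0, 2 * np.pi * 1000.0, 200_000

# SEA subsystem energy with lognormal scatter standing in for the
# random-subsystem (ensemble) variability about the mean response.
def subsystem_energy(eta, n):
    return (P_in / (omega * eta)) * rng.lognormal(0.0, 0.3, size=n)

# Certain damping loss factor: the variance is ensemble variance only.
E_fixed = subsystem_energy(0.01, n)

# Uncertain damping: eta drawn from measured bounds per realization,
# so damping uncertainty is propagated on top of the ensemble scatter.
etas = rng.uniform(0.005, 0.02, size=n)
E_uncertain = (P_in / (omega * etas)) * rng.lognormal(0.0, 0.3, size=n)

print(E_uncertain.var() > E_fixed.var())  # True: prediction bounds widen
```

This is just the law of total variance in action: the damping-induced spread of the mean response adds to the ensemble variance about each mean.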
Monte Carlo variance reduction approaches for non-Boltzmann tallies
Booth, T.E.
1992-12-01
Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed.
The positioning algorithm based on feature variance of billet character
NASA Astrophysics Data System (ADS)
Yi, Jiansong; Hong, Hanyu; Shi, Yu; Chen, Hongyang
2015-12-01
In the process of steel billet recognition on the production line, the key problem is how to determine the position of the billet in complex scenes. To solve this problem, this paper presents a positioning algorithm based on the feature variance of billet characters. Using the largest intra-cluster variance recursive method based on multilevel filtering, the billet characters are segmented completely from the complex scenes. There are three rows of characters on each steel billet, so we can determine whether the connected regions that satisfy the feature-variance condition lie on a straight line, and thereby accurately locate the steel billet. The experimental results demonstrate that the proposed method is competitive with other methods in positioning the characters and also reduces the running time. The algorithm can provide a better basis for character recognition.
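The abstract's "largest intra-cluster variance recursive method" is not fully specified here; a standard Otsu-style threshold, which maximizes the between-class variance of the gray-level histogram (equivalently, minimizes the within-class variance), is a close and well-known analogue for the segmentation step:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu-style threshold: pick the gray level maximizing the
    between-class variance w0*w1*(mu0 - mu1)^2, which is equivalent
    to minimizing the total within-class (intra-cluster) variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_sep = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        sep = w0 * w1 * (mu0 - mu1) ** 2
        if sep > best_sep:
            best_t, best_sep = t, sep
    return best_t

# Synthetic "billet" image: dark background (~40), bright characters (~200).
rng = np.random.default_rng(6)
img = rng.normal(40, 10, size=(64, 64))
img[20:44, 20:44] = rng.normal(200, 10, size=(24, 24))
img = np.clip(img, 0, 255).astype(np.uint8)
t = otsu_threshold(img)
print(40 < t < 200)  # True: threshold falls between the two modes
```

A recursive or multilevel variant would reapply such a criterion within segmented regions after each filtering stage.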
Saturation of number variance in embedded random-matrix ensembles.
Prakash, Ravi; Pandey, Akhilesh
2016-05-01
We study fluctuation properties of embedded random matrix ensembles of noninteracting particles. For ensemble of two noninteracting particle systems, we find that unlike the spectra of classical random matrices, correlation functions are nonstationary. In the locally stationary region of spectra, we study the number variance and the spacing distributions. The spacing distributions follow the Poisson statistics, which is a key behavior of uncorrelated spectra. The number variance varies linearly as in the Poisson case for short correlation lengths but a kind of regularization occurs for large correlation lengths, and the number variance approaches saturation values. These results are known in the study of integrable systems but are being demonstrated for the first time in random matrix theory. We conjecture that the interacting particle cases, which exhibit the characteristics of classical random matrices for short correlation lengths, will also show saturation effects for large correlation lengths. PMID:27300898
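The Poisson baseline the abstract refers to, number variance growing linearly as Σ²(L) ≈ L, is easy to reproduce for an uncorrelated spectrum (a sketch of the statistic only, not of the embedded-ensemble calculation):

```python
import numpy as np

rng = np.random.default_rng(7)

# Unfolded Poisson spectrum: unit-mean-spacing levels from iid exponentials.
levels = np.cumsum(rng.exponential(1.0, size=2_000_000))

def number_variance(levels, L, n_windows=20_000):
    """Sigma^2(L): variance of the number of levels in windows of length L."""
    starts = rng.uniform(levels[0], levels[-1] - L, size=n_windows)
    lo = np.searchsorted(levels, starts)
    hi = np.searchsorted(levels, starts + L)
    return np.var(hi - lo)

for L in (1.0, 5.0, 20.0):
    print(L, number_variance(levels, L))  # ~L for Poisson statistics
```

The saturation reported in the abstract would appear as Σ²(L) bending away from this linear growth and flattening at large L, a behavior this uncorrelated spectrum does not show.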
Heterogeneity of variances for carcass traits by percentage Brahman inheritance.
Crews, D H; Franke, D E
1998-07-01
Heterogeneity of carcass trait variances due to level of Brahman inheritance was investigated using records from straightbred and crossbred steers produced from 1970 to 1988 (n = 1,530). Angus, Brahman, Charolais, and Hereford sires were mated to straightbred and crossbred cows to produce straightbred, F1, back-cross, three-breed cross, and two-, three-, and four-breed rotational crossbred steers in four non-overlapping generations. At weaning (mean age = 220 d), steers were randomly assigned within breed group directly to the feedlot for 200 d, or to a backgrounding and stocker phase before feeding. Stocker steers were fed from 70 to 100 d in generations 1 and 2 and from 60 to 120 d in generations 3 and 4. Carcass traits included hot carcass weight, subcutaneous fat thickness and longissimus muscle area at the 12-13th rib interface, carcass weight-adjusted longissimus muscle area, USDA yield grade, estimated total lean yield, marbling score, and Warner-Bratzler shear force. Steers were classified as either high Brahman (50 to 100% Brahman), moderate Brahman (25 to 49% Brahman), or low Brahman (0 to 24% Brahman) inheritance. Two types of animal models were fit with regard to level of Brahman inheritance. One model assumed similar variances between pairs of Brahman inheritance groups, and the second model assumed different variances between pairs of Brahman inheritance groups. Fixed sources of variation in both models included direct and maternal additive and nonadditive breed effects, year of birth, and slaughter age. Variances were estimated using derivative free REML procedures. Likelihood ratio tests were used to compare models. The model accounting for heterogeneous variances had a greater likelihood (P < .001) than the model assuming homogeneous variances for hot carcass weight, longissimus muscle area, weight-adjusted longissimus muscle area, total lean yield, and Warner-Bratzler shear force, indicating improved fit with percentage Brahman inheritance
The dynamic Allan Variance IV: characterization of atomic clock anomalies.
Galleani, Lorenzo; Tavella, Patrizia
2015-05-01
The number of applications where precise clocks play a key role is steadily increasing, satellite navigation being the main example. Precise clock anomalies are hence critical events, and their characterization is a fundamental problem. When an anomaly occurs, the clock stability changes with time, and this variation can be characterized with the dynamic Allan variance (DAVAR). We obtain the DAVAR for a series of common clock anomalies, namely, a sinusoidal term, a phase jump, a frequency jump, and a sudden change in the clock noise variance. These anomalies are particularly common in space clocks. Our analytic results clarify how the clock stability changes during these anomalies. PMID:25965674
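The classical Allan variance underlying the DAVAR can be sketched in a few lines; the dynamic version essentially repeats this computation over a sliding window of the phase data. A minimal overlapping-estimator sketch, assuming phase samples taken at a fixed interval `tau0` (the synthetic white-frequency-noise data below is an assumption for illustration):

```python
import numpy as np

def allan_variance(phase, tau0, m):
    """Overlapping Allan variance at averaging time m*tau0 from phase data."""
    x = np.asarray(phase, dtype=float)
    n = len(x) - 2 * m
    if n < 1:
        raise ValueError("series too short for this averaging factor")
    # second differences of the phase at lag m
    d2 = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]
    return np.sum(d2 ** 2) / (2.0 * n * (m * tau0) ** 2)

# white frequency noise: the Allan variance should fall off as 1/tau
rng = np.random.default_rng(0)
freq = rng.normal(0.0, 1.0, 100_000)          # fractional frequency samples
phase = np.concatenate([[0.0], np.cumsum(freq)])  # tau0 = 1 s
a1 = allan_variance(phase, 1.0, 1)
a10 = allan_variance(phase, 1.0, 10)
print(a1, a10)  # a10 is roughly a1 / 10 for white FM noise
```

An anomaly such as a frequency jump would show up in the DAVAR as a change in these values as the analysis window slides past the jump epoch.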
Entropy, Fisher Information and Variance with Frost-Musulin Potential
NASA Astrophysics Data System (ADS)
Idiodi, J. O. A.; Onate, C. A.
2016-09-01
This study presents the Shannon and Renyi information entropies in both position and momentum space, and the Fisher information, for the position-dependent mass Schrödinger equation with the Frost-Musulin potential. The analysis of the quantum mechanical probability has been obtained via the Fisher information. The variance information of this potential is also computed; these quantities control both the chemical and physical properties of some molecular systems. We observe the behaviour of the Shannon entropy, Renyi entropy, Fisher information, and variance with the quantum number n.
Action growth for AdS black holes
NASA Astrophysics Data System (ADS)
Cai, Rong-Gen; Ruan, Shan-Ming; Wang, Shao-Jiang; Yang, Run-Qiu; Peng, Rong-Hui
2016-09-01
Recently a Complexity-Action (CA) duality conjecture has been proposed, which relates the quantum complexity of a holographic boundary state to the action of a Wheeler-DeWitt (WDW) patch in the anti-de Sitter (AdS) bulk. In this paper we further investigate the duality conjecture for stationary AdS black holes and derive some exact results for the growth rate of the action within the WDW patch in the late-time approximation, which is conjectured to be dual to the growth rate of the quantum complexity of the holographic state. Based on results for the general D-dimensional Reissner-Nordström (RN)-AdS black hole, the rotating/charged Bañados-Teitelboim-Zanelli (BTZ) black hole, the Kerr-AdS black hole, and the charged Gauss-Bonnet-AdS black hole, we present a universal formula for the action growth expressed in terms of thermodynamical quantities associated with the outer and inner horizons of the AdS black holes. These results leave unchanged the conjecture that the stationary AdS black hole in Einstein gravity is the fastest computer in nature.
Superstring theory in AdS(3) and plane waves
NASA Astrophysics Data System (ADS)
Son, John Sang Won
This thesis is devoted to the study of string theory in AdS3 and its applications to recent developments in string theory. The difficulties associated with formulating a consistent string theory in AdS3 and its underlying SL(2, R) WZW model are explained. We describe how these difficulties can be overcome by assuming that the SL(2, R) WZW model contains spectral flow symmetry. The existence of spectral flow symmetry in the fully quantum treatment is proved by a calculation of the one-loop string partition function. We consider Euclidean AdS3 with the time direction periodically identified, and compute the torus partition function in this background. The string spectrum can be reproduced by viewing the one-loop calculation as the free energy of a gas of strings, thus providing a rigorous proof of the results based on spectral flow arguments. Next, we turn to spacetimes that are quotients of AdS3, which include the BTZ black hole and conical spaces. Strings propagating in the conical space are described by taking an orbifold of strings in AdS3. We show that the twisted states of these orbifolds can be obtained by fractional spectral flow. We show that the shift in the ground state energy usually associated with orbifold twists is absent in this case, and offer a unified framework in which to view spectral flow. Lastly, we consider the RNS superstrings in AdS3 x S3 x M, where M may be K3 or T4, based on supersymmetric extensions of SL(2, R) and SU(2) WZW models. We construct the physical states and calculate the spectrum. A subsector of this theory describes strings propagating in the six-dimensional plane wave obtained by the Penrose limit of AdS3 x S3 x M. We reproduce the plane wave spectrum by taking J and the radius to infinity. We show that the plane wave spectrum actually coincides with the large J spectrum at fixed radius, i.e. in AdS3 x S3. Relation to some recent topics of interest such as the Frolov-Tseytlin string and strings with critical tension
ERIC Educational Resources Information Center
Starns, Jeffrey J.; Rotello, Caren M.; Hautus, Michael J.
2014-01-01
We tested the dual process and unequal variance signal detection models by jointly modeling recognition and source confidence ratings. The 2 approaches make unique predictions for the slope of the recognition memory zROC function for items with correct versus incorrect source decisions. The standard bivariate Gaussian version of the unequal…
The unique biochemistry of methanogenesis.
Deppenmeier, Uwe
2002-01-01
Methanogenic archaea have an unusual type of metabolism because they use H2 + CO2, formate, methylated C1 compounds, or acetate as energy and carbon sources for growth. The methanogens produce methane as the major end product of their metabolism in a unique energy-generating process. The organisms received much attention because they catalyze the terminal step in the anaerobic breakdown of organic matter under sulfate-limiting conditions and are essential for both the recycling of carbon compounds and the maintenance of the global carbon flux on Earth. Furthermore, methane is an important greenhouse gas that directly contributes to climate change and global warming. Hence, the understanding of the biochemical processes leading to methane formation is of major interest. This review focuses on the metabolic pathways of methanogenesis, which are rather unique and involve a number of unusual enzymes and coenzymes. It will be shown how the previously mentioned substrates are converted to CH4 via the CO2-reducing, methylotrophic, or aceticlastic pathway. All catabolic processes finally lead to the formation of a mixed disulfide from coenzyme M and coenzyme B that functions as an electron acceptor of certain anaerobic respiratory chains. Molecular hydrogen, reduced coenzyme F420, or reduced ferredoxin are used as electron donors. The redox reactions as catalyzed by the membrane-bound electron transport chains are coupled to proton translocation across the cytoplasmic membrane. The resulting electrochemical proton gradient is the driving force for ATP synthesis as catalyzed by an A1A0-type ATP synthase. Other energy-transducing enzymes involved in methanogenesis are the membrane-integral methyltransferase and the formylmethanofuran dehydrogenase complex. The former enzyme is a unique, reversible sodium ion pump that couples methyl-group transfer with the transport of Na+ across the membrane. The formylmethanofuran dehydrogenase is a reversible ion pump that catalyzes
40 CFR 124.64 - Appeals of variances.
Code of Federal Regulations, 2014 CFR
2014-07-01
... anticipated to pose an unacceptable risk to human health or the environment because of bioaccumulation... 40 Protection of Environment 22 2014-07-01 2013-07-01 true Appeals of variances. 124.64 Section 124.64 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS...
40 CFR 124.64 - Appeals of variances.
Code of Federal Regulations, 2013 CFR
2013-07-01
... anticipated to pose an unacceptable risk to human health or the environment because of bioaccumulation... 40 Protection of Environment 23 2013-07-01 2013-07-01 false Appeals of variances. 124.64 Section 124.64 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS...
40 CFR 124.64 - Appeals of variances.
Code of Federal Regulations, 2012 CFR
2012-07-01
... anticipated to pose an unacceptable risk to human health or the environment because of bioaccumulation... 40 Protection of Environment 23 2012-07-01 2012-07-01 false Appeals of variances. 124.64 Section 124.64 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS...
The Variance of Intraclass Correlations in Three and Four Level
ERIC Educational Resources Information Center
Hedges, Larry V.; Hedberg, Eric C.; Kuyper, Arend M.
2012-01-01
Intraclass correlations are used to summarize the variance decomposition in populations with multilevel hierarchical structure. There has recently been considerable interest in estimating intraclass correlations from surveys or designed experiments to provide design parameters for planning future large-scale randomized experiments. The large…
40 CFR 124.62 - Decision on variances.
Code of Federal Regulations, 2010 CFR
2010-07-01
... section 301(i) based on delay in completion of a publicly owned treatment works; (2) After consultation... technology; or (3) Variances under CWA section 316(a) for thermal pollution. (b) The State Director may deny... 124.62 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS...
44 CFR 60.6 - Variances and exceptions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... repair or rehabilitation of historic structures upon a determination that the proposed repair or rehabilitation will not preclude the structure's continued designation as a historic structure and the variance is the minimum necessary to preserve the historic character and design of the structure....
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions. (a... or pathogenic contamination, a treatment lapse or deficiency, or a problem in the operation...
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions. (a... or pathogenic contamination, a treatment lapse or deficiency, or a problem in the operation...
Variance-based uncertainty relations for incompatible observables
NASA Astrophysics Data System (ADS)
Chen, Bin; Cao, Ning-Ping; Fei, Shao-Ming; Long, Gui-Lu
2016-09-01
We formulate uncertainty relations for arbitrary finite number of incompatible observables. Based on the sum of variances of the observables, both Heisenberg-type and Schrödinger-type uncertainty relations are provided. These new lower bounds are stronger in most of the cases than the ones derived from some existing inequalities. Detailed examples are presented.
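Sum-of-variances bounds of this kind are easy to probe numerically. The sketch below is an illustration rather than the paper's bounds: it checks the well-known qubit identity that the variances of the three Pauli observables in any pure state sum to exactly 2, a saturated example of a variance-sum lower bound for incompatible observables:

```python
import numpy as np

# Pauli observables
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def variance(op, psi):
    """Variance <A^2> - <A>^2 of observable op in the pure state psi."""
    mean = np.vdot(psi, op @ psi).real
    mean_sq = np.vdot(psi, op @ (op @ psi)).real
    return mean_sq - mean ** 2

rng = np.random.default_rng(1)
for _ in range(5):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi = v / np.linalg.norm(v)           # random normalized qubit state
    total = sum(variance(s, psi) for s in (sx, sy, sz))
    print(round(total, 12))               # 2.0 (up to rounding) every time
```

The identity follows from each Pauli squaring to the identity and the Bloch vector of a pure state having unit length, so the sum 3 - |r|^2 equals 2.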
Explaining Common Variance Shared by Early Numeracy and Literacy
ERIC Educational Resources Information Center
Davidse, N. J.; De Jong, M. T.; Bus, A. G.
2014-01-01
How can it be explained that early literacy and numeracy share variance? We specifically tested whether the correlation between four early literacy skills (rhyming, letter knowledge, emergent writing, and orthographic knowledge) and simple sums (non-symbolic and story condition) reduced after taking into account preschool attention control,…
Dominance, Information, and Hierarchical Scaling of Variance Space.
ERIC Educational Resources Information Center
Ceurvorst, Robert W.; Krus, David J.
1979-01-01
A method for computation of dominance relations and for construction of their corresponding hierarchical structures is presented. The link between dominance and variance allows integration of the mathematical theory of information with least squares statistical procedures without recourse to logarithmic transformations of the data. (Author/CTM)
Variances of Plane Parameters Fitted to Range Data.
Franaszek, Marek
2010-01-01
Formulas for variances of plane parameters fitted with Nonlinear Least Squares to point clouds acquired by 3D imaging systems (e.g., LADAR) are derived. Two different error objective functions used in minimization are discussed: the orthogonal and the directional functions. Comparisons of corresponding formulas suggest the two functions can yield different results when applied to the same dataset.
Comparison of Turbulent Thermal Diffusivity and Scalar Variance Models
NASA Technical Reports Server (NTRS)
Yoder, Dennis A.
2016-01-01
In this study, several variable turbulent Prandtl number formulations are examined for boundary layers, pipe flow, and axisymmetric jets. The model formulations include simple algebraic relations between the thermal diffusivity and turbulent viscosity as well as more complex models that solve transport equations for the thermal variance and its dissipation rate. Results are compared with available data for wall heat transfer and profile measurements of mean temperature, the root-mean-square (RMS) fluctuating temperature, turbulent heat flux and turbulent Prandtl number. For wall-bounded problems, the algebraic models are found to best predict the rise in turbulent Prandtl number near the wall as well as the log-layer temperature profile, while the thermal variance models provide a good representation of the RMS temperature fluctuations. In jet flows, the algebraic models provide no benefit over a constant turbulent Prandtl number approach. Application of the thermal variance models finds that some significantly overpredict the temperature variance in the plume and most underpredict the thermal growth rate of the jet. The models yield very similar fluctuating temperature intensities in jets from straight pipes and smooth contraction nozzles, in contrast to data that indicate the latter should have noticeably higher values. For the particular low subsonic heated jet cases examined, changes in the turbulent Prandtl number had no effect on the centerline velocity decay.
21 CFR 821.2 - Exemptions and variances.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Exemptions and variances. 821.2 Section 821.2 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL... unnecessary; (3) A complete description of alternative steps that are available, or that the petitioner...
21 CFR 898.14 - Exemptions and variances.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Exemptions and variances. 898.14 Section 898.14 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... performance standard is unnecessary or unfeasible; (3) A complete description of alternative steps that...
Intuitive Analysis of Variance-- A Formative Assessment Approach
ERIC Educational Resources Information Center
Trumpower, David
2013-01-01
This article describes an assessment activity that can show students how much they intuitively understand about statistics, but also alert them to common misunderstandings. How the activity can be used formatively to help improve students' conceptual understanding of analysis of variance is discussed. (Contains 1 figure and 1 table.)
GalaxyCount: Galaxy counts and variance calculator
NASA Astrophysics Data System (ADS)
Bland-Hawthorn, Joss; Ellis, Simon
2013-12-01
GalaxyCount calculates the number and standard deviation of galaxies in a magnitude limited observation of a given area. The methods to calculate both the number and standard deviation may be selected from different options. Variances may be computed for circular, elliptical and rectangular window functions.
Analysis of Variance: What Is Your Statistical Software Actually Doing?
ERIC Educational Resources Information Center
Li, Jian; Lomax, Richard G.
2011-01-01
Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs, mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…
A Visual Model for the Variance and Standard Deviation
ERIC Educational Resources Information Center
Orris, J. B.
2011-01-01
This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
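The squared-deviation-as-square picture maps directly onto the textbook computation: the variance is the area of the "average square" and the standard deviation is its side length. A tiny worked example (the data values are an assumption for illustration):

```python
import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
deviations = data - data.mean()   # signed distances from the mean (5.0)
squares = deviations ** 2         # area of each deviation "square"
variance = squares.mean()         # area of the average square
std = np.sqrt(variance)           # side length of the average square
print(variance, std)              # 4.0 2.0
```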
Variance in Math Achievement Attributable to Visual Cognitive Constructs
ERIC Educational Resources Information Center
Oehlert, Jeremy J.
2012-01-01
Previous research has reported positive correlations between math achievement and the cognitive constructs of spatial visualization, working memory, and general intelligence; however, no single study has assessed variance in math achievement attributable to all three constructs, examined in combination. The current study fills this gap in the…
Caution on the Use of Variance Ratios: A Comment.
ERIC Educational Resources Information Center
Shaffer, Juliet Popper
1992-01-01
Several metanalytic studies of group variability use variance ratios as measures of effect size. Problems with this approach are discussed, including limitations of using means and medians of ratios. Mean logarithms and the geometric mean are not adversely affected by the arbitrary choice of numerator. (SLD)
Some Computer Programs for Selected Problems in Analysis of Variance.
ERIC Educational Resources Information Center
Edwards, Lynne K.; Bland, Patricia C.
Selected examples using the statistical packages Statistical Package for the Social Sciences (SPSS), the Statistical Analysis System (SAS), and BMDP are presented to facilitate their use and encourage appropriate uses in: (1) a hierarchical design; (2) a confounded factorial design; and (3) variance component estimation procedures. To illustrate…
Module organization and variance in protein-protein interaction networks
NASA Astrophysics Data System (ADS)
Lin, Chun-Yu; Lee, Tsai-Ling; Chiu, Yi-Yuan; Lin, Yi-Wei; Lo, Yu-Shu; Lin, Chih-Ta; Yang, Jinn-Moon
2015-03-01
A module is a group of closely related proteins that act in concert to perform specific biological functions through protein-protein interactions (PPIs) that occur in time and space. However, the underlying module organization and variance remain unclear. In this study, we collected module templates to infer the respective module families, including 58,041 homologous modules in 1,678 species, and PPI families using searches of complete genomic databases. We then derived PPI evolution scores and interface evolution scores to describe the module elements, including core and ring components. Functions of core components were highly correlated with those of essential genes. In comparison with ring components, core proteins/PPIs were conserved across multiple species. Subsequently, protein/module variance of PPI networks confirmed that core components form dynamic network hubs and play key roles in various biological functions. Based on the analyses of gene essentiality, module variance, and gene co-expression, we summarize the observations of module organization and variance as follows: 1) a module consists of core and ring components; 2) core components perform major biological functions and collaborate with ring components to execute certain functions in some cases; 3) core components are more conserved and essential during organizational changes in different biological states or conditions.
40 CFR 52.1390 - Missoula variance provision.
Code of Federal Regulations, 2012 CFR
2012-07-01
... by the Montana Board of Health and Environmental Sciences on June 28, 1991 and submitted by the... 40 Protection of Environment 4 2012-07-01 2012-07-01 false Missoula variance provision. 52.1390 Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR...
40 CFR 52.1390 - Missoula variance provision.
Code of Federal Regulations, 2013 CFR
2013-07-01
... by the Montana Board of Health and Environmental Sciences on June 28, 1991 and submitted by the... 40 Protection of Environment 4 2013-07-01 2013-07-01 false Missoula variance provision. 52.1390 Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR...
40 CFR 52.1390 - Missoula variance provision.
Code of Federal Regulations, 2010 CFR
2010-07-01
... by the Montana Board of Health and Environmental Sciences on June 28, 1991 and submitted by the... 40 Protection of Environment 4 2010-07-01 2010-07-01 false Missoula variance provision. 52.1390 Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR...
40 CFR 52.1390 - Missoula variance provision.
Code of Federal Regulations, 2014 CFR
2014-07-01
... by the Montana Board of Health and Environmental Sciences on June 28, 1991 and submitted by the... 40 Protection of Environment 4 2014-07-01 2014-07-01 false Missoula variance provision. 52.1390 Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR...
40 CFR 52.1390 - Missoula variance provision.
Code of Federal Regulations, 2011 CFR
2011-07-01
... by the Montana Board of Health and Environmental Sciences on June 28, 1991 and submitted by the... 40 Protection of Environment 4 2011-07-01 2011-07-01 false Missoula variance provision. 52.1390 Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR...
40 CFR 142.43 - Disposition of a variance request.
Code of Federal Regulations, 2010 CFR
2010-07-01
... specified in § 142.40(a) such notice shall provide that the variance will be terminated when the system... Administrator that the system has failed to comply with any requirements of a final schedule issued pursuant to... health of persons or upon a finding that the public water system has failed to comply with monitoring...
40 CFR 142.43 - Disposition of a variance request.
Code of Federal Regulations, 2011 CFR
2011-07-01
... specified in § 142.40(a) such notice shall provide that the variance will be terminated when the system... Administrator that the system has failed to comply with any requirements of a final schedule issued pursuant to... health of persons or upon a finding that the public water system has failed to comply with monitoring...
Unbiased Estimates of Variance Components with Bootstrap Procedures
ERIC Educational Resources Information Center
Brennan, Robert L.
2007-01-01
This article provides general procedures for obtaining unbiased estimates of variance components for any random-model balanced design under any bootstrap sampling plan, with the focus on designs of the type typically used in generalizability theory. The results reported here are particularly helpful when the bootstrap is used to estimate standard…
Genetic Variance in the SES-IQ Correlation.
ERIC Educational Resources Information Center
Eckland, Bruce K.
1979-01-01
Discusses questions dealing with genetic aspects of the correlation between IQ and socioeconomic status (SES). Questions include: How does assortative mating affect the genetic variance of IQ? Is the relationship between an individual's IQ and adult SES a causal one? And how can IQ research improve schools and schooling? (Author/DB)
Variance Estimation of Imputed Survey Data. Working Paper Series.
ERIC Educational Resources Information Center
Zhang, Fan; Brick, Mike; Kaufman, Steven; Walter, Elizabeth
Missing data is a common problem in virtually all surveys. This study focuses on variance estimation and its consequences for analysis of survey data from the National Center for Education Statistics (NCES). Methods suggested by C. E. Särndal (1992), S. Kaufman (1996), and J. Shao and R. Sitter (1996) are reviewed in detail. In section 3, the…
Exploratory Multivariate Analysis of Variance: Contrasts and Variables.
ERIC Educational Resources Information Center
Barcikowski, Robert S.; Elliott, Ronald S.
The contribution of individual variables to overall multivariate significance in a multivariate analysis of variance (MANOVA) is investigated using a combination of canonical discriminant analysis and Roy-Bose simultaneous confidence intervals. Difficulties with this procedure are discussed, and its advantages are illustrated using examples based…
[ECoG classification based on wavelet variance].
Yan, Shiyu; Liu, Chong; Wang, Hong; Zhao, Haibin
2013-06-01
For a typical electrocorticogram (ECoG)-based brain-computer interface (BCI) system in which the subject's task is to imagine movements of either the left small finger or the tongue, we proposed a feature extraction algorithm using wavelet variance. First, the definition and significance of wavelet variance are introduced, based on a discussion of the wavelet transform, and taken as the feature. Six channels with the most distinctive features were selected from the 64 channels for analysis. The EEG data were then decomposed using the db4 wavelet. The variances of the wavelet coefficients containing the Mu rhythm and Beta rhythm were taken as features, based on the ERD/ERS phenomenon. The features were classified linearly with a cross-validation algorithm. The results of off-line analysis showed that high classification accuracies of 90.24% and 93.77% were achieved for the training and test data sets; the wavelet variance is simple and effective and is well suited to feature extraction in BCI research. PMID:23865300
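The feature-extraction idea, variance of the detail coefficients at each decomposition level, can be sketched without ECoG data. The toy below substitutes a hand-rolled Haar transform for the db4 wavelet used in the paper (real work would use a wavelet library such as PyWavelets); the synthetic signals are assumptions. It shows that a fast oscillation concentrates variance in the fine-scale details while a slow one does not:

```python
import numpy as np

def haar_dwt_level(x):
    """One level of the Haar DWT: (approximation, detail) coefficients."""
    x = x[: len(x) // 2 * 2]                  # drop a trailing odd sample
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def wavelet_variances(x, levels):
    """Variance of the detail coefficients at each decomposition level."""
    feats = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt_level(a)
        feats.append(float(np.var(d)))
    return feats

# two synthetic "channels": a fast oscillation vs. a slow one
t = np.arange(1024)
fast = np.sin(2 * np.pi * t / 4)    # energy in fine-scale details
slow = np.sin(2 * np.pi * t / 64)   # energy in coarse-scale details
print(wavelet_variances(fast, 3))
print(wavelet_variances(slow, 3))
```

A linear classifier applied to such per-level variance vectors is the pattern the abstract describes.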
Unique features of space reactors
Buden, D.
1990-01-01
Space reactors are designed to meet a unique set of requirements: they must be sufficiently compact to be launched on a rocket to their operational location, operate for many years without maintenance and servicing, operate in extreme environments, and reject heat by radiation to space. To meet these restrictions, operating temperatures are much greater than in terrestrial power plants, and the reactors tend to have a fast neutron spectrum. Currently, a new generation of space reactor power plants is being developed. The major effort is the SP-100 program, in which the power plant is being designed for seven years of full-power operation without maintenance, at a reactor outlet temperature of 1350 K. 8 refs., 3 figs., 1 tab.
The Probabilities of Unique Events
Khemlani, Sangeet S.; Lotstein, Max; Johnson-Laird, Phil
2012-01-01
Many theorists argue that the probabilities of unique events, even real possibilities such as President Obama's re-election, are meaningless. As a consequence, psychologists have seldom investigated them. We propose a new theory (implemented in a computer program) in which such estimates depend on an intuitive non-numerical system capable only of simple procedures, and a deliberative system that maps intuitions into numbers. The theory predicts that estimates of the probabilities of conjunctions should often tend to split the difference between the probabilities of the two conjuncts. We report two experiments showing that individuals commit such violations of the probability calculus, and corroborating other predictions of the theory, e.g., individuals err in the same way even when they make non-numerical verbal estimates, such as that an event is highly improbable. PMID:23056224
Split liver transplantation: What's unique?
Dalal, Aparna R
2015-09-24
The intraoperative management of split liver transplantation (SLT) has some unique features compared to routine whole-liver transplantation. The liver is unique in its ability to regenerate, and splitting it confers survival and quality-of-life benefits on two recipients instead of one. Primary graft dysfunction may result from small-for-size syndrome. The graft weight to recipient body weight ratio is significant for both trisegmental and hemiliver grafts. Intraoperative surgical techniques aim to reduce portal hyperperfusion and decrease venous portal pressure. Ischemic preconditioning can be instituted to protect against ischemic reperfusion injury, which impacts graft regeneration. Advancement of the technique of SLT is essential, as the use of split cadaveric grafts expands the donor pool and the technique potentially has an excellent future. PMID:26421261
Variance in the reproductive success of dominant male mountain gorillas.
Robbins, Andrew M; Gray, Maryke; Uwingeli, Prosper; Mburanumwe, Innocent; Kagoda, Edwin; Robbins, Martha M
2014-10-01
Using 30 years of demographic data from 15 groups, this study estimates how harem size, female fertility, and offspring survival may contribute to variance in the siring rates of dominant male mountain gorillas throughout the Virunga Volcano Region. As predicted for polygynous species, differences in harem size were the greatest source of variance in the siring rate, whereas differences in female fertility and offspring survival were relatively minor. Harem size was positively correlated with offspring survival, even after removing all known and suspected cases of infanticide, so the correlation does not seem to reflect differences in the ability of males to protect their offspring. Harem size was not significantly correlated with female fertility, which is consistent with the hypothesis that mountain gorillas have minimal feeding competition. Harem size, offspring survival, and siring rates were not significantly correlated with the proportion of dominant tenures that occurred in multimale groups versus one-male groups, even though infanticide is less likely when those tenures end in multimale groups than in one-male groups. In contrast with the relatively small contribution of offspring survival to variance in the siring rates of this study, offspring survival is a major source of variance in the male reproductive success of western gorillas, which have greater predation risks and significantly higher rates of infanticide. If differences in offspring protection are less important among male mountain gorillas than western gorillas, then the relative importance of other factors may be greater for mountain gorillas. Thus, our study illustrates how variance in male reproductive success and its components can differ between closely related species.
Gravity Wave Variances and Propagation Derived from AIRS Radiances
NASA Technical Reports Server (NTRS)
Gong, Jie; Wu, Dong L.; Eckermann, S. D.
2012-01-01
As the first gravity wave (GW) climatology study using nadir-viewing infrared sounders, 50 Atmospheric Infrared Sounder (AIRS) radiance channels are selected to estimate GW variances at pressure levels between 2 and 100 hPa. The GW variance for each scan in the cross-track direction is derived from radiance perturbations in the scan, independently of adjacent scans along the orbit. Since the scanning swaths are perpendicular to the satellite orbits, which are inclined meridionally at most latitudes, the zonal component of GW propagation can be inferred by differencing the variances derived between the westernmost and easternmost viewing angles. Consistent with previous GW studies using various satellite instruments, monthly mean AIRS variance shows large enhancements over meridionally oriented mountain ranges as well as some islands at winter hemisphere high latitudes. Enhanced wave activities are also found above tropical deep convective regions. GWs preferentially propagate westward above mountain ranges and eastward above deep convection. AIRS's 90 fields of view (FOVs), ranging from +48 deg to -48 deg off nadir, can detect large-amplitude GWs with a phase velocity propagating preferentially at steep angles (e.g., those from orographic and convective sources). The annual cycle dominates the GW variances and the preferred propagation directions for all latitudes. A weak two-year variation in the tropics is also indicated, presumably related to the quasi-biennial oscillation (QBO). AIRS geometry makes its off-nadir tracks capable of detecting GWs with vertical wavelengths substantially shorter than the thickness of the instrument weighting functions. The novel discovery that AIRS can observe shallow inertia GWs will expand the potential of satellite GW remote sensing and provide further constraints on the GW drag parameterization schemes in general circulation models (GCMs).
Entanglement entropy for free scalar fields in AdS
NASA Astrophysics Data System (ADS)
Sugishita, Sotaro
2016-09-01
We compute entanglement entropy for free massive scalar fields in anti-de Sitter (AdS) space. The entangling surface is a minimal surface whose boundary is a sphere at the boundary of AdS. The entropy can be evaluated from the thermal free energy of the fields on a topological black hole by using the replica method. In odd-dimensional AdS, exact expressions of the Rényi entropy S_n are obtained for arbitrary n. We also evaluate one-loop corrections coming from the scalar fields to the holographic entanglement entropy. Applying the results, we compute the leading difference of entanglement entropy between two holographic CFTs related by a renormalization group flow triggered by a double trace deformation. The difference is proportional to the shift of a central charge under the flow.
Asymptotically AdS spacetimes with a timelike Kasner singularity
NASA Astrophysics Data System (ADS)
Ren, Jie
2016-07-01
Exact solutions to Einstein's equations for holographic models are presented and studied. The IR geometry has a timelike cousin of the Kasner singularity, which is the less generic case of the BKL (Belinski-Khalatnikov-Lifshitz) singularity, and the UV is asymptotically AdS. This solution describes a holographic RG flow between them. The solution interpolates between the planar AdS black hole and the AdS soliton. The causality constraint is always satisfied. The entanglement entropy and Wilson loops are discussed. The boundary condition for the current-current correlation function and the Laplacian in the IR is examined. There is no infalling wave in the IR; instead, there is a normalizable solution in the IR. In a special case, a hyperscaling-violating geometry is obtained after a dimensional reduction.
New massive gravity and AdS(4) counterterms.
Jatkar, Dileep P; Sinha, Aninda
2011-04-29
We show that the recently proposed Dirac-Born-Infeld extension of new massive gravity emerges naturally as a counterterm in four-dimensional anti-de Sitter space (AdS(4)). The resulting on-shell Euclidean action is independent of the cutoff at zero temperature. We also find that the same choice of counterterm gives the usual area law for the AdS(4) Schwarzschild black hole entropy in a cutoff-independent manner. The parameter values of the resulting counterterm action correspond to a c=0 theory in the context of the duality between AdS(3) gravity and two-dimensional conformal field theory. We rewrite this theory in terms of the gauge field that is used to recast 3D gravity as a Chern-Simons theory. PMID:21635026
Detailed ultraviolet asymptotics for AdS scalar field perturbations
NASA Astrophysics Data System (ADS)
Evnin, Oleg; Jai-akson, Puttarak
2016-04-01
We present a range of methods suitable for accurate evaluation of the leading asymptotics for integrals of products of Jacobi polynomials in limits when the degrees of some or all polynomials inside the integral become large. The structures in question have recently emerged in the context of effective descriptions of small amplitude perturbations in anti-de Sitter (AdS) spacetime. The limit of high degree polynomials corresponds in this situation to effective interactions involving extreme short-wavelength modes, whose dynamics is crucial for the turbulent instabilities that determine the ultimate fate of small AdS perturbations. We explicitly apply the relevant asymptotic techniques to the case of a self-interacting probe scalar field in AdS and extract a detailed form of the leading large degree behavior, including closed form analytic expressions for the numerical coefficients appearing in the asymptotics.
Holography and AdS4 self-gravitating dyons
NASA Astrophysics Data System (ADS)
Lugo, A. R.; Moreno, E. F.; Schaposnik, F. A.
2010-11-01
We present a self-gravitating dyon solution of the Einstein-Yang-Mills-Higgs equations of motion in asymptotically AdS space. The back reaction of gauge and Higgs fields on the space-time geometry leads to the metric of an asymptotically AdS black hole. Using the gauge/gravity correspondence we analyze relevant properties of the finite temperature quantum field theory defined on the boundary. In particular we identify an order operator, characterize a phase transition of the dual theory on the border and also compute the expectation value of the finite temperature Wilson loop.
AdS box graphs, unitarity and operator product expansions
NASA Astrophysics Data System (ADS)
Hoffmann, L.; Mesref, L.; Rühl, W.
2000-11-01
We develop a method of singularity analysis for conformal graphs which, in particular, is applicable to the holographic image of AdS supergravity theory. It can be used to determine the critical exponents for any such graph in a given channel. These exponents determine the towers of conformal blocks that are exchanged in this channel. We analyze the scalar AdS box graph and show that it has the same critical exponents as the corresponding CFT box graph. Thus pairs of external fields couple to the same exchanged conformal blocks in both theories. This is looked upon as a general structural argument supporting the Maldacena hypothesis.
Phases of global AdS black holes
NASA Astrophysics Data System (ADS)
Basu, Pallab; Krishnan, Chethan; Subramanian, P. N. Bala
2016-06-01
We study the phases of gravity coupled to a charged scalar and gauge field in an asymptotically anti-de Sitter spacetime (AdS_4) in the grand canonical ensemble. For the conformally coupled scalar, an intricate phase diagram is charted out between the four relevant solutions: global AdS, boson star, Reissner-Nordstrom black hole and the hairy black hole. The nature of the phase diagram undergoes qualitative changes as the charge of the scalar is changed, which we discuss. We also discuss the new features that arise in the extremal limit.
CYP1B1: a unique gene with unique characteristics.
Faiq, Muneeb A; Dada, Rima; Sharma, Reetika; Saluja, Daman; Dada, Tanuj
2014-01-01
CYP1B1, a recently described dioxin-inducible oxidoreductase, is a member of the cytochrome P450 superfamily involved in the metabolism of estradiol, retinol, benzo[a]pyrene, tamoxifen, melatonin, sterols etc. It plays important roles in numerous physiological processes and is expressed at the mRNA level in many tissues and anatomical compartments. CYP1B1 has been implicated in scores of disorders. Analyses of recent studies suggest that CYP1B1 can serve as a universal/ideal cancer marker and a candidate gene for predictive diagnosis. There is a plethora of literature available about certain aspects of CYP1B1 that have not been interpreted, discussed and philosophized upon. The present analysis examines CYP1B1 as a peculiar gene with certain distinctive characteristics: the uniqueness of its chromosomal location, gene structure and organization; involvement in developmentally important disorders; tissue-specific not only expression but also splicing; potential as a universal cancer marker due to its involvement in key aspects of cellular metabolism; use in diagnosis and predictive diagnosis of various diseases; and the importance and function of CYP1B1 mRNA in addition to its regular translation. CYP1B1 is also very difficult to express in heterologous expression systems, thereby hindering its functional studies. Here we review and analyze these exceptional and startling characteristics of CYP1B1, with inputs from our own experience, in order to gain better insight into its molecular biology in health and disease. This may help to further understand the etiopathomechanistic aspects of CYP1B1-mediated diseases, paving the way for better research strategies and improved clinical management. PMID:25658124
NASA Astrophysics Data System (ADS)
Turco, M.; Milelli, M.
2009-09-01
skill scores of two competing forecasts. It is important to underline that the conclusions refer to the analysis of the Piemonte operational alert system, so they cannot be directly taken as universally true. But we think that some of the main lessons that can be derived from this study could be useful for the meteorological community. In detail, the main conclusions are the following: - despite the overall improvement on the global scale, and the fact that the resolution of limited area models has increased considerably in recent years, the QPF produced by the meteorological models involved in this study has not improved enough to allow its direct use; that is, the subjective HQPF continues to offer the best performance; - in the forecast process, the step where humans add the largest value with respect to mathematical models is communication: the human characterisation and communication of forecast uncertainty to end users cannot be replaced by any computer code; - eventually, although there is no novelty in this study, we would like to show that the correct application of appropriate statistical techniques permits a better definition and quantification of the errors and, most importantly, allows a correct (unbiased) communication between forecasters and decision makers.
Respiratory infections unique to Asia.
Tsang, Kenneth W; File, Thomas M
2008-11-01
Asia is a highly heterogeneous region with vastly different cultures, social constitutions and populations affected by a wide spectrum of respiratory diseases caused by tropical pathogens. Asian patients with community-acquired pneumonia differ from their Western counterparts in microbiological aetiology, in particular the prominence of Gram-negative organisms, Mycobacterium tuberculosis, Burkholderia pseudomallei and Staphylococcus aureus. In addition, the differences in socioeconomic and health-care infrastructures limit the usefulness of Western management guidelines for pneumonia in Asia. The importance of emerging infectious diseases such as severe acute respiratory syndrome and avian influenza infection remains a close concern for practising respirologists in Asia. Specific infections such as melioidosis, dengue haemorrhagic fever, scrub typhus, leptospirosis, salmonellosis, penicilliosis marneffei, malaria, amoebiasis, paragonimiasis, strongyloidiasis, gnathostomiasis, trichinellosis, schistosomiasis and echinococcosis occur commonly in Asia and manifest with a prominent respiratory component. Pulmonary eosinophilia, endemic in parts of Asia, can occur with a wide range of tropical infections. Tropical eosinophilia is believed to be a hypersensitivity reaction to degenerating microfilariae trapped in the lungs. This article attempts to address the key issues in these respiratory infections unique to Asia and highlight the important diagnostic and management issues faced by practising respirologists.
D-branes on AdS flux compactifications
NASA Astrophysics Data System (ADS)
Koerber, Paul; Martucci, Luca
2008-01-01
We study D-branes in N = 1 flux compactifications to AdS_4. We derive their supersymmetry conditions and express them in terms of background generalized calibrations. Basically because AdS has a boundary, the analysis of stability is more subtle and qualitatively different from the usual case of Minkowski compactifications. For instance, stable D-branes filling AdS_4 may wrap trivial internal cycles. Our analysis gives a geometric realization of the four-dimensional field theory approach of Freedman and collaborators. Furthermore, the one-to-one correspondence between the supersymmetry conditions of the background and the existence of generalized calibrations for D-branes is clarified and extended to any supersymmetric flux background that admits a time-like Killing vector and for which all fields are time-independent with respect to the associated time. As explicit examples, we discuss supersymmetric D-branes on IIA nearly Kähler AdS_4 flux compactifications.
Dyonic AdS black holes from magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Caldarelli, Marco M.; Dias, Óscar J. C.; Klemm, Dietmar
2009-03-01
We use the AdS/CFT correspondence to argue that large dyonic black holes in anti-de Sitter spacetime are dual to stationary solutions of the equations of relativistic magnetohydrodynamics on the conformal boundary of AdS. The dyonic Kerr-Newman-AdS4 solution corresponds to a charged diamagnetic fluid not subject to any net Lorentz force, due to orthogonal magnetic and electric fields compensating each other. The conserved charges, stress tensor and R-current of the fluid are shown to be in exact agreement with the corresponding quantities of the black hole. Furthermore, we obtain stationary solutions of the Navier-Stokes equations in four dimensions, which yield predictions for (yet to be constructed) charged rotating black strings in AdS5 carrying nonvanishing momentum along the string. Finally, we consider Scherk-Schwarz reduced AdS gravity on a circle. In this theory, large black holes and black strings are dual to lumps of deconfined plasma of the associated CFT. We analyze the effects that a magnetic field introduces in the Rayleigh-Plateau instability of a plasma tube, which is holographically dual to the Gregory-Laflamme instability of a magnetically charged black string.
AdS Branes from Partial Breaking of Superconformal Symmetries
Ivanov, E.A.
2005-10-01
It is shown how the static-gauge world-volume superfield actions of diverse superbranes on the AdS_{d+1} superbackgrounds can be systematically derived from nonlinear realizations of the appropriate AdS supersymmetries. The latter are treated as superconformal symmetries of flat Minkowski superspaces of bosonic dimension d. Examples include the N = 1 AdS_4 supermembrane, which is associated with the 1/2 partial breaking of the OSp(1|4) supersymmetry down to the N = 1, d = 3 Poincare supersymmetry, and the T-duality related L3-brane on AdS_5 and scalar 3-brane on AdS_5 x S^1, which are associated with two different patterns of 1/2 breaking of the SU(2,2|1) supersymmetry. Another (closely related) topic is the AdS/CFT equivalence transformation. It maps the world-volume actions of the codimension-one AdS_{d+1} (super)branes onto the actions of the appropriate Minkowski (super)conformal field theories in dimension d.
Worldsheet dilatation operator for the AdS superstring
NASA Astrophysics Data System (ADS)
Ramírez, Israel; Vallilo, Brenno Carlini
2016-05-01
In this work we propose a systematic way to compute the logarithmic divergences of composite operators in the pure spinor description of the AdS 5 × S 5 superstring. The computations of these divergences can be summarized in terms of a dilatation operator acting on the local operators. We check our results with some important composite operators of the formalism.
Entanglement temperature and perturbed AdS3 geometry
NASA Astrophysics Data System (ADS)
Levine, G. C.; Caravan, B.
2016-06-01
Generalizing the first law of thermodynamics, the increase in entropy density δS(x) of a conformal field theory (CFT) is proportional to the increase in energy density, δE(x), of a subsystem divided by a spatially dependent entanglement temperature, T_E(x), a fixed parameter determined by the geometry of the subsystem, crossing over to the thermodynamic temperature at high temperatures. In this paper we derive a generalization of the thermodynamic Clausius relation, showing that deformations of the CFT by marginal operators are associated with spatial temperature variations, δT_E(x), and spatial energy correlations play the role of specific heat. Using AdS/CFT duality we develop a relationship between a perturbation in the local entanglement temperature of the CFT and the perturbation of the bulk AdS metric. In two dimensions, we demonstrate a method through which direct diagonalizations of the boundary quantum theory may be used to construct geometric perturbations of AdS_3.
Variable variance Preisach model for multilayers with perpendicular magnetic anisotropy
NASA Astrophysics Data System (ADS)
Franco, A. F.; Gonzalez-Fuentes, C.; Morales, R.; Ross, C. A.; Dumas, R.; Åkerman, J.; Garcia, C.
2016-08-01
We present a variable variance Preisach model that fully accounts for the different magnetization processes of a multilayer structure with perpendicular magnetic anisotropy by adjusting the evolution of the interaction variance as the magnetization changes. We successfully compare, in a quantitative manner, the results obtained with this model to experimental hysteresis loops of several [CoFeB/Pd]_n multilayers. The effect of the number of repetitions and the thicknesses of the CoFeB and Pd layers on the magnetization reversal of the multilayer structure is studied, and it is found that many of the observed phenomena can be attributed to an increase of the magnetostatic interactions and subsequent decrease of the size of the magnetic domains. Increasing the CoFeB thickness leads to the disappearance of the perpendicular anisotropy, and a minimum thickness of the Pd layer is necessary to achieve an out-of-plane magnetization.
Fidelity between Gaussian mixed states with quantum state quadrature variances
NASA Astrophysics Data System (ADS)
Hai-Long, Zhang; Chun, Zhou; Jian-Hong, Shi; Wan-Su, Bao
2016-04-01
In this paper, from the original definition of fidelity in a pure state, we first give a well-defined expansion fidelity between two Gaussian mixed states. It is related to the variances of output and input states in quantum information processing. It is convenient to quantify the quantum teleportation (quantum clone) experiment since the variances of the input (output) state are measurable. Furthermore, we also give a conclusion that the fidelity of a pure input state is smaller than the fidelity of a mixed input state in the same quantum information processing. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002) and the Foundation of Science and Technology on Information Assurance Laboratory (Grant No. KJ-14-001).
Variance reduction methods applied to deep-penetration problems
Cramer, S.N.
1984-01-01
All deep-penetration Monte Carlo calculations require variance reduction methods. Before presenting a detailed treatment of these methods, several general comments are in order concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and their similarities to and differences from non-deep-penetration problems. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward the use of the computer codes which are presented in segments of this Monte Carlo course.
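Importance sampling is one of the standard variance reduction techniques used for rare-event problems of this kind. A minimal sketch on a generic toy problem (estimating a small Gaussian tail probability by sampling from a shifted density with likelihood-ratio weights), not a reproduction of the course's transport codes:

```python
import math
import random
import statistics

def naive_and_importance(n=100_000, threshold=3.0, seed=1):
    """Estimate p = P(X > threshold) for X ~ N(0, 1) two ways:
    naive Monte Carlo, and importance sampling from N(threshold, 1)
    with weights w(x) = phi(x) / phi(x - threshold)."""
    rng = random.Random(seed)
    # Naive: an indicator per sample; almost all samples are wasted.
    naive = [1.0 if rng.gauss(0, 1) > threshold else 0.0
             for _ in range(n)]
    # Importance sampling: shift the sampling density into the tail,
    # then correct with the likelihood ratio
    # w(x) = exp(-t*x + t^2/2) for a mean shift t.
    weighted = []
    for _ in range(n):
        x = rng.gauss(threshold, 1)
        w = math.exp(-threshold * x + threshold ** 2 / 2)
        weighted.append(w if x > threshold else 0.0)
    return (statistics.mean(naive), statistics.pvariance(naive),
            statistics.mean(weighted), statistics.pvariance(weighted))

p_naive, var_naive, p_is, var_is = naive_and_importance()
```

For threshold 3 the true tail probability is about 1.35e-3; the per-sample variance of the importance-sampled estimator is orders of magnitude below that of the naive indicator, which is the whole point of the technique.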
Climate variance influence on the non-stationary plankton dynamics.
Molinero, Juan Carlos; Reygondeau, Gabriel; Bonnet, Delphine
2013-08-01
We examined plankton responses to climate variance by using high temporal resolution data from 1988 to 2007 in the Western English Channel. Climate variability modified both the magnitude and length of the seasonal signal of sea surface temperature, as well as the timing and depth of the thermocline. These changes permeated the pelagic system yielding conspicuous modifications in the phenology of autotroph communities and zooplankton. The climate variance envelope, thus far little considered in climate-plankton studies, is closely coupled with the non-stationary dynamics of plankton, and sheds light on impending ecological shifts and plankton structural changes. Our study calls for the integration of the non-stationary relationship between climate and plankton in prognostic models on the productivity of marine ecosystems.
Analysis of variance in spectroscopic imaging data from human tissues.
Kwak, Jin Tae; Reddy, Rohith; Sinha, Saurabh; Bhargava, Rohit
2012-01-17
The analysis of cell types and disease using Fourier transform infrared (FT-IR) spectroscopic imaging is promising. The approach lacks an appreciation of the limits of performance for the technology, however, which limits both researcher efforts in improving the approach and acceptance by practitioners. One factor limiting performance is the variance in data arising from biological diversity, measurement noise, or other sources. Here we identify the sources of variation by first employing a high-throughput sampling platform of tissue microarrays (TMAs) to record a sufficiently large and diverse set of data. Next, a comprehensive set of analysis of variance (ANOVA) models is employed to analyze the data. Estimating the portions of explained variation, we quantify the primary sources of variation, find the most discriminating spectral metrics, and recognize the aspects of the technology to improve. The study provides a framework for the development of protocols for clinical translation and provides guidelines to design statistically valid studies in the spectroscopic analysis of tissue.
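The notion of "portions of explained variation" can be illustrated with a one-way ANOVA decomposition, partitioning total variation into between-group and within-group sums of squares. This is a generic sketch on invented toy numbers, not the paper's full model set:

```python
def explained_variation(groups):
    """One-way ANOVA: fraction of total variation explained by
    group membership (eta squared = SS_between / SS_total)."""
    all_values = [x for g in groups for x in g]
    grand_mean = sum(all_values) / len(all_values)
    ss_total = sum((x - grand_mean) ** 2 for x in all_values)
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2
                     for g in groups)
    return ss_between / ss_total

# Hypothetical 'spectral metric' measured on two tissue classes:
# well separated classes -> most variation is between groups.
tissue_a = [1.0, 1.1, 0.9, 1.05]
tissue_b = [2.0, 2.1, 1.9, 1.95]
eta_sq = explained_variation([tissue_a, tissue_b])
```

A spectral metric with a high between-class fraction is a good discriminator; one dominated by within-class (noise or biological) variation points at an aspect of the technology to improve.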
Female copying increases the variance in male mating success.
Wade, M J; Pruett-Jones, S G
1990-08-01
Theoretical models of sexual selection assume that females choose males independently of the actions and choice of other individual females. Variance in male mating success in promiscuous species is thus interpreted as a result of phenotypic differences among males which females perceive and to which they respond. Here we show that, if some females copy the behavior of other females in choosing mates, the variance in male mating success and therefore the opportunity for sexual selection is greatly increased. Copying behavior is most likely in non-resource-based harem and lek mating systems but may occur in polygynous, territorial systems as well. It can be shown that copying behavior by females is an adaptive alternative to random choice whenever there is a cost to mate choice. We develop a statistical means of estimating the degree of female copying in natural populations where it occurs. PMID:2377613
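The central claim, that copying inflates the variance in male mating success relative to independent choice, is easy to demonstrate in a toy simulation. The population sizes and copy probability below are arbitrary illustrative choices, not parameters from the paper:

```python
import random
import statistics

def mating_variance(n_males=10, n_females=100, copy_prob=0.0,
                    trials=200, seed=0):
    """Average variance of per-male mating counts when each female
    either chooses a male at random or, with probability copy_prob,
    copies the choice of an earlier female."""
    rng = random.Random(seed)
    variances = []
    for _ in range(trials):
        choices = []
        for _ in range(n_females):
            if choices and rng.random() < copy_prob:
                choices.append(rng.choice(choices))      # copy
            else:
                choices.append(rng.randrange(n_males))   # independent
        counts = [choices.count(m) for m in range(n_males)]
        variances.append(statistics.pvariance(counts))
    return statistics.mean(variances)

var_independent = mating_variance(copy_prob=0.0)
var_copying = mating_variance(copy_prob=0.5)
```

Copying creates a rich-get-richer feedback: early mating success attracts further matings, so the spread of success across males, and hence the opportunity for sexual selection, grows well beyond the multinomial baseline of independent choice.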
Compounding approach for univariate time series with nonstationary variances
NASA Astrophysics Data System (ADS)
Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich
2015-12-01
A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
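The compounding construction, drawing the local variance from a parameter distribution and then sampling Gaussian observations with that variance, produces heavy-tailed long-horizon statistics even though each window is Gaussian. A minimal sketch, with an exponential variance distribution chosen purely for illustration (it yields a Laplace-like compound distribution); it is not the parameter distribution fitted in the paper:

```python
import random
import statistics

def compound_sample(n, rng):
    """Observations that are locally Gaussian but whose variance is
    itself random: sigma^2 ~ Exponential(1), then x ~ N(0, sigma^2)."""
    out = []
    for _ in range(n):
        var = rng.expovariate(1.0)        # local (windowed) variance
        out.append(rng.gauss(0, var ** 0.5))
    return out

def excess_kurtosis(xs):
    """Excess kurtosis: 0 for a Gaussian, positive for heavy tails."""
    m = statistics.fmean(xs)
    s2 = statistics.pvariance(xs)
    m4 = statistics.fmean([(x - m) ** 4 for x in xs])
    return m4 / s2 ** 2 - 3.0

rng = random.Random(42)
k_compound = excess_kurtosis(compound_sample(100_000, rng))
k_gauss = excess_kurtosis([rng.gauss(0, 1) for _ in range(100_000)])
```

The compound sample shows strongly positive excess kurtosis while the fixed-variance Gaussian sample sits near zero, mirroring the short-horizon Gaussian / long-horizon heavy-tail contrast described above.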
A Variance Based Active Learning Approach for Named Entity Recognition
NASA Astrophysics Data System (ADS)
Hassanzadeh, Hamed; Keyvanpour, Mohammadreza
The cost of manually annotating corpora is a significant issue in many text-based tasks such as text mining, semantic annotation and, more generally, information extraction. Active learning is an approach that deals with the reduction of labeling costs. In this paper we propose an effective active learning approach, based on minimal variance, that reduces manual annotation cost by using a small number of manually labeled examples. In our approach we use a confidence measure based on the model's variance that achieves considerable accuracy in annotating entities. The conditional random field (CRF) is chosen as the underlying learning model due to its promising performance in many sequence labeling tasks. The experiments show that the proposed method needs considerably fewer manually labeled samples to produce a desirable result.
No evidence for anomalously low variance circles on the sky
Moss, Adam; Scott, Douglas; Zibin, James P. E-mail: dscott@phas.ubc.ca
2011-04-01
In a recent paper, Gurzadyan and Penrose claim to have found directions on the sky centred on which are circles of anomalously low variance in the cosmic microwave background (CMB). These features are presented as evidence for a particular picture of the very early Universe. We attempted to repeat the analysis of these authors, and we can indeed confirm that such variations do exist in the temperature variance for annuli around points in the data. However, we find that this variation is entirely expected in a sky which contains the usual CMB anisotropies. In other words, properly simulated Gaussian CMB data contain just the sorts of variations claimed. Gurzadyan and Penrose have not found evidence for pre-Big Bang phenomena, but have simply re-discovered that the CMB contains structure.
A surface layer variance heat budget for ENSO
NASA Astrophysics Data System (ADS)
Boucharel, Julien; Timmermann, Axel; Santoso, Agus; England, Matthew H.; Jin, Fei-Fei; Balmaseda, Magdalena A.
2015-05-01
Characteristics of the El Niño-Southern Oscillation (ENSO), such as frequency, propagation, spatial extent, and amplitude, strongly depend on the climatological background state of the tropical Pacific. Multidecadal changes in the ocean mean state are hence likely to modulate ENSO properties. To better link background state variations with low-frequency amplitude changes of ENSO, we develop a diagnostic framework that determines locally the contributions of different physical feedback terms to the ocean surface temperature variance. Our analysis shows that multidecadal changes of ENSO variance originate from the delicate balance between the background-state-dependent positive thermocline feedback and the atmospheric damping of sea surface temperature anomalies. The role of higher-order processes and atmospheric and oceanic nonlinearities is also discussed. The diagnostic tool developed here can be easily applied to other tropical ocean areas and climate phenomena.
Response variance in functional maps: neural darwinism revisited.
Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei
2013-01-01
The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.
The Misattribution of Summers in Teacher Value-Added
ERIC Educational Resources Information Center
Atteberry, Allison
2012-01-01
This paper investigates the extent to which spring-to-spring testing timelines bias teacher value-added as a result of conflating summer and school-year learning. Using a unique dataset that contains both fall and spring standardized test scores, the author examines the patterns in school-year versus summer learning. She estimates value-added…
Symbols are not uniquely human.
Ribeiro, Sidarta; Loula, Angelo; de Araújo, Ivan; Gudwin, Ricardo; Queiroz, João
2007-01-01
Modern semiotics is a branch of logic that formally defines symbol-based communication. In recent years, the semiotic classification of signs has been invoked to support the notion that symbols are uniquely human. Here we show that alarm calls such as those used by African vervet monkeys (Cercopithecus aethiops) logically satisfy the semiotic definition of symbol. We also show that the acquisition of vocal symbols in vervet monkeys can be successfully simulated by a computer program based on minimal semiotic and neurobiological constraints. The simulations indicate that learning depends on the tutor-predator ratio, and that apprentice-generated auditory mistakes in vocal symbol interpretation have little effect on the learning rates of apprentices (up to 80% of mistakes are tolerated). In contrast, just 10% of apprentice-generated visual mistakes in predator identification will prevent any vocal symbol from being correctly associated with a predator call in a stable manner. Tutor unreliability was also deleterious to vocal symbol learning: a mere 5% of "lying" tutors were able to completely disrupt symbol learning, invariably leading to the acquisition of incorrect associations by apprentices. Our investigation corroborates the existence of vocal symbols in a non-human species, and indicates that symbolic competence emerges spontaneously from classical associative learning mechanisms when the conditioned stimuli are self-generated, arbitrary, and socially efficacious. We propose that more exclusive properties of human language, such as syntax, may derive from the evolution of higher-order domains for neural association, more removed from both the sensory input and the motor output, and able to support the gradual complexification of grammatical categories into syntax.
Constraining the local variance of H0 from directional analyses
NASA Astrophysics Data System (ADS)
Bengaly, C. A. P., Jr.
2016-04-01
We evaluate the local variance of the Hubble constant H0 with low-z Type Ia Supernovae (SNe). Our analyses are performed using a hemispherical comparison method in order to test whether taking the bulk flow motion into account can reconcile the measurement of the Hubble constant H0 from standard candles (H0 = 73.8±2.4 km s-1 Mpc-1) with that of the Planck Cosmic Microwave Background data (H0 = 67.8±0.9 km s-1 Mpc-1). We find that H0 ranges from 68.9±0.5 km s-1 Mpc-1 to 71.2±0.7 km s-1 Mpc-1 across the celestial sphere (1σ uncertainty), implying a maximal Hubble constant variance of δH0 = (2.30±0.86) km s-1 Mpc-1 towards the (l,b) = (315°,27°) direction. Interestingly, this result agrees with the bulk flow direction estimates found in the literature, as well as with previous evaluations of the H0 variance due to the presence of nearby inhomogeneities. We assess the statistical significance of this result with different prescriptions of Monte Carlo simulations, obtaining moderate statistical significance, i.e., 68.7% confidence level (CL), for such variance. Furthermore, we test the hypothesis of a higher H0 value in the presence of a bulk flow velocity dipole, finding some evidence for this result which, however, cannot be claimed to be significant due to the current large uncertainty in the SNe distance modulus. We conclude that the tension between different H0 determinations can plausibly be attributed to the bulk flow motion of the local Universe, even though the current incompleteness of the SNe data set, both in terms of celestial coverage and distance uncertainties, does not allow a high statistical significance for these results or a definitive conclusion about this issue.
Analysis and application of minimum variance discrete time system identification
NASA Technical Reports Server (NTRS)
Kaufman, H.; Kotob, S.
1975-01-01
An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
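To give a feel for what an on-line parameter identifier of this kind does, here is a minimal recursive least-squares sketch on invented data. This is not the paper's filter (which additionally handles multiplicative noise and proves mean square convergence and consistency); it only illustrates the generic pattern of updating a parameter estimate and its covariance as measurements arrive.

```python
# Generic on-line recursive least-squares identifier (illustrative sketch only;
# the true parameters, noise level, and data below are all invented).
import random

random.seed(1)
true_theta = [2.0, -1.0]          # unknown parameters of y = 2*x1 - 1*x2

theta = [0.0, 0.0]                # running parameter estimate
P = [[100.0, 0.0], [0.0, 100.0]]  # estimate covariance (large = very uncertain)

for _ in range(200):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    y = sum(t * xi for t, xi in zip(true_theta, x)) + random.gauss(0, 0.05)
    # Gain: k = P x / (1 + x' P x)
    Px = [sum(P[i][j] * x[j] for j in range(2)) for i in range(2)]
    denom = 1.0 + sum(x[i] * Px[i] for i in range(2))
    k = [Px[i] / denom for i in range(2)]
    # Update estimate with the prediction error, then shrink the covariance.
    err = y - sum(t * xi for t, xi in zip(theta, x))
    theta = [theta[i] + k[i] * err for i in range(2)]
    xP = [sum(x[i] * P[i][j] for i in range(2)) for j in range(2)]
    P = [[P[i][j] - k[i] * xP[j] for j in range(2)] for i in range(2)]

print(f"estimated parameters: {theta[0]:.2f}, {theta[1]:.2f}")
```

With 200 noisy measurements the estimate settles close to the true values (2.0, -1.0), and the covariance P shrinks, reflecting growing confidence.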
End-state comfort and joint configuration variance during reaching.
Solnik, Stanislaw; Pazin, Nemanja; Coelho, Chase J; Rosenbaum, David A; Scholz, John P; Zatsiorsky, Vladimir M; Latash, Mark L
2013-03-01
This study joined two approaches to motor control. The first approach comes from cognitive psychology and is based on the idea that goal postures and movements are chosen to satisfy task-specific constraints. The second approach comes from the principle of motor abundance and is based on the idea that control of apparently redundant systems is associated with the creation of multi-element synergies stabilizing important performance variables. The first approach has been tested by relying on psychophysical ratings of comfort. The second approach has been tested by estimating variance along different directions in the space of elemental variables such as joint postures. The two approaches were joined here. Standing subjects performed series of movements in which they brought a hand-held pointer to each of four targets oriented within a frontal plane, close to or far from the body. The subjects were asked to rate the comfort of the final postures, and the variance of their joint configurations during the steady state following pointing was quantified with respect to pointer endpoint position and pointer orientation. The subjects showed consistent patterns of comfort ratings among the targets, and all movements were characterized by multi-joint synergies stabilizing both pointer endpoint position and orientation. Contrary to what was expected, less comfortable postures had higher joint configuration variance than did more comfortable postures without major changes in the synergy indices. Multi-joint synergies stabilized the pointer position and orientation similarly across a range of comfortable/uncomfortable postures. The results are interpreted in terms conducive to the two theoretical frameworks underlying this work, one focusing on comfort ratings reflecting mean postures adopted for different targets and the other focusing on indices of joint configuration variance. PMID:23288326
Analysis of Variance in the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
Deloach, Richard
2010-01-01
This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.
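As a companion to the tutorial described above, a one-way fixed-effects ANOVA can be computed from first principles. The factor levels and response values below are invented purely for illustration (e.g., a response measured at three settings of one factor):

```python
# Minimal one-way fixed-effects ANOVA from first principles (pure Python).
# Hypothetical data: a response measured at three levels of a single factor.
groups = {
    "level_a": [2.1, 2.3, 2.2, 2.4],
    "level_b": [2.8, 2.9, 3.0, 2.7],
    "level_c": [2.0, 2.1, 1.9, 2.2],
}

def one_way_anova(groups):
    all_vals = [x for g in groups.values() for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    # Between-group (treatment) sum of squares.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups.values())
    # Within-group (error) sum of squares.
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups.values() for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    # F is the ratio of between-group to within-group mean squares.
    return (ss_between / df_between) / (ss_within / df_within)

print(f"F = {one_way_anova(groups):.2f}")  # → F = 41.60
```

A large F relative to the F-distribution with (2, 9) degrees of freedom indicates that the factor-level means differ by more than within-level scatter would explain.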
Unique contributions of metacognition and cognition to depressive symptoms.
Yilmaz, Adviye Esin; Gençöz, Tülin; Wells, Adrian
2015-01-01
This study attempts to examine the unique contributions of "cognitions" or "metacognitions" to depressive symptoms while controlling for their intercorrelations and comorbid anxiety. Two-hundred-and-fifty-one university students participated in the study. Two complementary hierarchical multiple regression analyses were performed, in which symptoms of depression were regressed on the dysfunctional attitudes (DAS-24 subscales) and metacognition scales (Negative Beliefs about Rumination Scale [NBRS] and Positive Beliefs about Rumination Scale [PBRS]). Results showed that both NBRS and PBRS individually explained a significant amount of variance in depressive symptoms above and beyond dysfunctional schemata while controlling for anxiety. Although dysfunctional attitudes as a set significantly predicted depressive symptoms after anxiety and metacognitions were controlled for, they were weaker than metacognitive variables and none of the DAS-24 subscales contributed individually. Metacognitive beliefs about ruminations appeared to contribute more to depressive symptoms than dysfunctional beliefs in the "cognitive" domain.
White matter morphometric changes uniquely predict children's reading acquisition.
Myers, Chelsea A; Vandermosten, Maaike; Farris, Emily A; Hancock, Roeland; Gimenez, Paul; Black, Jessica M; Casto, Brandi; Drahos, Miroslav; Tumber, Mandeep; Hendren, Robert L; Hulme, Charles; Hoeft, Fumiko
2014-10-01
This study examined whether variations in brain development between kindergarten and Grade 3 predicted individual differences in reading ability at Grade 3. Structural MRI measurements indicated that increases in the volume of two left temporo-parietal white matter clusters are unique predictors of reading outcomes above and beyond family history, socioeconomic status, and cognitive and preliteracy measures at baseline. Using diffusion MRI, we identified the left arcuate fasciculus and superior corona radiata as key fibers within the two clusters. Bias-free regression analyses using regions of interest from prior literature revealed that volume changes in temporo-parietal white matter, together with preliteracy measures, predicted 56% of the variance in reading outcomes. Our findings demonstrate the important contribution of developmental differences in areas of left dorsal white matter, often implicated in phonological processing, as a sensitive early biomarker for later reading abilities, and by extension, reading difficulties.
Reduced Variance for Material Sources in Implicit Monte Carlo
Urbatsch, Todd J.
2012-06-25
Implicit Monte Carlo (IMC), a time-implicit method due to Fleck and Cummings, is used for simulating supernovae and inertial confinement fusion (ICF) systems where x-rays tightly and nonlinearly interact with hot material. The IMC algorithm represents absorption and emission within a timestep as an effective scatter. Similarly, the IMC time-implicitness splits off a portion of a material source directly into the radiation field. We have found that some of our variance reduction and particle management schemes will allow large variances in the presence of small, but important, material sources, as in the case of ICF hot electron preheat sources. We propose a modification of our implementation of the IMC method in the Jayenne IMC Project. Instead of battling the sampling issues associated with a small source, we bypass the IMC implicitness altogether and simply deterministically update the material state with the material source if the temperature of the spatial cell is below a user-specified cutoff. We describe the modified method and present results on a test problem that show the elimination of variance for small sources.
Estimating Predictive Variance for Statistical Gas Distribution Modelling
Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo
2009-05-23
Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but a significant step forward for the field. First, such models better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, estimating the predictive variance makes it possible to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta-parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
A candidate mechanism underlying the variance of interictal spike propagation
Sabolek, Helen R; Swiercz, Waldemar B.; Lillis, Kyle; Cash, Sydney S.; Huberfeld, Gilles; Zhao, Grace; Ste. Marie, Linda; Clemenceau, Stéphane; Barsh, Greg; Miles, Richard; Staley, Kevin J.
2012-01-01
Synchronous activation of neural networks is an important physiological mechanism, and dysregulation of synchrony forms the basis of epilepsy. We analyzed the propagation of synchronous activity through chronically epileptic neural networks. Electrocorticographic recordings from epileptic patients demonstrate remarkable variance in the pathways of propagation between sequential interictal spikes (IIS). Calcium imaging in chronically epileptic slice cultures demonstrates that pathway variance depends on the presence of GABAergic inhibition and that spike propagation becomes stereotyped following GABA-R blockade. Computer modeling suggests that GABAergic quenching of local network activations leaves behind regions of refractory neurons, whose late recruitment forms the anatomical basis of variability during subsequent network activation. Targeted path scanning of slice cultures confirmed local activations, while ex vivo recordings of human epileptic tissue confirmed the dependence of interspike variance on GABA-mediated inhibition. These data support the hypothesis that the paths by which synchronous activity spreads through an epileptic network change with each activation, based on the recent history of localized activity that has been successfully inhibited. PMID:22378874
Genetic variance of tolerance and the toxicant threshold model.
Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki
2012-04-01
A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of cladoceran species. To analyze the genetic variance of tolerance in the case when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model in quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance characterized by the individual's threshold value is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated by using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined had the potential to acquire tolerance to this substance by evolutionary change.
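The variance-component logic behind a broad-sense heritability estimate from isofemale lines can be sketched with a balanced one-way ANOVA. The tolerance values below are invented for illustration (they are not the paper's Daphnia data), and the simple (MS_between - MS_within)/n extraction shown here is the textbook balanced-design formula, not the paper's maximum likelihood or threshold-model machinery:

```python
# Illustrative broad-sense heritability from a one-way ANOVA across isofemale
# lines (hypothetical tolerance values; balanced design with n per line).
lines = [
    [4.1, 4.3, 3.9, 4.2],
    [5.0, 5.2, 4.8, 5.1],
    [4.5, 4.4, 4.6, 4.7],
]

n = len(lines[0])   # individuals per line
k = len(lines)      # number of lines
grand = sum(x for g in lines for x in g) / (n * k)
ms_between = n * sum((sum(g) / n - grand) ** 2 for g in lines) / (k - 1)
ms_within = sum((x - sum(g) / n) ** 2 for g in lines for x in g) / (k * (n - 1))

# Between-line variance component; clipped at zero if MS_between < MS_within.
var_between = max((ms_between - ms_within) / n, 0.0)
heritability = var_between / (var_between + ms_within)
print(f"broad-sense heritability H^2 = {heritability:.3f}")
```

Because the between-line component captures all genetic (and shared maternal) variance among clonal lines, this ratio is a broad-sense, not narrow-sense, heritability.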
VAPOR: variance-aware per-pixel optimal resource allocation.
Eisenberg, Yiftach; Zhai, Fan; Pappas, Thrasyvoulos N; Berry, Randall; Katsaggelos, Aggelos K
2006-02-01
Characterizing the video quality seen by an end-user is a critical component of any video transmission system. In packet-based communication systems, such as wireless channels or the Internet, packet delivery is not guaranteed. Therefore, from the point-of-view of the transmitter, the distortion at the receiver is a random variable. Traditional approaches have primarily focused on minimizing the expected value of the end-to-end distortion. This paper explores the benefits of accounting for not only the mean, but also the variance of the end-to-end distortion when allocating limited source and channel resources. By accounting for the variance of the distortion, the proposed approach increases the reliability of the system by making it more likely that what the end-user sees, closely resembles the mean end-to-end distortion calculated at the transmitter. Experimental results demonstrate that variance-aware resource allocation can help limit error propagation and is more robust to channel-mismatch than approaches whose goal is to strictly minimize the expected distortion. PMID:16479799
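A toy numerical sketch of the mean-versus-variance idea in the abstract: each transmission option yields a random end-to-end distortion, and weighting the variance can flip the decision relative to minimizing expected distortion alone. The options, loss model, and weight lambda_ below are all invented for illustration and are not the paper's VAPOR formulation:

```python
# Variance-aware choice between two hypothetical transmission options.
# Distortion is random: with probability p_loss the packet is lost (high
# distortion d_lost), otherwise distortion is d_ok.
def distortion_stats(d_ok, d_lost, p_loss):
    mean = (1 - p_loss) * d_ok + p_loss * d_lost
    var = (1 - p_loss) * (d_ok - mean) ** 2 + p_loss * (d_lost - mean) ** 2
    return mean, var

options = {
    "aggressive": distortion_stats(d_ok=1.0, d_lost=30.0, p_loss=0.10),
    "protected":  distortion_stats(d_ok=4.0, d_lost=10.0, p_loss=0.02),
}

# Pure expected-distortion rule picks "aggressive" (mean 3.9 vs 4.12) ...
by_mean = min(options, key=lambda k: options[k][0])
# ... but penalizing variance favors the more reliable "protected" option.
lambda_ = 0.05  # weight on variance; 0 recovers the expected-distortion rule
best = min(options, key=lambda k: options[k][0] + lambda_ * options[k][1])
print(by_mean, best)  # → aggressive protected
```

The point mirrors the abstract: accounting for variance trades a slightly worse average for an outcome that more reliably resembles what the transmitter predicts.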
Value-Added Methods & Their Application to Teachers and District Decisions about Teaching
ERIC Educational Resources Information Center
Villar, Anthony G.
2011-01-01
This dissertation project in applied research uses value added models to estimate learning effects in classrooms in one district under two conditions, schools piloting a reorganization and schools not yet under reorganization. The project addresses one of the current impasses in deciding how to take advantage of the variance among teachers…
ERIC Educational Resources Information Center
Young, David G.
1983-01-01
Ad-hoc committees may be symbolic, informational, or action committees. A literature survey indicates such committees' structural components include a suprasystem and three subsystems involving linkages, production, and implementation. Other variables include size, personal factors, and timing. All the factors carry implications about ad-hoc…
Putka, Dan J; Hoffman, Brian J
2013-01-01
Though considerable research has evaluated the functioning of assessment center (AC) ratings, surprisingly little research has articulated and uniquely estimated the components of reliable and unreliable variance that underlie such ratings. The current study highlights limitations of existing research for estimating components of reliable and unreliable variance in AC ratings. It provides a comprehensive empirical decomposition of variance in AC ratings that: (a) explicitly accounts for assessee-, dimension-, exercise-, and assessor-related effects, (b) does so with 3 large sets of operational data from a multiyear AC program, and (c) avoids many analytic limitations and confounds that have plagued the AC literature to date. In doing so, results show that (a) the extant AC literature has masked the contribution of sizable, substantively meaningful sources of variance in AC ratings, (b) various forms of assessor bias largely appear trivial, and (c) there is far more systematic, nuanced variance present in AC ratings than previous research indicates. Furthermore, this study also illustrates how the composition of reliable and unreliable variance heavily depends on the level to which assessor ratings are aggregated (e.g., overall AC-level, dimension-level, exercise-level) and the generalizations one desires to make based on those ratings. The implications of this study for future AC research and practice are discussed. PMID:23244226
Lorentzian AdS geometries, wormholes, and holography
Arias, Raul E.; Silva, Guillermo A.; Botta Cantcheff, Marcelo
2011-03-15
We investigate the structure of two-point functions for the quantum field theory dual to an asymptotically Lorentzian anti-de Sitter (AdS) wormhole. The bulk geometry is a solution of five-dimensional second-order Einstein-Gauss-Bonnet gravity and causally connects two asymptotically AdS spacetimes. We revisit the Gubser-Klebanov-Polyakov-Witten prescription for computing two-point correlation functions for dual quantum field theory operators O in Lorentzian signature, and we propose to express the bulk fields in terms of the independent boundary values φ0^± at each of the two asymptotic AdS regions; along the way we exhibit how the ambiguity of normalizable modes in the bulk, related to initial and final states, shows up in the computations. The independent boundary values are interpreted as sources for dual operators O^±, and we argue that, apart from the possibility of entanglement, there exists a coupling between the degrees of freedom living at each boundary. The AdS_{1+1} geometry is also discussed in view of its similar boundary structure. Based on the analysis, we propose a very simple geometric criterion to distinguish coupling from entanglement effects among two sets of degrees of freedom associated with each of the disconnected parts of the boundary.
One-loop diagrams in AdS space
Hung Lingyan; Shang Yanwen
2011-01-15
We study the complex scalar loop corrections to the boundary-boundary gauge two-point function in pure AdS space in Poincaré coordinates, in the presence of boundary quadratic perturbations to the scalar. These perturbations correspond to double-trace perturbations in the dual CFT and modify the boundary conditions of the bulk scalars in AdS. We find that, in addition to the usual UV divergences, the one-loop calculation suffers from a divergence originating in the limit as the loop vertices approach the AdS horizon. We show that this type of divergence is independent of the boundary coupling; making use of this, we extract the finite relative variation of the imaginary part of the loop via Cutkosky rules as the boundary perturbation varies. Applying our methods to compute the effects of a time-dependent impurity on the conductivities using the replica trick in AdS/CFT, we find that generally an IR-relevant disorder reduces the conductivity and that in the extreme low frequency limit the correction due to the impurities overwhelms the planar CFT result even though it is supposedly 1/N^2 suppressed. We also comment on the more physical scenario of a time-independent impurity.
ADS on WWW: Doubling Yearly for Five Years
NASA Astrophysics Data System (ADS)
Kurtz, M. J.; Eichhorn, G.; Accomazzi, A.; Grant, C. S.; Murray, S. S.
1998-12-01
It is now five years since the NASA ADS Abstract Service became available on the World Wide Web, in late winter of 1994. Following the explosive growth of the service (when compared with the old proprietary network access system) in the early months of WWW service, ADS growth has settled to doubling yearly. Currently ADS users make 440,000 queries per month, and receive 8,000,000 bibliographic references and 70,000 full-text articles, as well as abstracts, citation histories, links to data, and links to other data centers. Of the 70,000 full-text articles accessed through ADS each month, already 30% are via pointers to the electronic journals. This number is certain to increase. It is difficult to determine the exact number of ADS users. We track usage by the number of unique "cookies" which access ADS, and by the number of unique IP addresses. There are difficulties with each technique. In addition, many non-astronomers find ADS through portal sites like Yahoo, which skews the statistics. 10,000 unique cookies access the full-text articles each month, 17,000 make queries, and 30,000 visit the site. 91% of full-text users have cookies, but only 65% of site visitors. From another perspective, the number of IP addresses from a single typical research site (STScI) which access the full-text data is within 5% of the number of unique cookies associated with full-text use from stsci.edu, and also within 5% of the number of AAS members listing an STScI address. The number of unique IP addresses from STScI which make any sort of query to ADS is 40% higher than this. Those who access the full-text average one article per day; those who make queries average two per day. We believe nearly all active astronomy researchers, as well as students and affiliated professionals, use ADS on a regular basis.
Estimation of Noise-Free Variance to Measure Heterogeneity
Winkler, Tilo; Melo, Marcos F. Vidal; Degani-Costa, Luiza H.; Harris, R. Scott; Correia, John A.; Musch, Guido; Venegas, Jose G.
2015-01-01
Variance is a statistical parameter used to characterize heterogeneity or variability in data sets. However, measurements commonly include noise, as random errors superimposed on the actual value, which may substantially increase the variance compared to a noise-free data set. Our aim was to develop and validate a method to estimate noise-free spatial heterogeneity of pulmonary perfusion using dynamic positron emission tomography (PET) scans. On theoretical grounds, we demonstrate a linear relationship between the total variance of a data set derived from averages of n multiple measurements, and the reciprocal of n. Using multiple measurements with varying n yields estimates of the linear relationship, including the noise-free variance as the constant parameter. In PET images, n is proportional to the number of registered decay events, and the variance of the image is typically normalized by the square of its mean value, yielding a coefficient of variation squared (CV²). The method was evaluated with a Jaszczak phantom as reference spatial heterogeneity (CVr²) for comparison with our estimate of noise-free or 'true' heterogeneity (CVt²). We found that CVt² was only 5.4% higher than CVr². Additional evaluations were conducted on 38 PET scans of pulmonary perfusion using 13NN-saline injection. The mean CVt² was 0.10 (range: 0.03–0.30), while the mean CV² including noise was 0.24 (range: 0.10–0.59). CVt² was on average 41.5% of the CV² measured including noise (range: 17.8–71.2%). The reproducibility of CVt² was evaluated using three repeated PET scans from five subjects. Individual CVt² were within 16% of each subject's mean, and paired t-tests revealed no difference among the results from the three consecutive PET scans. In conclusion, our method provides reliable noise-free estimates of CVt² in PET scans and may be useful for similar statistical problems in experimental data. PMID:25906374
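The core of the method above is that the measured variance of an n-average behaves as var_total(n) = var_true + var_noise/n, so a straight-line fit of variance against 1/n has the noise-free variance as its intercept. A sketch on synthetic data (all numbers invented; this is not the paper's PET pipeline):

```python
# Estimate noise-free variance as the intercept of variance vs 1/n.
import random

random.seed(0)
# "True" heterogeneity: 500 values drawn from N(10, 2^2), so the underlying
# noise-free variance is about 4.
true_values = [random.gauss(10.0, 2.0) for _ in range(500)]

def measured_variance(n_avg, noise_sd=3.0):
    # Each observation is the average of n_avg noisy readings of a true value.
    samples = [v + sum(random.gauss(0.0, noise_sd) for _ in range(n_avg)) / n_avg
               for v in true_values]
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)

ns = [1, 2, 4, 8, 16]
xs = [1.0 / n for n in ns]
ys = [measured_variance(n) for n in ns]

# Ordinary least-squares line fit (closed form): intercept = noise-free variance.
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

print(f"noise-free variance estimate: {intercept:.2f} (generating value 4.0)")
```

The intercept recovers roughly the generating variance of 4, while the raw n = 1 variance is inflated by the full noise variance of 9.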
Kolmogorov-Zakharov spectrum in AdS gravitational collapse.
de Oliveira, H P; Pando Zayas, Leopoldo A; Rodrigues, E L
2013-08-01
We study black hole formation during the gravitational collapse of a massless scalar field in asymptotically D-dimensional anti-de Sitter AdS(D) spacetimes for D = 4, 5. We conclude that spherically symmetric gravitational collapse in asymptotically AdS spaces is turbulent and characterized by a Kolmogorov-Zakharov spectrum. Namely, we find that after an initial period of weakly nonlinear evolution, there is a regime where the power spectrum of the Ricci scalar evolves as ω^(-s), where ω is the frequency and s ≈ 1.7 ± 0.1.
Semiclassical Virasoro blocks from AdS3 gravity
NASA Astrophysics Data System (ADS)
Hijano, Eliot; Kraus, Per; Perlmutter, Eric; Snively, River
2015-12-01
We present a unified framework for the holographic computation of Virasoro conformal blocks at large central charge. In particular, we provide bulk constructions that correctly reproduce all semiclassical Virasoro blocks that are known explicitly from conformal field theory computations. The results revolve around the use of geodesic Witten diagrams, recently introduced in [1], evaluated in locally AdS3 geometries generated by backreaction of heavy operators. We also provide an alternative computation of the heavy-light semiclassical block — in which two external operators become parametrically heavy — as a certain scattering process involving higher spin gauge fields in AdS3; this approach highlights the chiral nature of Virasoro blocks. These techniques may be systematically extended to compute corrections to these blocks and to interpolate amongst the different semiclassical regimes.
Nurse Value-Added and Patient Outcomes in Acute Care
Yakusheva, Olga; Lindrooth, Richard; Weiss, Marianne
2014-01-01
Objective The aims of the study were to (1) estimate the relative nurse effectiveness, or individual nurse value-added (NVA), to patients’ clinical condition change during hospitalization; (2) examine nurse characteristics contributing to NVA; and (3) estimate the contribution of value-added nursing care to patient outcomes. Data Sources/Study Setting Electronic data on 1,203 staff nurses matched with 7,318 adult medical–surgical patients discharged between July 1, 2011 and December 31, 2011 from an urban Magnet-designated, 854-bed teaching hospital. Study Design Retrospective observational longitudinal analysis using a covariate-adjustment value-added model with nurse fixed effects. Data Collection/Extraction Methods Data were extracted from the study hospital's electronic patient records and human resources databases. Principal Findings Nurse effects were jointly significant and explained 7.9 percent of variance in patient clinical condition change during hospitalization. NVA was positively associated with having a baccalaureate degree or higher (0.55, p = .04) and expertise level (0.66, p = .03). NVA contributed to patient outcomes of shorter length of stay and lower costs. Conclusions Nurses differ in their value-added to patient outcomes. The ability to measure individual nurse relative value-added opens the possibility for development of performance metrics, performance-based rankings, and merit-based salary schemes to improve patient outcomes and reduce costs. PMID:25256089
Conserved higher-spin charges in AdS4
NASA Astrophysics Data System (ADS)
Gelfond, O. A.; Vasiliev, M. A.
2016-03-01
Gauge invariant conserved conformal currents built from massless fields of all spins in 4d Minkowski space-time and AdS4 are described in the unfolded dynamics approach. The current cohomology associated with non-zero conserved charges is found. The resulting list of charges is shown to match the space of parameters of the conformal higher-spin symmetry algebra in four dimensions.
On information loss in AdS3/CFT2
Fitzpatrick, A. Liam; Kaplan, Jared; Li, Daliang; Wang, Junpu
2016-05-18
We discuss information loss from black hole physics in AdS3, focusing on two sharp signatures infecting CFT2 correlators at large central charge c: 'forbidden singularities' arising from Euclidean-time periodicity due to the effective Hawking temperature, and late-time exponential decay in the Lorentzian region. We study an infinite class of examples where forbidden singularities can be resolved by non-perturbative effects at finite c, and we show that the resolution has certain universal features that also apply in the general case. Analytically continuing to the Lorentzian regime, we find that the non-perturbative effects that resolve forbidden singularities qualitatively change the behavior of correlators at times t ~ S_BH, the black hole entropy. This may resolve the exponential decay of correlators at late times in black hole backgrounds. By Borel resumming the 1/c expansion of exact examples, we explicitly identify 'information-restoring' effects from heavy states that should correspond to classical solutions in AdS3. Lastly, our results suggest a line of inquiry towards a more precise formulation of the gravitational path integral in AdS3.
Sanz, E.; Voss, C.I.
2006-01-01
Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only
ERIC Educational Resources Information Center
Liu, Duo; Chen, Xi; Chung, Kevin K. H.
2015-01-01
This study examined the relation between the performance in a visual search task and reading ability in 92 third-grade Hong Kong Chinese children. The visual search task, which is considered a measure of visual-spatial attention, accounted for unique variance in Chinese character reading after controlling for age, nonverbal intelligence,…
Monte Carlo calculation of specific absorbed fractions: variance reduction techniques
NASA Astrophysics Data System (ADS)
Díaz-Londoño, G.; García-Pareja, S.; Salvat, F.; Lallena, A. M.
2015-04-01
The purpose of the present work is to calculate specific absorbed fractions using variance reduction techniques and assess the effectiveness of these techniques in improving the efficiency (i.e. reducing the statistical uncertainties) of simulation results in cases where the distance between the source and the target organs is large and/or the target organ is small. The variance reduction techniques of interaction forcing and an ant colony algorithm, which drives the application of splitting and Russian roulette, were applied in Monte Carlo calculations performed with the code penelope for photons with energies from 30 keV to 2 MeV. In the simulations we used a mathematical phantom derived from the well-known MIRD-type adult phantom. The thyroid gland was assumed to be the source organ, and urinary bladder, testicles, uterus and ovaries were considered as target organs. Simulations were performed, for each target organ and for photons with different energies, using these variance reduction techniques, each run on the same processor for a CPU time of 1.5 · 10⁵ s. For energies above 100 keV both interaction forcing and the ant colony method allowed reaching relative uncertainties of the average absorbed dose in the target organs below 4% in all studied cases. When these two techniques were used together, the uncertainty was further reduced by a factor of 0.5 or less. For photons with energies below 100 keV, an adapted initialization of the ant colony algorithm was required. By using interaction forcing and the ant colony algorithm, realistic values of the specific absorbed fractions can be obtained with relative uncertainties small enough to permit discriminating among simulations performed with different Monte Carlo codes and phantoms. The methodology described in the present work can be employed to calculate specific absorbed fractions for arbitrary arrangements, i.e. energy spectrum of primary radiation, phantom model and source and target organs.
An Empirical Temperature Variance Source Model in Heated Jets
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Bridges, James
2012-01-01
An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.2.
FMRI group analysis combining effect estimates and their variances
Chen, Gang; Saad, Ziad S.; Nath, Audrey R.; Beauchamp, Michael S.; Cox, Robert W.
2012-01-01
Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an approach that does not make such strong assumptions, and present a computationally efficient frequentist approach to FMRI group analysis, which we term mixed-effects multilevel analysis (MEMA), that incorporates both the variability across subjects and the precision estimate of each effect of interest from individual subject analyses. On average, the more accurate tests result in higher statistical power, especially when conventional variance assumptions do not hold, or in the presence of outliers. In addition, various heterogeneity measures are available with MEMA that may assist the investigator in further improving the modeling. Our method allows group effect t-tests and comparisons among conditions and among groups. In addition, it has the capability to incorporate subject-specific covariates such as age, IQ, or behavioral data. Simulations were performed to illustrate power comparisons and the capability of controlling type I errors among various significance testing methods, and the results indicated that the testing statistic we adopted struck a good balance between power gain and type I error control. Our approach is instantiated in an open-source, freely distributed program that may be used on any dataset stored in the NIfTI (Neuroimaging Informatics Technology Initiative) format. To date, the main impediment for more accurate testing that incorporates both within- and cross-subject variability has been the high computational cost. Our efficient implementation makes this approach practical.
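The core of combining effect estimates with their variances is inverse-variance weighting across subjects; the sketch below is a generic weighted group estimate, not the MEMA implementation, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Per-subject effect estimates and their within-subject variances, as would
# come out of individual-level analyses (synthetic numbers, one voxel).
true_effect = 0.5
n_subj = 20
within_var = rng.uniform(0.05, 0.5, size=n_subj)   # per-subject precision differs
between_var = 0.1                                  # cross-subject variability
effects = true_effect + rng.normal(0.0, np.sqrt(within_var + between_var))

# Inverse-variance weighting: subjects measured more precisely count more,
# instead of condensing each subject to an equally weighted single number.
w = 1.0 / (within_var + between_var)
group_est = np.sum(w * effects) / np.sum(w)
group_se = np.sqrt(1.0 / np.sum(w))
t_stat = group_est / group_se
```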
Regression between earthquake magnitudes having errors with known variances
NASA Astrophysics Data System (ADS)
Pujol, Jose
2016-07-01
Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = ax + b. This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for the estimates of x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a that were discussed in the literature but not proved, or were proved only for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65 % of them. For the remaining 35 %, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
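The homoscedastic errors-in-both-variables setting described here is the classical Deming regression, whose slope has a closed form in the known variance ratio delta = sy²/sx². A sketch on synthetic magnitudes (our notation and data, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "magnitudes": true values on a line y = a*x + b, observed with
# homoscedastic errors of known variance in both coordinates.
a_true, b_true = 1.2, -0.3
x = rng.uniform(4.0, 7.0, size=200)
y = a_true * x + b_true
sx2, sy2 = 0.04, 0.09                       # known error variances
X = x + rng.normal(0.0, np.sqrt(sx2), x.size)
Y = y + rng.normal(0.0, np.sqrt(sy2), y.size)

# Deming regression: only the ratio delta = sy2/sx2 enters the slope.
delta = sy2 / sx2
sxx = np.var(X, ddof=1)
syy = np.var(Y, ddof=1)
sxy = np.cov(X, Y, ddof=1)[0, 1]
a_hat = ((syy - delta * sxx)
         + np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)
b_hat = Y.mean() - a_hat * X.mean()
```

Note that ordinary least squares of Y on X would systematically underestimate the slope here because it ignores the error in X.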
The Column Density Variance-M_s Relationship
Burkhart, Blakesley; Lazarian, A.
2012-08-10
Although there is a wealth of column density tracers for both the molecular and diffuse interstellar medium, there are few observational studies investigating the relationship between the density variance (σ²) and the sonic Mach number (M_s). This is in part due to the fact that the σ²-M_s relationship is derived, via MHD simulations, for the three-dimensional (3D) density variance only, which is not a direct observable. We investigate the utility of a 2D column density σ²_Σ/Σ0-M_s relationship using solenoidally driven isothermal MHD simulations and find that the best fit follows closely the form of the 3D density σ²_ρ/ρ0-M_s trend but includes a scaling parameter A such that σ²_ln(Σ/Σ0) = A × ln(1 + b²M_s²), where A = 0.11 and b = 1/3. This relation is consistent with the observational data reported for the Taurus and IC 5146 molecular clouds with b = 0.5 and A = 0.16, and b = 0.5 and A = 0.12, respectively. These results open up the possibility of using the 2D column density values of σ² for investigations of the relation between the sonic Mach number and the probability distribution function (PDF) variance, in addition to existing PDF-sonic Mach number relations.
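The fitted relation quoted above is simple to evaluate directly; a minimal sketch (the function name is ours, the constants A = 0.11 and b = 1/3 are the simulation fit from the abstract):

```python
import numpy as np

# Column-density log-variance vs sonic Mach number:
# sigma^2_ln(Sigma/Sigma0) = A * ln(1 + b^2 * Ms^2), with A = 0.11, b = 1/3.
def sigma2_ln_column(ms, A=0.11, b=1.0 / 3.0):
    ms = np.asarray(ms, dtype=float)
    return A * np.log1p(b ** 2 * ms ** 2)

vals = sigma2_ln_column([1.0, 5.0, 10.0])
```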
Fringe biasing: A variance reduction technique for optically thick meshes
Smedley-Stevenson, R. P.
2013-07-01
Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium.
Multi-observable Uncertainty Relations in Product Form of Variances
NASA Astrophysics Data System (ADS)
Qin, Hui-Hui; Fei, Shao-Ming; Li-Jost, Xianqing
2016-08-01
We investigate product-form uncertainty relations of variances for n (n ≥ 3) quantum observables. In particular, tight uncertainty relations satisfied by three observables are derived, which are shown to be better than the ones derived from the strengthened Heisenberg and the generalized Schrödinger uncertainty relations, and than some existing uncertainty relations for three spin-half operators. An uncertainty relation for an arbitrary number of observables is also derived. As an example, the uncertainty relation satisfied by the eight Gell-Mann matrices is presented.
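For context, the baseline that such product-form relations sharpen is the two-observable Robertson bound Var(A)·Var(B) ≥ |⟨[A,B]⟩/2|². A quick numerical check for two spin-half observables (illustrative only, not the paper's n-observable relations):

```python
import numpy as np

rng = np.random.default_rng(3)

# Pauli observables for a spin-half system.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# Random pure state.
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

def expval(op):
    return np.real(psi.conj() @ op @ psi)

def variance(op):
    return expval(op @ op) - expval(op) ** 2

# Robertson bound: Var(A) * Var(B) >= |<[A, B]>/2|^2.
lhs = variance(sx) * variance(sy)
comm = sx @ sy - sy @ sx
rhs = abs(psi.conj() @ comm @ psi) ** 2 / 4.0
```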
AVATAR -- Automatic variance reduction in Monte Carlo calculations
Van Riper, K.A.; Urbatsch, T.J.; Soran, P.D.
1997-05-01
AVATAR™ (Automatic Variance And Time of Analysis Reduction), accessed through the graphical user interface application Justine™, is a superset of MCNP™ that automatically invokes THREEDANT™ for a three-dimensional deterministic adjoint calculation on a mesh independent of the Monte Carlo geometry, calculates weight windows, and runs MCNP. Computational efficiency increases by a factor of 2 to 5 for a three-detector oil well logging tool model. Human efficiency increases dramatically, since AVATAR eliminates the need for deep intuition and hours of tedious handwork.
Critical points of multidimensional random Fourier series: Variance estimates
NASA Astrophysics Data System (ADS)
Nicolaescu, Liviu I.
2016-08-01
We investigate the number of critical points of a Gaussian random smooth function u_ε on the m-torus T^m := ℝ^m/ℤ^m approximating the Gaussian white noise as ε → 0. Let N(u_ε) denote the number of critical points of u_ε. We prove the existence of constants C and C' such that, as ε goes to zero, the expectation of the random variable ε^m N(u_ε) converges to C, while its variance is extremely small and behaves like C'ε^m.
Simulation Study Using a New Type of Sample Variance
NASA Technical Reports Server (NTRS)
Howe, D. A.; Lainson, K. J.
1996-01-01
We evaluate with simulated data a new type of sample variance for the characterization of frequency stability. The new statistic (referred to as TOTALVAR and its square root TOTALDEV) is a better predictor of long-term frequency variations than the present sample Allan deviation. The statistical model uses the assumption that a time series of phase or frequency differences is wrapped (periodic) with overall frequency difference removed. We find that the variability at long averaging times is reduced considerably for the five models of power-law noise commonly encountered with frequency standards and oscillators.
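For comparison, the conventional overlapping Allan variance that TOTALVAR is meant to improve on can be computed directly from phase data. The sketch below is the standard estimator only; TOTALVAR's wrapping of the series is mentioned in the docstring but not implemented:

```python
import numpy as np

def overlapping_allan_var(phase, tau0, m):
    """Overlapping Allan variance of phase data at averaging time m*tau0.

    Standard estimator; TOTALVAR differs by extending the phase series
    periodically (wrapping, with the overall frequency difference removed)
    before averaging, which stabilises long-tau estimates.
    """
    x = np.asarray(phase, dtype=float)
    n = x.size
    d = x[2 * m:] - 2.0 * x[m:n - m] + x[:n - 2 * m]
    return np.mean(d ** 2) / (2.0 * (m * tau0) ** 2)

# White frequency noise: phase is a random walk, so sigma_y^2(tau) ~ 1/tau.
rng = np.random.default_rng(4)
phase = np.cumsum(rng.normal(0.0, 1e-9, size=100_000))
avar1 = overlapping_allan_var(phase, 1.0, 1)
avar10 = overlapping_allan_var(phase, 1.0, 10)
```

For this noise type the ratio avar1/avar10 should be close to 10, reflecting the expected 1/tau law.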
Analysis of variance of thematic mapping experiment data.
Rosenfield, G.H.
1981-01-01
As an example of the methodology, data from an experiment using three scales of land-use and land-cover mapping have been analyzed. The binomial proportions of correct interpretations have been analyzed untransformed and transformed by both the arcsine and the logit transformations. A weighted analysis of variance adjustment has been used. There is evidence of a significant difference among the three scales of mapping (1:24 000, 1:100 000 and 1:250 000) using the transformed data. Multiple range tests showed that all three scales are different for the arcsine transformed data.
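The arcsine transform used here stabilises the variance of binomial proportions, which is what justifies a weighted analysis of variance across classes with different sample sizes; a minimal sketch with invented counts:

```python
import math

# Variance-stabilising arcsine transform: for t = arcsin(sqrt(p)),
# Var(t) ~ 1/(4n) independent of p, so categories observed with different
# sample sizes can enter a weighted analysis of variance.
def arcsine_transform(correct, total):
    p = correct / total
    return math.asin(math.sqrt(p)), 1.0 / (4.0 * total)

t1, v1 = arcsine_transform(45, 60)    # invented interpretation counts
t2, v2 = arcsine_transform(80, 120)
w1, w2 = 1.0 / v1, 1.0 / v2           # inverse-variance weights for the ANOVA
```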
Two-dimensional finite-element temperature variance analysis
NASA Technical Reports Server (NTRS)
Heuser, J. S.
1972-01-01
The finite element method is extended to thermal analysis by forming a variance analysis of temperature results so that the sensitivity of predicted temperatures to uncertainties in input variables is determined. The temperature fields within a finite number of elements are described in terms of the temperatures of vertices and the variational principle is used to minimize the integral equation describing thermal potential energy. A computer calculation yields the desired solution matrix of predicted temperatures and provides information about initial thermal parameters and their associated errors. Sample calculations show that all predicted temperatures are most affected by temperature values along fixed boundaries; more accurate specifications of these temperatures reduce errors in thermal calculations.
Large-scale magnetic variances near the South Solar Pole
NASA Technical Reports Server (NTRS)
Jokipii, J. R.; Kota, J.; Smith, E.; Horbury, T.; Giacalone, J.
1995-01-01
We summarize recent Ulysses observations of the variances over large temporal scales in the interplanetary magnetic field components and their increase as Ulysses approached the South Solar Pole. A model of these fluctuations is shown to provide a very good fit to the observed amplitude and temporal variation of the fluctuations. In addition, the model predicts that the transport of cosmic rays in the heliosphere will be significantly altered by this level of fluctuations. Besides altering the inward diffusion and drift access of cosmic rays over the solar poles, we find that the magnetic fluctuations also imply a large latitudinal diffusion, caused primarily by the associated field-line random walk.
Variance reduction in Monte Carlo analysis of rarefied gas diffusion.
NASA Technical Reports Server (NTRS)
Perlmutter, M.
1972-01-01
The problem of rarefied diffusion between parallel walls is solved using the Monte Carlo method. The diffusing molecules are evaporated or emitted from one of the two parallel walls and diffuse through another molecular species. The Monte Carlo analysis treats the diffusing molecule as undergoing a Markov random walk, and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs, the expected Markov walk payoff is retained but its variance is reduced so that the Monte Carlo result has a much smaller error.
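The biasing-plus-payoff-adjustment idea described here is importance sampling in modern terms: change the sampling distribution, rescale the payoff by the likelihood ratio, and the expectation is preserved while the variance drops. A toy sketch on a Gaussian tail probability (not the paper's rarefied-gas random walk):

```python
import numpy as np

rng = np.random.default_rng(5)

# Estimate p = P(X > 4) for X ~ N(0,1). True value is about 3.17e-5.
n = 200_000

# Naive Monte Carlo: almost no samples land in the tail.
x = rng.normal(0.0, 1.0, size=n)
naive = (x > 4.0).astype(float)

# Biased sampling from N(4,1); the payoff carries the likelihood ratio
# f(y)/g(y) = exp(-y^2/2 + (y-4)^2/2), so the expectation is unchanged
# but the estimator's variance is orders of magnitude smaller.
y = rng.normal(4.0, 1.0, size=n)
weights = np.exp(-0.5 * y ** 2 + 0.5 * (y - 4.0) ** 2)
biased = (y > 4.0) * weights
```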
Variance reduction in Monte Carlo analysis of rarefied gas diffusion
NASA Technical Reports Server (NTRS)
Perlmutter, M.
1972-01-01
The present analysis uses the Monte Carlo method to solve the problem of rarefied diffusion between parallel walls. The diffusing molecules are evaporated or emitted from one of two parallel walls and diffused through another molecular species. The analysis treats the diffusing molecule as undergoing a Markov random walk and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs the expected Markov walk payoff is retained but its variance is reduced so that the Monte Carlo result has a much smaller error.
Brodsky, Stanley J.; de Teramond, Guy F.; /Costa Rica U. /SLAC
2007-02-21
The AdS/CFT correspondence between string theory in AdS space and conformal field theories in physical spacetime leads to an analytic, semi-classical model for strongly-coupled QCD which has scale invariance and dimensional counting at short distances and color confinement at large distances. Although QCD is not conformally invariant, one can nevertheless use the mathematical representation of the conformal group in five-dimensional anti-de Sitter space to construct a first approximation to the theory. The AdS/CFT correspondence also provides insights into the inherently non-perturbative aspects of QCD, such as the orbital and radial spectra of hadrons and the form of hadronic wavefunctions. In particular, we show that there is an exact correspondence between the fifth-dimensional coordinate of AdS space z and a specific impact variable ζ which measures the separation of the quark and gluonic constituents within the hadron in ordinary space-time. This connection allows one to compute the analytic form of the frame-independent light-front wavefunctions, the fundamental entities which encode hadron properties and allow the computation of decay constants, form factors, and other exclusive scattering amplitudes. New relativistic light-front equations in ordinary space-time are found which reproduce the results obtained using the 5-dimensional theory. The effective light-front equations possess remarkable algebraic structures and integrability properties. Since they are complete and orthonormal, the AdS/CFT model wavefunctions can also be used as a basis for the diagonalization of the full light-front QCD Hamiltonian, thus systematically improving the AdS/CFT approximation.
Uniqueness of the equation for quantum state vector collapse.
Bassi, Angelo; Dürr, Detlef; Hinrichs, Günter
2013-11-22
The linearity of quantum mechanics leads, under the assumption that the wave function offers a complete description of reality, to grotesque situations famously known as Schrödinger's cat. Ways out are either adding elements of reality or replacing the linear evolution by a nonlinear one. Models of spontaneous wave function collapse took the latter path. The way such models are constructed leaves open the question of whether they are in some sense unique, i.e., whether the nonlinear equations replacing Schrödinger's equation are uniquely determined as collapse equations. Various authors have worked on identifying the class of nonlinear modifications of the Schrödinger equation compatible with general physical requirements. Here we identify the most general class of continuous wave function evolutions under the assumption of no-faster-than-light signaling.
Recognition by variance: learning rules for spatiotemporal patterns.
Barak, Omri; Tsodyks, Misha
2006-10-01
Recognizing specific spatiotemporal patterns of activity, which take place at timescales much larger than the synaptic transmission and membrane time constants, is a demand placed on the nervous system, exemplified, for instance, by auditory processing. We consider the total synaptic input that a single readout neuron receives on presentation of spatiotemporal spiking input patterns. Relying on the monotonic relation between the mean and the variance of a neuron's input current and its spiking output, we derive learning rules that increase the variance of the input current evoked by learned patterns relative to that obtained from random background patterns. We demonstrate that the model can successfully recognize a large number of patterns and exhibits a slow deterioration in performance with increasing number of learned patterns. In addition, robustness to time warping of the input patterns is revealed to be an emergent property of the model. Using a leaky integrate-and-fire realization of the readout neuron, we demonstrate that the above results also apply when considering spiking output. PMID:16907629
Stochastic Mixing Model with Power Law Decay of Variance
NASA Technical Reports Server (NTRS)
Fedotov, S.; Ihme, M.; Pitsch, H.
2003-01-01
Here we present a simple stochastic mixing model based on the law of large numbers (LLN). The reason why the LLN is involved in our formulation of the mixing problem is that the random conserved scalar c = c(t,x(t)) appears to behave as a sample mean. It converges to the mean value μ, while the variance σ_c²(t) decays approximately as t^(-1). Since the variance of the scalar typically decays faster than that of a sample mean (with a decay exponent greater than unity), we introduce some non-linear modifications into the corresponding pdf-equation. The main idea is to develop a robust model which is independent of restrictive assumptions about the shape of the pdf. The remainder of this paper is organized as follows. In Section 2 we derive the integral equation from a stochastic difference equation describing the evolution of the pdf of a passive scalar in time. The stochastic difference equation introduces an exchange rate γ_n, which we model in a first step as a deterministic function. In a second step, we generalize γ_n as a stochastic variable taking fluctuations in the inhomogeneous environment into account. In Section 3 we solve the non-linear integral equation numerically and analyze the influence of the different parameters on the decay rate. The paper finishes with a conclusion.
Minimum variance brain source localization for short data sequences.
Ravan, Maryam; Reilly, James P; Hasey, Gary
2014-02-01
In the electroencephalogram (EEG) or magnetoencephalogram (MEG) context, brain source localization methods that rely on estimating second-order statistics often fail when the number of samples of the recorded data sequences is small in comparison to the number of electrodes. This condition is particularly relevant when measuring evoked potentials. Due to the correlated background EEG/MEG signal, an adaptive approach to localization is desirable. Previous work has addressed these issues by reducing the adaptive degrees of freedom (DoFs). This reduction results in decreased resolution and accuracy of the estimated source configuration. This paper develops and tests a new multistage adaptive processing technique based on the minimum variance beamformer for brain source localization that has been previously used in the radar statistical signal processing context. This processing, referred to as the fast fully adaptive (FFA) approach, can significantly reduce the required sample support, while still preserving all available DoFs. To demonstrate the performance of the FFA approach in the limited data scenario, simulation and experimental results are compared with two previous beamforming approaches; i.e., the fully adaptive minimum variance beamforming method and the beamspace beamforming method. Both simulation and experimental results demonstrate that the FFA method can localize all types of brain activity more accurately than the other approaches with limited data.
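The minimum variance (Capon/MVDR) beamformer underlying the FFA approach solves min w^H R w subject to w^H a = 1, giving w = R^(-1) a / (a^H R^(-1) a). A minimal sketch of that textbook formula follows; the toy covariance matrix and steering vector are invented for illustration and are not EEG/MEG data.

```python
import numpy as np

def mvdr_weights(R, a):
    """Minimum-variance distortionless-response weights.

    R : (M, M) sensor covariance matrix
    a : (M,) steering vector toward the candidate source location
    Returns w minimizing w^H R w subject to w^H a = 1.
    """
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

# Toy example: 4 sensors, white noise plus a strong off-target interferer.
rng = np.random.default_rng(0)
M = 4
a = np.ones(M) / np.sqrt(M)                 # look direction, unit norm
interferer = rng.standard_normal(M)
R = np.eye(M) + 5 * np.outer(interferer, interferer)
w = mvdr_weights(R, a)
print(round(float(np.abs(w.conj() @ a)), 6))  # distortionless constraint: 1.0
```

The output power w^H R w is never larger than that of the conventional beamformer using a itself, which is the sense in which the beamformer is "minimum variance".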
Irreversible Langevin samplers and variance reduction: a large deviations approach
NASA Astrophysics Data System (ADS)
Rey-Bellet, Luc; Spiliopoulos, Konstantinos
2015-07-01
In order to sample from a given target distribution (often of Gibbs type), the Markov chain Monte Carlo method consists of constructing an ergodic Markov process whose invariant measure is the target distribution. By sampling the Markov process one can then compute, approximately, expectations of observables with respect to the target distribution. Often the Markov processes used in practice are time-reversible (i.e. they satisfy detailed balance), but our main goal here is to assess and quantify how the addition of a non-reversible part to the process can be used to improve the sampling properties. We focus on the diffusion setting (overdamped Langevin equations) where the drift consists of a gradient vector field as well as another drift which breaks the reversibility of the process but is chosen to preserve the Gibbs measure. In this paper we use the large deviation rate function for the empirical measure as a tool to analyze the speed of convergence to the invariant measure. We show that the addition of an irreversible drift leads to a larger rate function and strictly improves the speed of convergence of the ergodic average for (generic smooth) observables. We also deduce from this result that the asymptotic variance decreases under the addition of the irreversible drift, and we give an explicit characterization of the observables whose variance is not reduced, in terms of a nonlinear Poisson equation. Our theoretical results are illustrated and supplemented by numerical simulations.
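The construction described above can be sketched for a 2D standard Gaussian target V(x) = |x|^2/2, where an antisymmetric (rotational) drift gamma*J*grad(V) leaves the Gibbs measure invariant for any gamma. The Euler-Maruyama discretization, step size, and gamma below are my own illustrative choices, not the paper's.

```python
import math
import random

def langevin_chain(gamma, steps, dt, rng):
    """Euler-Maruyama for dX = -(I + gamma*J) X dt + sqrt(2 dt) xi,
    with J = [[0, 1], [-1, 0]]. The added drift -gamma*J*grad(V) is
    divergence-free against the Gaussian density, so the standard
    Gaussian remains the invariant measure for every gamma."""
    x, y = 0.0, 0.0
    xs = []
    s = math.sqrt(2 * dt)
    for _ in range(steps):
        dx = -(x + gamma * y) * dt + s * rng.gauss(0, 1)
        dy = -(y - gamma * x) * dt + s * rng.gauss(0, 1)
        x, y = x + dx, y + dy
        xs.append(x)
    return xs

rng = random.Random(0)
xs = langevin_chain(gamma=2.0, steps=50000, dt=0.02, rng=rng)
mean = sum(xs) / len(xs)
second = sum(v * v for v in xs) / len(xs)
# The ergodic averages should approach the Gaussian moments 0 and 1.
print(round(mean, 2), round(second, 2))
```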
Concentration variance decay during magma mixing: a volcanic chronometer.
Perugini, Diego; De Campos, Cristina P; Petrelli, Maurizio; Dingwell, Donald B
2015-01-01
The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing - a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical "mixing to eruption" time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest. PMID:26387555
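To illustrate how an exponential concentration-variance decay can serve as a clock, the sketch below fits sigma^2(t) = sigma_0^2 exp(-k t) by log-linear least squares and inverts it for a mixing-to-eruption time. The data and rate constant are fabricated for illustration; the real CVD-R calibration comes from the authors' high-temperature experiments.

```python
import math

# Hypothetical calibration data: normalized concentration variance at
# successive mixing times (minutes), following sigma2 = exp(-k*t) with
# k = 0.2 plus slight noise. Real values must be measured experimentally.
times  = [0.0, 5.0, 10.0, 15.0, 20.0]
sigma2 = [1.00, 0.372, 0.134, 0.050, 0.0185]

# Log-linear least squares: ln sigma2 = ln sigma0^2 - k t
n = len(times)
xm = sum(times) / n
ym = sum(math.log(s) for s in sigma2) / n
k = -sum((t - xm) * (math.log(s) - ym) for t, s in zip(times, sigma2)) \
    / sum((t - xm) ** 2 for t in times)

# Invert the clock: elapsed time for an erupted sample whose variance
# has fallen to 10% of its initial value.
t_mix = math.log(0.10) / -k
print(round(k, 3), round(t_mix, 1))  # 0.2 11.5
```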
Variance in saccadic eye movements reflects stable traits.
Meyhöfer, Inga; Bertsch, Katja; Esser, Moritz; Ettinger, Ulrich
2016-04-01
Saccadic tasks are widely used to study cognitive processes, effects of pharmacological treatments, and mechanisms underlying psychiatric disorders. In genetic studies, it is assumed that saccadic endophenotypes are traits. While internal consistency and temporal stability of saccadic performance are high for most of the measures, the magnitude of underlying trait components has not been estimated, and influences of situational aspects and person-by-situation interactions have not been investigated. To address this, 68 healthy participants performed prosaccades, antisaccades, and memory-guided saccades on three occasions at weekly intervals at the same time of day. Latent state-trait modeling was applied to estimate the proportions of variance reflecting stable trait components, situational influences, and Person × Situation interaction effects. Mean variables for all saccadic tasks showed high to excellent reliabilities. Intraindividual standard deviations were found to be slightly less reliable. Importantly, an average of 60% of the variance of a single measurement was explained by trans-situationally stable person effects, while situational aspects and interactions between person and situation were found to play a negligible role. We conclude that saccadic variables, in standard laboratory settings, represent highly reliable measures that are largely unaffected by situational influences. Extending previous reliability studies, these findings clearly demonstrate the trait-like nature of these measures and support their role as endophenotypes.
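The trait/state/interaction decomposition above relies on latent state-trait models; a much simpler one-way random-effects ANOVA conveys the core idea of estimating the share of variance due to stable person effects. The simulated persons-by-occasions data below are illustrative, not the study's, and this ICC-style estimate is a deliberate simplification of full latent state-trait modeling.

```python
import numpy as np

def trait_variance_share(scores):
    """Person (trait) share of variance from a persons x occasions array,
    via a one-way random-effects ANOVA."""
    n, k = scores.shape
    grand = scores.mean()
    msb = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    msw = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    var_person = max((msb - msw) / k, 0.0)   # between-person variance component
    return var_person / (var_person + msw)

rng = np.random.default_rng(1)
person = rng.normal(0, 2.0, size=(200, 1))   # stable trait, sd 2
state = rng.normal(0, 1.0, size=(200, 3))    # occasion-specific noise, sd 1
share = trait_variance_share(person + state)
print(round(share, 2))  # true trait share = 4 / (4 + 1) = 0.8
```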
Cosmic Variance in the Nanohertz Gravitational Wave Background
NASA Astrophysics Data System (ADS)
Roebber, Elinore; Holder, Gilbert; Holz, Daniel E.; Warren, Michael
2016-03-01
We use large N-body simulations and empirical scaling relations between dark matter halos, galaxies, and supermassive black holes to estimate the formation rates of supermassive binary black holes (SMBBHs) and the resulting low-frequency stochastic gravitational wave background (GWB). We find this GWB to be relatively insensitive (≲ 10%) to cosmological parameters, with only slight variation between WMAP5 and Planck cosmologies. We find that uncertainty in the astrophysical scaling relations changes the amplitude of the GWB by a factor of ∼2. Current observational limits are already constraining this predicted range of models. We investigate the Poisson variance in the amplitude of the GWB for randomly generated populations of SMBBHs, finding a scatter of order unity per frequency bin below 10 nHz, and increasing to a factor of ∼10 near 100 nHz. This variance is a result of the rarity of the most massive binaries, which dominate the signal, and acts as a fundamental uncertainty on the amplitude of the underlying power law spectrum. This Poisson uncertainty dominates at ≳ 20 nHz, while at lower frequencies the dominant uncertainty is related to our poor understanding of the astrophysical scaling relations, although very low frequencies may be dominated by uncertainties related to the final parsec problem and the processes which drive binaries to the gravitational wave dominated regime. Cosmological effects are negligible at all frequencies.
Hydraulic geometry of river cross sections; theory of minimum variance
Williams, Garnett P.
1978-01-01
This study deals with the rates at which mean velocity, mean depth, and water-surface width increase with water discharge at a cross section on an alluvial stream. Such relations often follow power laws, the exponents in which are called hydraulic exponents. The Langbein (1964) minimum-variance theory is examined in regard to its validity and its ability to predict observed hydraulic exponents. The variables used with the theory were velocity, depth, width, bed shear stress, friction factor, slope (energy gradient), and stream power. Slope is often constant, in which case only velocity, depth, width, shear and friction factor need be considered. The theory was tested against a wide range of field data from various geographic areas of the United States. The original theory was intended to produce only the average hydraulic exponents for a group of cross sections in a similar type of geologic or hydraulic environment. The theory does predict these average exponents with a reasonable degree of accuracy. An attempt to forecast the exponents at any selected cross section was moderately successful. Empirical equations are more accurate than the minimum variance, Gauckler-Manning, or Chezy methods. Predictions of the exponent of width are most reliable, the exponent of depth fair, and the exponent of mean velocity poor. (Woodard-USGS)
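The hydraulic exponents discussed here are the slopes b, f, and m in the power laws w ∝ Q^b, d ∝ Q^f, v ∝ Q^m, which continuity (Q = w·d·v) constrains to sum to 1. The sketch below fits them by log-log regression; the at-a-station data and exponent values are invented for illustration, not taken from the study.

```python
import math

def fit_exponent(Q, y):
    """Least-squares slope of log y against log Q (the hydraulic exponent)."""
    lx = [math.log(q) for q in Q]
    ly = [math.log(v) for v in y]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    return sum((a - mx) * (c - my) for a, c in zip(lx, ly)) \
        / sum((a - mx) ** 2 for a in lx)

# Hypothetical at-a-station data obeying w ~ Q^0.26 and d ~ Q^0.40.
Q = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0]
width = [10.0 * q ** 0.26 for q in Q]
depth = [1.0 * q ** 0.40 for q in Q]
vel = [q / (w * d) for q, w, d in zip(Q, width, depth)]  # continuity Q = w d v

b = fit_exponent(Q, width)
f = fit_exponent(Q, depth)
m = fit_exponent(Q, vel)
print(round(b + f + m, 6))  # continuity forces b + f + m = 1
```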
Argentine Population Genetic Structure: Large Variance in Amerindian Contribution
Seldin, Michael F.; Tian, Chao; Shigeta, Russell; Scherbarth, Hugo R.; Silva, Gabriel; Belmont, John W.; Kittles, Rick; Gamron, Susana; Allevi, Alberto; Palatnik, Simon A.; Alvarellos, Alejandro; Paira, Sergio; Caprarulo, Cesar; Guillerón, Carolina; Catoggio, Luis J.; Prigione, Cristina; Berbotto, Guillermo A.; García, Mercedes A.; Perandones, Carlos E.; Pons-Estel, Bernardo A.; Alarcon-Riquelme, Marta E.
2011-01-01
Argentine population genetic structure was examined using a set of 78 ancestry informative markers (AIMs) to assess the contributions of European, Amerindian, and African ancestry in 94 members of this population. Using the Bayesian clustering algorithm STRUCTURE, the mean European contribution was 78%, the Amerindian contribution was 19.4%, and the African contribution was 2.5%. Similar results were found using a weighted least squares method: European, 80.2%; Amerindian, 18.1%; and African, 1.7%. Consistent with previous studies, the current results showed very few individuals (four of 94) with greater than 10% African admixture. Notably, when individual admixture was examined, the Amerindian and European admixture showed a very large variance, and the individual Amerindian contribution ranged from 1.5 to 84.5% in the 94 Argentine subjects. These results indicate that admixture must be considered when clinical epidemiology or case-control genetic analyses are conducted in this population. Moreover, the current study provides a set of informative SNPs that can be used to ascertain or control for this potentially hidden stratification. In addition, the large variance in admixture proportions in individual Argentine subjects shown by this study suggests that this population is appropriate for future admixture mapping studies. PMID:17177183
Variance of the Quantum Dwell Time for a Nonrelativistic Particle
NASA Technical Reports Server (NTRS)
Hahne, Gerhard
2012-01-01
Munoz, Seidel, and Muga [Phys. Rev. A 79, 012108 (2009)], following an earlier proposal by Pollak and Miller [Phys. Rev. Lett. 53, 115 (1984)] in the context of a theory of a collinear chemical reaction, showed that suitable moments of a two-flux correlation function could be manipulated to yield expressions for the mean quantum dwell time and mean square quantum dwell time for a structureless particle scattering from a time-independent potential energy field between two parallel lines in a two-dimensional spacetime. The present work proposes a generalization to a charged, nonrelativistic particle scattering from a transient, spatially confined electromagnetic vector potential in four-dimensional spacetime. The geometry of the spacetime domain is that of the slab between a pair of parallel planes, in particular those defined by constant values of the third (z) spatial coordinate. The mean Nth power, N = 1, 2, 3, ..., of the quantum dwell time in the slab is given by an expression involving an N-flux-correlation function. All these means are shown to be nonnegative. The N = 1 formula reduces to an S-matrix result published previously [G. E. Hahne, J. Phys. A 36, 7149 (2003)]; an explicit formula for N = 2, and thus for the variance of the dwell time in terms of the S-matrix, is worked out. A formula representing an incommensurability principle between variances of the output-minus-input flux of a pair of dynamical variables (such as the particle's time flux and others) is derived.
Discordance of DNA methylation variance between two accessible human tissues.
Jiang, Ruiwei; Jones, Meaghan J; Chen, Edith; Neumann, Sarah M; Fraser, Hunter B; Miller, Gregory E; Kobor, Michael S
2015-01-01
Population epigenetic studies have been seeking to identify differences in DNA methylation between specific exposures, demographic factors, or diseases in accessible tissues, but relatively little is known about how inter-individual variability differs between these tissues. This study presents an analysis of DNA methylation differences between matched peripheral blood mononuclear cells (PBMCs) and buccal epithelial cells (BECs), the two most accessible tissues for population studies, in 998 promoter-located CpG sites. Specifically, we compared probe-wise DNA methylation variance, and how this variance related to demographic factors across the two tissues. PBMCs had overall higher DNA methylation than BECs, and the two tissues tended to differ most at genomic regions of low CpG density. Furthermore, although both tissues showed appreciable probe-wise variability, the specific regions and magnitude of variability differed strongly between tissues. Lastly, through exploratory association analysis, we found indications of differential associations of BECs and PBMCs with demographic variables. The work presented here offers insight into variability of DNA methylation between individuals and across tissues, and helps guide decisions on the suitability of buccal epithelial or peripheral blood mononuclear cells for the biological questions explored by epigenetic studies in human populations.
Implications and applications of the variance-based uncertainty equalities
NASA Astrophysics Data System (ADS)
Yao, Yao; Xiao, Xing; Wang, Xiaoguang; Sun, C. P.
2015-06-01
In quantum mechanics, the variance-based Heisenberg-type uncertainty relations are a series of mathematical inequalities posing fundamental limits on the achievable accuracy of state preparations. In contrast, we construct and formulate two quantum uncertainty equalities, which hold for all pairs of incompatible observables and imply the new uncertainty relations recently introduced by L. Maccone and A. K. Pati [Phys. Rev. Lett. 113, 260401 (2014), 10.1103/PhysRevLett.113.260401]. In fact, we obtain a series of inequalities with hierarchical structure, including the Maccone-Pati inequalities as a special (weakest) case. Furthermore, we present an explicit interpretation underlying the derivations and relate these relations to the so-called intelligent states. As an illustration, we investigate the properties of these uncertainty inequalities in the qubit system, and a state-independent bound is obtained for the sum of variances. Finally, we apply these inequalities to the spin-squeezing scenario, and their implication for interferometric sensitivity is also discussed.
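One concrete instance of a state-independent bound for the sum of variances in the qubit system (my own illustration, not the authors' derivation): for any pure qubit state, the variances of the three Pauli observables sum to 3 − |r|², and since the Bloch vector r of a pure state has unit length, the sum equals exactly 2. A numerical check:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def variance(op, psi):
    """Variance <op^2> - <op>^2 in the normalized state psi."""
    mean = (psi.conj() @ op @ psi).real
    return (psi.conj() @ op @ op @ psi).real - mean ** 2

rng = np.random.default_rng(0)
for _ in range(100):
    v = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    psi = v / np.linalg.norm(v)          # random pure qubit state
    total = sum(variance(s, psi) for s in (sx, sy, sz))
    # Pure states saturate the bound: the three Pauli variances sum to 2.
    assert abs(total - 2.0) < 1e-10
print("sum of Pauli variances = 2 for all sampled pure states")
```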
Euclidean and Noetherian entropies in AdS space
Dutta, Suvankar; Gopakumar, Rajesh
2006-08-15
We examine the Euclidean action approach, as well as that of Wald, to the entropy of black holes in asymptotically AdS spaces. From the point of view of holography these two approaches are somewhat complementary in spirit, and it is not obvious why they should give the same answer in the presence of arbitrary higher derivative gravity corrections. For the case of the AdS_5 Schwarzschild black hole, we explicitly study the leading correction to the Bekenstein-Hawking entropy in the presence of a variety of higher derivative corrections studied in the literature, including the Type IIB R^4 term. We find a nontrivial agreement between the two approaches in every case. Finally, we give a general way of understanding the equivalence of these two approaches.
NASA Astrophysics Data System (ADS)
Accomazzi, Alberto; Kurtz, M. J.; Henneken, E. A.; Grant, C. S.; Thompson, D.; Di Milia, G.; Luker, J.; Murray, S. S.
2013-01-01
The NASA Astrophysics Data System (ADS) has been working hard on updating its services and interfaces to better support our community's research needs. ADS Labs is a new interface built on the old tried-and-true ADS Abstract Databases, so all of ADS's content is available through it. In this presentation we highlight the new features that have been developed in ADS Labs over the last year: new recommendations, metrics, a citation tool and enhanced fulltext search. ADS Labs has long been providing article-level recommendations based on keyword similarity, co-readership and co-citation analysis of its corpus. We have now introduced personal recommendations, which provide a list of articles to be considered based on an individual user's readership history. A new metrics interface provides a summary of the basic impact indicators for a list of records. These include the total and normalized number of papers, citations, reads, and downloads. Also included are some of the popular indices such as the h, g and i10 indices. The citation helper tool allows one to submit a set of records and obtain a list of the top 10 papers which cite and/or are cited by papers in the original list (but which are not in it). The process closely resembles the network approach of establishing "friends of friends" via an analysis of the citation network. The full-text search service now covers more than 2.5 million documents, including all the major astronomy journals, as well as physics journals published by Springer, Elsevier, the American Physical Society, the American Geophysical Union, and all of the arXiv eprints. The full-text search interface allows users and librarians to dig deep and find words or phrases in the body of the indexed articles. ADS Labs is available at http://adslabs.org
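Of the impact indices mentioned, the h index has a particularly compact definition: the largest h such that at least h papers have h or more citations each. A small sketch (my own illustration, not the ADS implementation):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank          # this paper still clears the threshold
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers each have >= 4 citations
```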
Most general AdS3 boundary conditions
NASA Astrophysics Data System (ADS)
Grumiller, Daniel; Riegler, Max
2016-10-01
We consider the most general asymptotically anti-de Sitter boundary conditions in three-dimensional Einstein gravity with negative cosmological constant. The metric contains in total twelve independent functions, six of which are interpreted as chemical potentials (or non-normalizable fluctuations) and the other half as canonical boundary charges (or normalizable fluctuations). Their presence modifies the usual Fefferman-Graham expansion. The asymptotic symmetry algebra consists of two sl(2)_k current algebras, the levels of which are given by k = ℓ/(4G_N), where ℓ is the AdS radius and G_N the three-dimensional Newton constant.
Unique Challenges Testing SDRs for Space
NASA Technical Reports Server (NTRS)
Johnson, Sandra; Chelmins, David; Downey, Joseph; Nappier, Jennifer
2013-01-01
This paper describes the approach used by the Space Communication and Navigation (SCaN) Testbed team to qualify three Software Defined Radios (SDR) for operation in space and the characterization of the platform to enable upgrades on-orbit. The three SDRs represent a significant portion of the new technologies being studied on board the SCaN Testbed, which is operating on an external truss on the International Space Station (ISS). The SCaN Testbed provides experimenters an opportunity to develop and demonstrate experimental waveforms and applications for communication, networking, and navigation concepts and advance the understanding of developing and operating SDRs in space. Qualifying a Software Defined Radio for the space environment requires additional consideration versus a hardware radio. Tests that incorporate characterization of the platform to provide information necessary for future waveforms, which might exercise extended capabilities of the hardware, are needed. The development life cycle for the radio follows the software development life cycle, where changes can be incorporated at various stages of development and test. It also enables flexibility to be added with minor additional effort. Although this provides tremendous advantages, managing the complexity inherent in a software implementation requires testing beyond the traditional hardware radio test plan. Due to schedule and resource limitations and parallel development activities, the subsystem testing of the SDRs at the vendor sites was primarily limited to typical fixed transceiver type of testing. NASA's Glenn Research Center (GRC) was responsible for the integration and testing of the SDRs into the SCaN Testbed system and conducting the investigation of the SDR to advance the technology to be accepted by missions. This paper will describe the unique tests that were conducted at both the subsystem and system level, including environmental testing, and present results. For example, test
NASA Astrophysics Data System (ADS)
Accomazzi, Alberto; Kurtz, M. J.; Henneken, E. A.; Grant, C. S.; Thompson, D.; Luker, J.; Chyla, R.; Murray, S. S.
2014-01-01
In the spring of 1993, the Smithsonian/NASA Astrophysics Data System (ADS) first launched its bibliographic search system. It was known then as the ADS Abstract Service, a component of the larger Astrophysics Data System effort which had developed an interoperable data system now seen as a precursor of the Virtual Observatory. As a result of the massive technological and sociological changes in the field of scholarly communication, the ADS is now completing the most ambitious technological upgrade in its twenty-year history. Code-named ADS 2.0, the new system features: an IT platform built on web and digital library standards; a new, extensible, industrial strength search engine; a public API with various access control capabilities; a set of applications supporting search, export, visualization, analysis; a collaborative, open source development model; and enhanced indexing of content which includes the full-text of astronomy and physics publications. The changes in the ADS platform affect all aspects of the system and its operations, including: the process through which data and metadata are harvested, curated and indexed; the interface and paradigm used for searching the database; and the follow-up analysis capabilities available to the users. This poster describes the choices behind the technical overhaul of the system, the technology stack used, and the opportunities the upgrade provides, namely gains in productivity and enhancements in our system capabilities.
Robust Techniques for Testing Heterogeneity of Variance Effects in Factorial Designs.
ERIC Educational Resources Information Center
O'Brien, Ralph G.
1978-01-01
Several ways of using traditional analysis of variance to test the homogeneity of variance in factorial designs with equal or unequal cell sizes are compared using theoretical and Monte Carlo results. (Author/JKS)
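One traditional-ANOVA route to testing homogeneity of variance, in the spirit of the approaches compared here, is to run the ANOVA on absolute deviations from group centers (the Levene/Brown-Forsythe family; O'Brien's own transformation differs in detail). A sketch with simulated groups; the group sizes and standard deviations are arbitrary.

```python
import numpy as np

def levene_F(groups, center=np.median):
    """One-way ANOVA F statistic computed on absolute deviations from each
    group's center (the median gives the Brown-Forsythe variant).
    Large values signal heterogeneous variances."""
    z = [np.abs(g - center(g)) for g in groups]
    k = len(z)
    n = sum(len(g) for g in z)
    grand = np.concatenate(z).mean()
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in z)   # between groups
    ssw = sum(((g - g.mean()) ** 2).sum() for g in z)        # within groups
    return (ssb / (k - 1)) / (ssw / (n - k))

rng = np.random.default_rng(3)
equal_var = [rng.normal(0, 1, 50), rng.normal(0, 1, 50)]
unequal_var = [rng.normal(0, 1, 50), rng.normal(0, 4, 50)]
print(levene_F(unequal_var) > levene_F(equal_var))  # heterogeneity inflates F
```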
Matrix Differencing as a Concise Expression of Test Variance: A Computer Implementation.
ERIC Educational Resources Information Center
Krus, David J.; Wilkinson, Sue Marie
1986-01-01
Matrix differencing of data vectors is introduced as a method for computing test variance and is compared to traditional analysis of variance. Applications for computer assisted instruction, provided by supplemental computer software, are also described. (Author/GDC)
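The identity behind computing variance from differenced data vectors is that the population variance equals half the mean squared pairwise difference: Var(x) = (1/(2n^2)) Σ_{i,j} (x_i − x_j)². A small sketch verifying this against the direct formula (the scores are made up for illustration):

```python
def variance_by_differencing(x):
    """Population variance from all pairwise score differences:
    Var(x) = (1 / (2 n^2)) * sum_{i,j} (x_i - x_j)^2."""
    n = len(x)
    return sum((a - b) ** 2 for a in x for b in x) / (2 * n * n)

def variance_direct(x):
    """Usual population variance via deviations from the mean."""
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

scores = [3, 7, 7, 10, 13]
print(variance_by_differencing(scores), variance_direct(scores))  # both 11.2
```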
40 CFR 142.301 - What is a small system variance?
Code of Federal Regulations, 2010 CFR
2010-07-01
....301 Section 142.301 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System... issuance of variances from the requirement to comply with a maximum contaminant level or...
Conserved charges in timelike warped AdS3 spaces
NASA Astrophysics Data System (ADS)
Donnay, L.; Fernández-Melgarejo, J. J.; Giribet, G.; Goya, A.; Lavia, E.
2015-06-01
We consider the timelike version of warped anti-de Sitter space (WAdS), which corresponds to the three-dimensional section of the Gödel solution of four-dimensional cosmological Einstein equations. This geometry presents closed timelike curves (CTCs), which are inherited from its four-dimensional embedding. In three dimensions, this type of solution can be supported without matter provided the graviton acquires mass. Here, among the different ways to consistently give mass to the graviton in three dimensions, we consider the parity-even model known as new massive gravity (NMG). In the bulk of timelike WAdS3 space, we introduce defects that, from the three-dimensional point of view, represent spinning massive particlelike objects. For this type of source, we investigate the definition of quasilocal gravitational energy as seen from infinity, far beyond the region where the CTCs appear. We also consider the covariant formalism applied to NMG to compute the mass and the angular momentum of spinning particlelike defects and compare the result with the one obtained by means of the quasilocal stress tensor. We apply these methods to special limits in which the WAdS3 solutions coincide with locally AdS3 and locally AdS2×R spaces. Finally, we make some comments about the asymptotic symmetry algebra of asymptotically WAdS3 spaces in NMG.
Understanding the influence of watershed storage caused by human interferences on ET variance
NASA Astrophysics Data System (ADS)
Zeng, R.; Cai, X.
2014-12-01
Understanding the temporal variance of evapotranspiration (ET) at the watershed scale remains a challenging task, because ET is affected by complex climate conditions, soil properties, vegetation, groundwater, and human activities. In a changing environment with extensive and intensive human interferences, understanding ET variance and its controlling factors is important for sustainable water resources management. This study presents an analysis of the effect of storage change caused by human activities on ET variance. Irrigation usually filters ET variance through the use of surface water and groundwater; however, excessive irrigation may deplete watershed storage, which changes the coincidence of water availability and energy supply for ET. This study develops a framework that incorporates the water balance and the Budyko hypothesis. It decomposes ET variance into the variances of precipitation, potential ET, and catchment storage change, together with their covariances. The contributions of the various components to ET variance are scaled by weighting functions expressed in terms of long-term climate conditions and catchment properties. ET variance is assessed using records from 32 major river basins across the world. It is found that ET variance is dominated by precipitation variance under hot-dry conditions and by evaporative demand variance under cool-wet conditions, while the coincidence of water and energy supply controls ET variance under moderate climate conditions. Watershed storage change plays an increasingly important role in determining ET variance at relatively short time scales. By incorporating storage change caused by human interferences, this framework corrects the over-estimation of ET variance in hot-dry climates and the under-estimation of ET variance in cool-wet climates. Furthermore, classification of the dominant factors of ET variance shows patterns similar to geographic zonation.
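The paper's full decomposition depends on the Budyko weighting functions, but the underlying variance identity can be illustrated with a simple water-balance sketch. The series below are synthetic and the parameter values hypothetical; the point is only that the variance of ET = P − Q − ΔS splits exactly into component variances and covariances:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # years of synthetic annual data (mm)

P = rng.normal(900.0, 120.0, n)            # precipitation
Q = 0.35 * P + rng.normal(0.0, 30.0, n)    # runoff, correlated with P
dS = rng.normal(0.0, 25.0, n)              # storage change (e.g., irrigation-driven)

ET = P - Q - dS  # annual water balance

# Var(P - Q - dS) = Var(P) + Var(Q) + Var(dS)
#                   - 2Cov(P,Q) - 2Cov(P,dS) + 2Cov(Q,dS)
c = np.cov([P, Q, dS], ddof=1)  # 3x3 sample covariance matrix
var_decomp = (c[0, 0] + c[1, 1] + c[2, 2]
              - 2 * c[0, 1] - 2 * c[0, 2] + 2 * c[1, 2])

print(np.var(ET, ddof=1), var_decomp)  # identical up to float error
```

The identity holds exactly for sample moments with a common normalization, which is why the framework can attribute ET variance to components and their covariances without residual terms.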
Greig, Jenny A; Buckley, Suzanne Mk; Waddington, Simon N; Parker, Alan L; Bhella, David; Pink, Rebecca; Rahim, Ahad A; Morita, Takashi; Nicklin, Stuart A; McVey, John H; Baker, Andrew H
2009-10-01
The binding of coagulation factor X (FX) to the hexon of adenovirus (Ad) 5 is pivotal for hepatocyte transduction. However, vectors based on Ad35, a subspecies B Ad, are in development for cancer gene therapy, as Ad35 utilizes CD46 (which is upregulated in many cancers) for transduction. We investigated whether interaction of Ad35 with FX influenced vector tropism using Ad5, Ad35, and Ad5/Ad35 chimeras: Ad5/fiber(f)35, Ad5/penton(p)35/f35, and Ad35/f5. Surface plasmon resonance (SPR) revealed that Ad35 and Ad35/f5 bound FX with approximately tenfold lower affinities than Ad5 hexon-containing viruses, and electron cryomicroscopy (cryo-EM) demonstrated a direct Ad35 hexon:FX interaction. The presence of physiological levels of FX significantly inhibited transduction of vectors containing Ad35 fibers (Ad5/f35, Ad5/p35/f35, and Ad35) in CD46-positive cells. Vectors were intravenously administered to CD46 transgenic mice in the presence and absence of FX-binding protein (X-bp), resulting in reduced liver accumulation for all vectors. Moreover, Ad5/f35 and Ad5/p35/f35 efficiently accumulated in the lung, whereas Ad5 demonstrated poor lung targeting. Additionally, X-bp significantly reduced lung genome accumulation for Ad5/f35 and Ad5/p35/f35, whereas Ad35 was significantly enhanced. In summary, vectors based on the full Ad35 serotype will be useful vectors for selective gene transfer via CD46 due to a weaker FX interaction compared to Ad5.
Analysis of variance of an underdetermined geodetic displacement problem
Darby, D.
1982-06-01
It has been suggested recently that point displacements in a free geodetic network traversing a strike-slip fault may be estimated from repeated surveys by minimizing only those displacement components normal to the strike. It is desirable to justify this procedure. We construct, from estimable quantities, a deformation parameter which is an F-statistic of the type occurring in the analysis of variance of linear models not of full rank. A test of its significance provides the criterion to justify the displacement solution. It is also interesting to study its behaviour as one varies the supposed strike of the fault. Justification of a displacement solution using data from a strike-slip fault is found, but not for data from a rift valley. The technique can be generalized to more complex patterns of deformation such as those expected near the end-zone of a fault in a dislocation model.
A comparison of variance reduction techniques for radar simulation
NASA Astrophysics Data System (ADS)
Divito, A.; Galati, G.; Iovino, D.
Importance sampling, the extreme value technique (EVT), and its generalization (G-EVT) were compared with respect to reducing the variance of radar simulation estimates. Importance sampling has a greater potential for including a priori information in the simulation experiment, and thus for reducing estimation errors; this feature is paid for by a lack of generality of the simulation procedure. The EVT is valid only when a probability tail is to be estimated (false-alarm problems) and requires, as the only a priori information, that the considered variate belong to the exponential class. The G-EVT, which introduces a shape parameter to be estimated (when unknown), attains smaller estimation errors than the EVT. The G-EVT and, to a greater extent, the EVT lead to a straightforward and general simulation procedure for estimating probability tails.
Variance of indoor radon concentration: Major influencing factors.
Yarmoshenko, I; Vasilyev, A; Malinovsky, G; Bossew, P; Žunić, Z S; Onischenko, A; Zhukovsky, M
2016-01-15
Variance of radon concentration in the dwelling atmosphere is analysed with regard to geogenic and anthropogenic influencing factors. The analysis includes a review of 81 national and regional indoor radon surveys with varying sampling patterns, sample sizes, and measurement durations, and a detailed consideration of two regional surveys (Sverdlovsk oblast, Russia, and Niška Banja, Serbia). The analysis of the geometric standard deviation (GSD) revealed that the main factors influencing the dispersion of indoor radon concentration over a territory are the area of the territory, the sample size, the characteristics of the measurement technique, the radon geogenic potential, building construction characteristics, and living habits. As shown for Sverdlovsk oblast and the town of Niška Banja, the dispersion as quantified by the GSD is reduced by restricting to certain levels of the controlling factors. Application of the developed approach to characterizing the radon exposure of the world population is discussed. PMID:26409145
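Indoor radon concentrations are typically close to lognormal, which is why the survey dispersion above is quantified by the geometric standard deviation rather than the ordinary standard deviation. A minimal sketch with synthetic (hypothetical) concentrations:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic indoor radon concentrations (Bq/m^3), lognormal as typically observed
radon = rng.lognormal(mean=4.0, sigma=0.8, size=5000)

def gsd(x):
    """Geometric standard deviation: exp of the std of the log-values."""
    return float(np.exp(np.std(np.log(x), ddof=1)))

print(gsd(radon))  # close to exp(0.8) for this synthetic sample
```

A GSD near 1 means a homogeneous territory; restricting the sample to one level of a controlling factor (one building type, one geogenic class) shrinks the spread of `log(x)` and hence the GSD, which is the effect the survey analysis exploits.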
Linear minimum variance filters applied to carrier tracking
NASA Technical Reports Server (NTRS)
Gustafson, D. E.; Speyer, J. L.
1976-01-01
A new approach is taken to the problem of tracking a fixed amplitude signal with a Brownian-motion phase process. Classically, a first-order phase-lock loop (PLL) is used; here, the problem is treated via estimation of the quadrature signal components. In this space, the state dynamics are linear with white multiplicative noise. Therefore, linear minimum-variance filters, which have a particularly simple mechanization, are suggested. The resulting error dynamics are linear at any signal/noise ratio, unlike the classical PLL. During synchronization, and above threshold, this filter with constant gains degrades by 3 per cent in output rms phase error with respect to the classical loop. However, up to 80 per cent of the maximum possible noise improvement is obtained below threshold, where the classical loop is nonoptimum, as demonstrated by a Monte Carlo analysis. Filter mechanizations are presented for both carrier and baseband operation.
The genetic and environmental variance underlying elementary cognitive tasks.
Petrill, S A; Thompson, L A; Detterman, D K
1995-05-01
Although previous studies have examined the genetic and environmental influences upon general intelligence and specific cognitive abilities in school-age children, few studies have examined elementary cognitive tasks in this population. The current study included 149 MZ and 138 same-sex DZ twin pairs who participated in the Western Reserve Twin Project. Thirty measures from the Cognitive Abilities Test (CAT; Detterman, 1986) were studied. Results indicate that (1) these measures are reliable indicators of general intelligence in children and (2) the structure of genetic and environmental influences varies across measures. These results not only indicate that elementary cognitive tasks display heterogeneous genetic and environmental effects, but also may demonstrate that individual differences in biologically based processes are not necessarily due to genetic variance.
Errors in radial velocity variance from Doppler wind lidar
NASA Astrophysics Data System (ADS)
Wang, H.; Barthelmie, R. J.; Doubrawa, P.; Pryor, S. C.
2016-08-01
A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Using both statistically simulated and observed data, this paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10 %.
The use of analysis of variance procedures in biological studies
Williams, B.K.
1987-01-01
The analysis of variance (ANOVA) is widely used in biological studies, yet there remains considerable confusion among researchers about the interpretation of hypotheses being tested. Ambiguities arise when statistical designs are unbalanced, and in particular when not all combinations of design factors are represented in the data. This paper clarifies the relationship among hypothesis testing, statistical modelling and computing procedures in ANOVA for unbalanced data. A simple two-factor fixed effects design is used to illustrate three common parametrizations for ANOVA models, and some associations among these parametrizations are developed. Biologically meaningful hypotheses for main effects and interactions are given in terms of each parametrization, and procedures for testing the hypotheses are described. The standard statistical computing procedures in ANOVA are given along with their corresponding hypotheses. Throughout the development unbalanced designs are assumed and attention is given to problems that arise with missing cells.
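The order-dependence that makes unbalanced ANOVA hypotheses ambiguous can be shown directly. The sketch below (hypothetical data, plain least squares rather than any particular ANOVA package) computes the sequential (Type I) sum of squares for factor A when A enters the model first versus last; with unbalanced cell counts the two answers differ, which is exactly why the tested hypothesis must be stated carefully:

```python
import numpy as np

def ss_resid(X, y):
    """Residual sum of squares from an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

# unbalanced two-factor layout: factor A (2 levels), factor B (2 levels)
A = np.array([0, 0, 0, 0, 0, 1, 1, 1])
B = np.array([0, 0, 1, 1, 1, 0, 1, 1])
y = np.array([3.1, 2.9, 5.0, 5.2, 4.8, 4.0, 7.1, 6.9])

ones = np.ones_like(y)
XA = np.column_stack([ones, A])
XB = np.column_stack([ones, B])
XAB = np.column_stack([ones, A, B])

# sequential (Type I) sums of squares depend on entry order when data are unbalanced
ssA_first = ss_resid(ones[:, None], y) - ss_resid(XA, y)   # SS(A | 1)
ssA_last = ss_resid(XB, y) - ss_resid(XAB, y)              # SS(A | 1, B)

print(ssA_first, ssA_last)  # different values
```

In a balanced design the two quantities coincide because the factor columns are orthogonal after centering; unbalance breaks that orthogonality, and each standard computing procedure then corresponds to a different testable hypothesis.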
Correct use of repeated measures analysis of variance.
Park, Eunsik; Cho, Meehye; Ki, Chang-Seok
2009-02-01
In biomedical research, researchers frequently use statistical procedures such as the t-test, standard analysis of variance (ANOVA), or the repeated measures ANOVA to compare means between the groups of interest. There are frequently some misuses in applying these procedures since the conditions of the experiments or statistical assumptions necessary to apply these procedures are not fully taken into consideration. In this paper, we demonstrate the correct use of repeated measures ANOVA to prevent or minimize ethical or scientific problems due to its misuse. We also describe the appropriate use of multiple comparison tests for follow-up analysis in repeated measures ANOVA. Finally, we demonstrate the use of repeated measures ANOVA by using real data and the statistical software package SPSS (SPSS Inc., USA).
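The key point of the repeated measures ANOVA is that the error term is the condition-by-subject interaction, not the pooled within-group variance a standard ANOVA would use. A minimal hand-rolled sketch (hypothetical scores; real analyses would use a statistics package, and sphericity checks are omitted):

```python
import numpy as np

# rows = subjects, columns = repeated conditions (hypothetical scores)
Y = np.array([[5., 7., 9.],
              [4., 6., 9.],
              [6., 8., 10.],
              [5., 7., 8.]])

n, k = Y.shape
grand = Y.mean()

ss_cond = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # between conditions
ss_subj = k * ((Y.mean(axis=1) - grand) ** 2).sum()   # between subjects
ss_total = ((Y - grand) ** 2).sum()
ss_err = ss_total - ss_cond - ss_subj                 # condition x subject interaction

df_cond, df_err = k - 1, (k - 1) * (n - 1)
F = (ss_cond / df_cond) / (ss_err / df_err)
print(round(F, 1))  # 72.0 for these data
```

Removing the subject sum of squares from the error term is what gives the repeated measures design its power; treating the same data as independent groups would bury the condition effect in between-subject variability.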
Slim completions offer limited stimulation variances: Part 3
Brunsman, B.J.; Matson, R.; Shook, R.A.
1994-12-01
This is the third in a series of five articles addressing barriers to increased US utilization of slimhole drilling and completion techniques. Previous articles discussed slimhole drilling and cementing. The focus of this article is stimulation, with an emphasis on hydraulic fracturing. The series is based on a study conducted for the Gas Research Institute (GRI) by an industry team consisting of Maurer Engineering, BJ Services, Baker Oil Tools, and Halliburton. Parts 1 and 2 were published in the September and October 1994 issues of Petroleum Engineer International, respectively. Potential cost savings from slimhole drilling and completion of gas wells are often inhibited by limitations on hydraulic fracturing. Variances from conventional fracturing include excessive friction pressure, fracture-fluid degradation due to excessive shear rates, proppant bridging, and limited diverting options.
Estimation of measurement variance in the context of environment statistics
NASA Astrophysics Data System (ADS)
Maiti, Pulakesh
2015-02-01
The object of environment statistics is to provide information on the environment, on its most important changes over time and across locations, and to identify the main factors that influence them. Ultimately, environment statistics are required to produce higher-quality statistical information, for which timely, reliable, and comparable data are needed. The lack of proper, uniform definitions and unambiguous classifications poses serious problems for procuring high-quality data, and these problems cause measurement errors. We consider the problem of estimating measurement variance so that measures may be adopted to improve the quality of data on environmental goods and services and on value statements in economic terms. The measurement technique considered here is that of employing personal interviewers, and the sampling design considered is two-stage sampling.
Estimating discharge measurement uncertainty using the interpolated variance estimator
Cohn, T.; Kiang, J.; Mason, R.
2012-01-01
Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
INTERPRETING MAGNETIC VARIANCE ANISOTROPY MEASUREMENTS IN THE SOLAR WIND
TenBarge, J. M.; Klein, K. G.; Howes, G. G.; Podesta, J. J.
2012-07-10
The magnetic variance anisotropy (A_m) of the solar wind has been used widely as a method to identify the nature of solar wind turbulent fluctuations; however, a thorough discussion of the meaning and interpretation of the A_m has not appeared in the literature. This paper explores the implications and limitations of using the A_m as a method for constraining the solar wind fluctuation mode composition and presents a more informative method for interpreting spacecraft data. The paper also compares predictions of the A_m from linear theory to nonlinear turbulence simulations and solar wind measurements. In both cases, linear theory compares well and suggests that the solar wind for the interval studied is dominantly Alfvénic in the inertial and dissipation ranges to scales of kρ_i ≈ 5.
Low variance at large scales of WMAP 9 year data
Gruppuso, A.; Finelli, F.; Rosa, A. De; Mandolesi, N.; Natoli, P.; Paci, F.; Molinari, D.
2013-07-01
We use an optimal estimator to study the variance of the WMAP 9 CMB field at low resolution, in both temperature and polarization. Employing realistic Monte Carlo simulation, we find statistically significant deviations from the ΛCDM model in several sky cuts for the temperature field. For the considered masks in this analysis, which cover at least the 54% of the sky, the WMAP 9 CMB sky and ΛCDM are incompatible at ≥ 99.94% C.L. at large angles ( > 5°). We find instead no anomaly in polarization. As a byproduct of our analysis, we present new, optimal estimates of the WMAP 9 CMB angular power spectra from the WMAP 9 year data at low resolution.
Variance estimation for the Federal Waterfowl Harvest Surveys
Geissler, P.H.
1988-01-01
The Federal Waterfowl Harvest Surveys provide estimates of waterfowl harvest by species for flyways and states, harvests of most other migratory game bird species (by waterfowl hunters), crippling losses for ducks, geese, and coots, days hunted, and bag per hunter. The Waterfowl Hunter Questionnaire Survey separately estimates the harvest of ducks and geese using cluster samples of hunters who buy duck stamps at sample post offices. The Waterfowl Parts Collection estimates species, age, and sex ratios from parts solicited from successful hunters who responded to the Waterfowl Hunter Questionnaire Survey in previous years. These ratios are used to partition the duck and goose harvest into species, age, and sex specific harvest estimates. Annual estimates are correlated because successful hunters who respond to the Questionnaire Survey in one year may be asked to contribute to the Parts Collection for the next three years. Bootstrap variance estimates are used because covariances among years are difficult to estimate.
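Bootstrap variance estimation of the kind used for the harvest estimates can be sketched in a few lines. The data here are synthetic (a hypothetical per-hunter harvest count), and the statistic is a simple mean so the bootstrap answer can be checked against the analytic variance of the mean; the survey's real appeal is that the same resampling works for ratio and partitioned estimates where no closed form exists:

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical per-hunter duck harvests from a survey sample
harvest = rng.poisson(3.0, size=200)

def bootstrap_var(data, stat=np.mean, n_boot=2000):
    """Bootstrap estimate of the sampling variance of a statistic."""
    n = len(data)
    reps = np.array([stat(data[rng.integers(0, n, n)]) for _ in range(n_boot)])
    return reps.var(ddof=1)

v_boot = bootstrap_var(harvest)
v_theory = harvest.var(ddof=1) / len(harvest)  # analytic variance of the mean
print(v_boot, v_theory)  # the two should agree closely
```

For the actual surveys the resampling unit would be the cluster (post office), not the individual hunter, so that the between-year covariances induced by repeated respondents are carried through the resampling.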
From means and variances to persons and patterns
Grice, James W.
2015-01-01
A novel approach for conceptualizing and analyzing data from psychological studies is presented and discussed. This approach is centered on model building in an effort to explicate the structures and processes believed to generate a set of observations. These models therefore go beyond the variable-based, path models in use today which are limiting with regard to the types of inferences psychologists can draw from their research. In terms of analysis, the newer approach replaces traditional aggregate statistics such as means, variances, and covariances with methods of pattern detection and analysis. While these methods are person-centered and do not require parametric assumptions, they are both demanding and rigorous. They also provide psychologists with the information needed to draw the primary inference they often wish to make from their research; namely, the inference to best explanation. PMID:26257672
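One concrete contrast between aggregate statistics and pattern detection is observation-level classification accuracy: instead of summarizing with a covariance, count how many individual observations match a predicted ordering. A minimal, hypothetical sketch of that person-centered counting idea (not the specific method the article proposes):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# hypothetical paired scores for n persons
x = rng.normal(size=n)
y = x + rng.normal(scale=0.8, size=n)

# aggregate view: a single correlation summarizing the whole sample
r = float(np.corrcoef(x, y)[0, 1])

# pattern view: for each person, does the predicted pattern hold?
# (here the expected pattern is simply "above-average x goes with above-average y")
matches = (x > x.mean()) == (y > y.mean())
percent_correct = float(matches.mean())

print(r, percent_correct)
```

The percentage of persons whose observations fit the expected pattern is interpretable case by case and needs no parametric assumptions, which is the spirit of the approach described above.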
Ant Colony Optimization for Markowitz Mean-Variance Portfolio Model
NASA Astrophysics Data System (ADS)
Deng, Guang-Feng; Lin, Woo-Tsong
This work presents Ant Colony Optimization (ACO), initially developed as a meta-heuristic for combinatorial optimization, for solving the cardinality-constrained Markowitz mean-variance portfolio model (a nonlinear mixed quadratic programming problem). To our knowledge, an efficient algorithmic solution for this problem has not been proposed until now, so using heuristic algorithms is imperative. Numerical solutions are obtained for five analyses of weekly price data for the following indices for the period March 1992 to September 1997: Hang Seng 31 in Hong Kong, DAX 100 in Germany, FTSE 100 in the UK, S&P 100 in the USA, and Nikkei 225 in Japan. The test results indicate that ACO is much more robust and effective than particle swarm optimization (PSO), especially for low-risk investment portfolios.
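The objective being optimized is the usual risk-return trade-off subject to a cardinality constraint on the number of assets held. The sketch below sets up that objective with hypothetical data and searches it by plain random sampling, standing in for the ACO/PSO meta-heuristics compared above (a real study would use the actual meta-heuristic and market data):

```python
import numpy as np

rng = np.random.default_rng(7)

# hypothetical weekly-return statistics for 10 assets
mu = rng.uniform(0.001, 0.004, 10)     # mean returns
A = rng.normal(size=(10, 10))
cov = A @ A.T / 100.0                  # a valid (positive semi-definite) covariance

def objective(w, lam=0.5):
    """Markowitz trade-off: lam * risk - (1 - lam) * expected return."""
    return lam * (w @ cov @ w) - (1 - lam) * (mu @ w)

def random_portfolio(k=4):
    """Random long-only portfolio holding exactly k of the 10 assets."""
    idx = rng.choice(10, size=k, replace=False)
    w = np.zeros(10)
    raw = rng.random(k)
    w[idx] = raw / raw.sum()           # weights sum to 1
    return w

best = min((random_portfolio() for _ in range(5000)), key=objective)
print(np.nonzero(best)[0])             # the k assets the best sample holds
```

The cardinality constraint (exactly k nonzero weights) is what makes the problem a mixed combinatorial-continuous one: meta-heuristics such as ACO search the discrete asset subset while the weights are handled continuously.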
A method for the microlensed flux variance of QSOs
NASA Astrophysics Data System (ADS)
Goodman, Jeremy; Sun, Ai-Lei
2014-06-01
A fast and practical method is described for calculating the microlensed flux variance of an arbitrary source by uncorrelated stars. The required inputs are the mean convergence and shear due to the smoothed potential of the lensing galaxy, the stellar mass function, and the absolute square of the Fourier transform of the surface brightness in the source plane. The mathematical approach follows previous authors but has been generalized, streamlined, and implemented in publicly available code. Examples of its application are given for Dexter and Agol's inhomogeneous-disc models as well as the usual Gaussian sources. Since the quantity calculated is a second moment of the magnification, it is only logarithmically sensitive to the sizes of very compact sources. However, for the inferred sizes of actual quasi-stellar objects (QSOs), it has some discriminatory power and may lend itself to simple statistical tests. At the very least, it should be useful for testing the convergence of microlensing simulations.
The dynamic Allan variance II: a fast computational algorithm.
Galleani, Lorenzo
2010-01-01
The stability of an atomic clock can change with time due to several factors, such as temperature, humidity, radiations, aging, and sudden breakdowns. The dynamic Allan variance, or DAVAR, is a representation of the time-varying stability of an atomic clock, and it can be used to monitor the clock behavior. Unfortunately, the computational time of the DAVAR grows very quickly with the length of the analyzed time series. In this article, we present a fast algorithm for the computation of the DAVAR, and we also extend it to the case of missing data. Numerical simulations show that the fast algorithm dramatically reduces the computational time. The fast algorithm is useful when the analyzed time series is long, or when many clocks must be monitored, or when the computational power is low, as happens onboard satellites and space probes.
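The (static) Allan variance underlying the DAVAR is cheap to compute at a single averaging factor; the cost the article attacks comes from recomputing it over a sliding window for every epoch and every averaging time. A minimal non-overlapped Allan variance, checked against white frequency noise, for which AVAR scales as σ²/m:

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapped Allan variance of fractional-frequency data y
    at averaging factor m (tau = m * tau0)."""
    n = len(y) // m
    avg = y[:n * m].reshape(n, m).mean(axis=1)  # block averages over tau
    d = np.diff(avg)
    return 0.5 * np.mean(d ** 2)

rng = np.random.default_rng(3)
y = rng.normal(0.0, 1.0, 100_000)  # white frequency noise, sigma^2 = 1

# for white FM noise, AVAR(m * tau0) is approximately sigma^2 / m
print(allan_variance(y, 1), allan_variance(y, 100))
```

A naive DAVAR repeats this for every window position, which is what makes its cost grow so quickly with series length; the fast algorithm in the article reuses partial sums between adjacent windows instead.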
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-15
... Variance of License Article 403 and Soliciting Comments, Motions to Intervene and Protests Take notice that... inspection: a. Application Type: Extension of temporary variance of license article 403. b. Project No: 12514... Commission to grant an extension of time to a temporary variance of license Article 403 that was granted...
Analysis of Variance of Migmatite Composition II: Comparison of Two Areas.
Ward, R F; Werner, S L
1964-03-01
To obtain comparison with previous results an analysis of variance was made on measurements of proportion of granite and country rock in a second Colorado migmatite. The distributional parameters (mean and variance) of both regions are similar, but the distributions of variance among the three levels of the nested design differ radically.
40 CFR 142.302 - Who can issue a small system variance?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Who can issue a small system variance? 142.302 Section 142.302 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... General Provisions § 142.302 Who can issue a small system variance? A small system variance under...
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2011-01-01
Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
Modeling Heterogeneous Variance-Covariance Components in Two-Level Models
ERIC Educational Resources Information Center
Leckie, George; French, Robert; Charlton, Chris; Browne, William
2014-01-01
Applications of multilevel models to continuous outcomes nearly always assume constant residual variance and constant random effects variances and covariances. However, modeling heterogeneity of variance can prove a useful indicator of model misspecification, and in some educational and behavioral studies, it may even be of direct substantive…
Water vapor variance measurements using a Raman lidar
NASA Technical Reports Server (NTRS)
Evans, K.; Melfi, S. H.; Ferrare, R.; Whiteman, D.
1992-01-01
Because of the importance of atmospheric water vapor variance, we have analyzed data from the NASA/Goddard Raman lidar to obtain temporal scales of water vapor mixing ratio as a function of altitude over observation periods extending to 12 hours. The ground-based lidar measures water vapor mixing ratio from near the earth's surface to an altitude of 9-10 km. Moisture profiles are acquired once every minute with 75 m vertical resolution. Data at each 75-meter altitude level can be displayed as a function of time from the beginning to the end of an observation period. These time sequences have been spectrally analyzed using a fast Fourier transform technique. An example of such a temporal spectrum, obtained between 00:22 and 10:29 UT on December 6, 1991, is shown in the figure. The curve shown in the figure represents the spectral average of data from 11 height levels centered on an altitude of 1 km (1 ± 0.375 km). The spectrum shows a decrease in energy density with frequency which generally follows a -5/3 power law over the spectral interval 3x10^-5 to 4x10^-3 Hz. The flattening of the spectrum for frequencies greater than 6x10^-3 Hz is most likely a measure of instrumental noise. Spectra like that shown in the figure have been calculated for other altitudes and show changes in spectral features with height. Spectral analyses versus height have been performed for several observation periods, demonstrating changes in water vapor mixing ratio spectral character from one observation period to the next. The combination of these temporal spectra with independent measurements of winds aloft provides an opportunity to infer spatial scales of moisture variance.
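The FFT-based spectral analysis described above can be sketched end to end. The series below is synthetic, shaped to have a -5/3 power law so the slope fit can be verified; the sampling interval (one profile per minute) and the frequency band match the numbers quoted in the abstract:

```python
import numpy as np

rng = np.random.default_rng(5)
n, dt = 2 ** 14, 60.0  # one sample per minute, as in the lidar time series

# synthesize a series with a -5/3 power-law spectrum via Fourier shaping
freqs = np.fft.rfftfreq(n, dt)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-5.0 / 6.0)  # amplitude ~ f^(-5/6) gives power ~ f^(-5/3)
phases = rng.uniform(0, 2 * np.pi, len(freqs))
series = np.fft.irfft(amp * np.exp(1j * phases), n)

# periodogram and log-log slope over the quoted inertial-range band
power = np.abs(np.fft.rfft(series)) ** 2
band = (freqs > 3e-5) & (freqs < 4e-3)
slope = np.polyfit(np.log(freqs[band]), np.log(power[band]), 1)[0]
print(round(slope, 2))  # close to -5/3
```

For real lidar data the same slope fit would be applied to the averaged periodogram of each altitude level, and the flattening above ~6x10^-3 Hz would show up as a departure from the fitted power law.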
A variance-decomposition approach to investigating multiscale habitat associations
Lawler, J.J.; Edwards, T.C., Jr.
2006-01-01
The recognition of the importance of spatial scale in ecology has led many researchers to take multiscale approaches to studying habitat associations. However, few of the studies that investigate habitat associations at multiple spatial scales have considered the potential effects of cross-scale correlations in measured habitat variables. When cross-scale correlations in such studies are strong, conclusions drawn about the relative strength of habitat associations at different spatial scales may be inaccurate. Here we adapt and demonstrate an analytical technique based on variance decomposition for quantifying the influence of cross-scale correlations on multiscale habitat associations. We used the technique to quantify the variation in nest-site locations of Red-naped Sapsuckers (Sphyrapicus nuchalis) and Northern Flickers (Colaptes auratus) associated with habitat descriptors at three spatial scales. We demonstrate how the method can be used to identify components of variation that are associated only with factors at a single spatial scale as well as shared components of variation that represent cross-scale correlations. Despite the fact that no explanatory variables in our models were highly correlated (r < 0.60), we found that shared components of variation reflecting cross-scale correlations accounted for roughly half of the deviance explained by the models. These results highlight the importance of both conducting habitat analyses at multiple spatial scales and of quantifying the effects of cross-scale correlations in such analyses. Given the limits of conventional analytical techniques, we recommend alternative methods, such as the variance-decomposition technique demonstrated here, for analyzing habitat associations at multiple spatial scales. © The Cooper Ornithological Society 2006.
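The partitioning idea can be sketched with two predictor scales and ordinary R². The data are synthetic and the "local"/"landscape" variables hypothetical; the mechanics (unique components from R² differences, the shared component as the remainder) mirror the kind of variance decomposition described above:

```python
import numpy as np

def r2(X, y):
    """R-squared from an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(11)
n = 500
common = rng.normal(size=n)                     # source of cross-scale correlation
local = common + 0.8 * rng.normal(size=n)       # local-scale habitat variable
landscape = common + 0.8 * rng.normal(size=n)   # landscape-scale habitat variable
y = local + landscape + rng.normal(size=n)      # response (e.g., habitat use)

r_full = r2(np.column_stack([local, landscape]), y)
r_local = r2(local[:, None], y)
r_land = r2(landscape[:, None], y)

unique_local = r_full - r_land       # explained only at the local scale
unique_land = r_full - r_local       # explained only at the landscape scale
shared = r_full - unique_local - unique_land  # cross-scale (shared) component

print(unique_local, unique_land, shared)
```

Even though neither predictor is highly correlated with the other here, the shared component is substantial, which is the cautionary result the study reports: single-scale R² values overstate each scale's independent contribution.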
Hodological Resonance, Hodological Variance, Psychosis, and Schizophrenia: A Hypothetical Model
Birkett, Paul Brian Lawrie
2011-01-01
Schizophrenia is a disorder with a large number of clinical, neurobiological, and cognitive manifestations, none of which is invariably present. However it appears to be a single nosological entity. This article considers the likely characteristics of a pathology capable of such diverse consequences. It is argued that both deficit and psychotic symptoms can be manifestations of a single pathology. A general model of psychosis is proposed in which the informational sensitivity or responsivity of a network (“hodological resonance”) becomes so high that it activates spontaneously, to produce a hallucination, if it is in sensory cortex, or another psychotic symptom if it is elsewhere. It is argued that this can come about because of high levels of modulation such as those assumed present in affective psychosis, or because of high levels of baseline resonance, such as those expected in deafferentation syndromes associated with hallucinations, for example, Charles Bonnet. It is further proposed that schizophrenia results from a process (probably neurodevelopmental) causing widespread increases of variance in baseline resonance; consequently some networks possess high baseline resonance and become susceptible to spontaneous activation. Deficit symptoms might result from the presence of networks with increased activation thresholds. This hodological variance model is explored in terms of schizo-affective disorder, transient psychotic symptoms, diathesis-stress models, mechanisms of antipsychotic pharmacotherapy and persistence of genes predisposing to schizophrenia. Predictions and implications of the model are discussed. In particular it suggests a need for more research into psychotic states and for more single case-based studies in schizophrenia. PMID:21811475
The Mobius Effect: Addressing Learner Variance in Schools
ERIC Educational Resources Information Center
Tomlinson, Carol Ann
2004-01-01
Currently, educators separate out from typical students those whose learning needs vary from the norm. The norming and sorting process may earmark students as "different" without providing markedly unique instruction and without producing robust academic outcomes. An alternative to fragmentation for some students is the creation of classrooms in…
Constructing Dense Graphs with Unique Hamiltonian Cycles
ERIC Educational Resources Information Center
Lynch, Mark A. M.
2012-01-01
It is not difficult to construct dense graphs containing Hamiltonian cycles, but it is difficult to generate dense graphs that are guaranteed to contain a unique Hamiltonian cycle. This article presents an algorithm for generating arbitrarily large simple graphs containing "unique" Hamiltonian cycles. These graphs can be turned into dense graphs…
Added Value in Electronic Publications.
ERIC Educational Resources Information Center
Bothma, Theo J. D.
Electronic publications are flooding the market. Some of these publications are created specifically for the electronic environment, but many are conversions of existing material to electronic format. It is not worth the time and effort merely to publish existing material in electronic format if no value is added in the conversion process. The…
Thermodynamics of charged Lovelock-AdS black holes
NASA Astrophysics Data System (ADS)
Prasobh, C. B.; Suresh, Jishnu; Kuriakose, V. C.
2016-04-01
We investigate the thermodynamic behavior of maximally symmetric charged, asymptotically AdS black hole solutions of Lovelock gravity. We explore the thermodynamic stability of such solutions by the ordinary method of calculating the specific heat of the black holes and investigating its divergences which signal second-order phase transitions between black hole states. We then utilize the methods of thermodynamic geometry of black hole spacetimes in order to explain the origin of these points of divergence. We calculate the curvature scalar corresponding to a Legendre-invariant thermodynamic metric of these spacetimes and find that the divergences in the black hole specific heat correspond to singularities in the thermodynamic phase space. We also calculate the area spectrum for large black holes in the model by applying the Bohr-Sommerfeld quantization to the adiabatic invariant calculated for the spacetime.
An investigation of AdS2 backreaction and holography
NASA Astrophysics Data System (ADS)
Engelsöy, Julius; Mertens, Thomas G.; Verlinde, Herman
2016-07-01
We investigate a dilaton gravity model in AdS2 proposed by Almheiri and Polchinski [1] and develop a 1d effective description in terms of a dynamical boundary time with a Schwarzian derivative action. We show that the effective model is equivalent to a 1d version of Liouville theory, and investigate its dynamics and symmetries via a standard canonical framework. We include the coupling to arbitrary conformal matter and analyze the effective action in the presence of possible sources. We compute commutators of local operators at large time separation, and match the result with the time shift due to a gravitational shockwave interaction. We study a black hole evaporation process and comment on the role of entropy in this model.
Superconformal algebras on the boundary of AdS3
NASA Astrophysics Data System (ADS)
Rasmussen, Jørgen
1999-07-01
Motivated by recent progress on the correspondence between string theory on anti-de Sitter space and conformal field theory, we provide an explicit construction of an infinite-dimensional class of superconformal algebras on the boundary of AdS3. These space-time algebras are N-extended superconformal algebras of the kind obtainable by Hamiltonian reduction of affine SL(2|N/2) current superalgebras for N even, and are induced by the same current superalgebras residing on the world sheet. Thus, such an extended superconformal algebra is generated by N supercurrents and an SL(N/2) current algebra in addition to a U(1) current algebra. The results are obtained within the framework of free field realizations.
Systematics of Coupling Flows in AdS Backgrounds
Goldberger, Walter D.; Rothstein, Ira Z.
2003-03-18
We give an effective field theory derivation, based on the running of Planck brane gauge correlators, of the large logarithms that arise in the predictions for low energy gauge couplings in compactified AdS_5 backgrounds, including the one-loop effects of bulk scalars, fermions, and gauge bosons. In contrast to the case of charged scalars coupled to Abelian gauge fields that has been considered previously in the literature, the one-loop corrections are not dominated by a single 4D Kaluza-Klein mode. Nevertheless, in the case of gauge field loops, the amplitudes can be reorganized into a leading logarithmic contribution that is identical to the running in 4D non-Abelian gauge theory, and a term which is not logarithmically enhanced and is analogous to a two-loop effect in 4D. In a warped GUT model broken by the Higgs mechanism in the bulk, we show that the matching scale that appears in the large logarithms induced by the non-Abelian gauge fields is m_{XY}^2/k, where m_{XY} is the bulk mass of the XY bosons and k is the AdS curvature. This is in contrast to the UV scale in the logarithmic contributions of scalars, which is simply the bulk mass m. Our results are summarized in a set of simple rules that can be applied to compute the leading logarithmic predictions for coupling constant relations within a given warped GUT model. We present results for both bulk Higgs and boundary breaking of the GUT gauge
Holography beyond conformal invariance and AdS isometry?
Barvinsky, A. O.
2015-03-15
We suggest that the principle of holographic duality be extended beyond conformal invariance and AdS isometry. Such an extension is based on a special relation between functional determinants of the operators acting in the bulk and on its boundary, provided that the boundary operator represents the inverse propagators of the theory induced on the boundary by the Dirichlet boundary value problem in the bulk spacetime. This relation holds for operators of a general spin-tensor structure on generic manifolds with boundaries irrespective of their background geometry and conformal invariance, and it apparently underlies numerous O(N^0) tests of the AdS/CFT correspondence, based on direct calculation of the bulk and boundary partition functions, Casimir energies, and conformal anomalies. The generalized holographic duality is discussed within the concept of the “double-trace” deformation of the boundary theory, which is responsible in the case of large-N CFT coupled to the tower of higher-spin gauge fields for the renormalization group flow between infrared and ultraviolet fixed points. Potential extension of this method beyond the one-loop order is also briefly discussed.
NASA Astrophysics Data System (ADS)
Asanuma, Jun
Variances of the velocity components and scalars are important as indicators of turbulence intensity. They can also be utilized to estimate surface fluxes in several types of "variance methods," and the estimated fluxes can be regional values if the variances from which they are calculated are regionally representative measurements. With these motivations, variances measured by an aircraft in the unstable ABL over a flat pine forest during HAPEX-Mobilhy were analyzed within the context of similarity scaling arguments. The variances of temperature and vertical velocity within the atmospheric surface layer were found to follow the Monin-Obukhov similarity theory closely and to yield reasonable estimates of the surface sensible heat fluxes when used in variance methods. This validates the variance methods with aircraft measurements. On the other hand, the specific humidity variances were influenced by the surface heterogeneity and clearly failed to obey Monin-Obukhov similarity. A simple analysis based on the similarity law for free convection produced a comprehensible and quantitative picture of the effect of surface flux heterogeneity on the statistical moments, and revealed that variances of the active and passive scalars become dissimilar because of their different roles in turbulence. The analysis also indicated that the mean quantities are affected by the heterogeneity as well, but to a lesser extent than the variances. The temperature variances in the mixed layer (ML) were examined by using a generalized top-down bottom-up diffusion model with several combinations of velocity scales and inversion flux models. The results showed that the surface shear stress exerts considerable influence on the lower ML. ML variance methods using the temperature and vertical velocity variances were also tested, and their feasibility was investigated. Finally, the variances in the ML were analyzed in terms of the local similarity concept; the results confirmed the original
Auto-configuration protocols in mobile ad hoc networks.
Villalba, Luis Javier García; Matesanz, Julián García; Orozco, Ana Lucila Sandoval; Díaz, José Duván Márquez
2011-01-01
The TCP/IP protocol allows the different nodes in a network to communicate by associating a different IP address to each node. In wired or wireless networks with infrastructure, we have a server or node acting as such which correctly assigns IP addresses, but in mobile ad hoc networks there is no such centralized entity capable of carrying out this function. Therefore, a protocol is needed to perform the network configuration automatically and in a dynamic way, which will use all nodes in the network (or part thereof) as if they were servers that manage IP addresses. This article reviews the major proposed auto-configuration protocols for mobile ad hoc networks, with particular emphasis on one of the most recent: D2HCP. This work also includes a comparison of auto-configuration protocols for mobile ad hoc networks by specifying the most relevant metrics, such as a guarantee of uniqueness, overhead, latency, dependency on the routing protocol and uniformity.
Bush, B.; Jenkin, T.; Lipowicz, D.; Arent, D. J.; Cooke, R.
2012-01-01
Does large-scale penetration of renewable generation such as wind and solar power pose economic and operational burdens on the electricity system? A number of studies have pointed to the potential benefits of renewable generation as a hedge against the volatility and potential escalation of fossil fuel prices. Research also suggests that the lack of correlation of renewable energy costs with fossil fuel prices means that adding large amounts of wind or solar generation may also reduce the volatility of system-wide electricity costs. Such variance reduction of system costs may be of significant value to consumers due to risk aversion. The analysis in this report recognizes that the potential value of risk mitigation associated with wind generation and natural gas generation may depend on whether one considers the consumer's perspective or the investor's perspective, and on whether the market is regulated or deregulated. We analyze the risk and return trade-offs for wind and natural gas generation in deregulated markets based on hourly prices and load over a 10-year period, using historical data in the PJM Interconnection (PJM) from 1999 to 2008. Similar analysis is then simulated and evaluated for regulated markets under certain assumptions.
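The variance-reduction argument here is standard two-asset portfolio algebra: system cost volatility falls as the correlation between the two generation costs falls. A minimal sketch, with purely hypothetical volatility numbers (not figures from the report):

```python
# Portfolio standard deviation of a two-source cost mix:
# var = (w1*s1)^2 + (w2*s2)^2 + 2*w1*w2*rho*s1*s2
def portfolio_std(w1, s1, s2, rho):
    """w1: share of source 1 (source 2 gets 1 - w1); s1, s2: cost
    volatilities; rho: correlation between the two cost streams."""
    w2 = 1.0 - w1
    var = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * rho * s1 * s2
    return var ** 0.5

# Hypothetical example: 80% gas-like generation (30% cost volatility)
# blended with 20% renewable-like generation (10% cost volatility).
mixed_uncorr = portfolio_std(0.8, 0.30, 0.10, 0.0)  # uncorrelated costs
mixed_corr = portfolio_std(0.8, 0.30, 0.10, 1.0)    # perfectly correlated
```

With zero correlation the blended volatility is below the weighted average of the two volatilities, which is the hedging effect the report describes.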
Markowitz, Orit; Schwartz, Michelle; Minhas, Sumeet; Siegel, Daniel M
2016-01-01
Non-invasive imaging devices are currently being utilized in research and clinical settings to help visualize, characterize, and diagnose cancers of the skin. Speckle-variance optical coherence tomography (svOCT) is one such technology that offers considerable promise for non-invasive, real time detection of skin cancers given its added ability to show changes in microvasculature. We present four early lesions of the face, namely sebaceous hyperplasia, basal cell skin cancer, pigmented actinic keratosis, and malignant melanoma in situ, that each display different important identification markers on svOCT. Up until now, svOCT has mainly been evaluated for lesion diagnosis using transversal (vertical) sections. Our preliminary svOCT findings use dynamic en face (horizontal) visualization to differentiate lesions based on their specific vascular organizations. These observed patterns further elucidate the potential of this imaging device to become a powerful tool in patient disease assessment. PMID:27617454
myADS-arXiv -- a Tailor-made, Open Access, Virtual Journal
NASA Astrophysics Data System (ADS)
Henneken, E.; Kurtz, M. J.; Eichhorn, G.; Accomazzi, A.; Grant, C. S.; Thompson, D.; Bohlen, E.; Murray, S. S.
2007-10-01
The myADS-arXiv service provides the scientific community with a one stop shop for staying up-to-date with a researcher's field of interest. The service provides a powerful and unique filter on the enormous amount of bibliographic information added to the ADS on a daily basis. It also provides a complete view of the most relevant papers available in the subscriber's field of interest. With this service, the subscriber will get to know the latest developments, popular trends and the most important papers. This makes the service not only unique from a technical point of view, but also from a content point of view. On this poster we will argue why myADS-arXiv is a tailor-made, open access, virtual journal and we will illustrate its unique character.
ADS's Dexter Data Extraction Applet
NASA Astrophysics Data System (ADS)
Demleitner, M.; Accomazzi, A.; Eichhorn, G.; Grant, C. S.; Kurtz, M. J.; Murray, S. S.
The NASA Astrophysics Data System (ADS) now holds 1.3 million scanned pages, containing numerous plots and figures for which the original data sets are lost or inaccessible. The availability of scans of the figures can significantly ease the regeneration of the data sets. For this purpose, the ADS has developed Dexter, a Java applet that supports the user in this process. Dexter's basic functionality is to let the user manually digitize a plot by marking points and defining the coordinate transformation from the logical to the physical coordinate system. Advanced features include automatic identification of axes, tracing lines and finding points matching a template. This contribution both describes the operation of Dexter from a user's point of view and discusses some of the architectural issues we faced during implementation.
Fitzpatrick, A. Liam; Kaplan, Jared; /SLAC
2012-02-14
We show that suitably regulated multi-trace primary states in large N CFTs behave like 'in' and 'out' scattering states in the flat-space limit of AdS. Their transition matrix elements approach the exact scattering amplitudes for the bulk theory, providing a natural CFT definition of the flat space S-Matrix. We study corrections resulting from the AdS curvature and particle propagation far from the center of AdS, and show that AdS simply provides an IR regulator that disappears in the flat space limit.
Falls Prevention: Unique to Older Adults
Unique Biosignatures in Caves of All Lithologies
NASA Astrophysics Data System (ADS)
Boston, P. J.; Schubert, K. E.; Gomez, E.; Conrad, P. G.
2015-10-01
Unique maze-like microbial communities on cave surfaces of all lithologies around the world are excellent candidate biosignatures for life-detection missions into caves and other extraterrestrial environments.
Unique Ideas in a New Facility
ERIC Educational Resources Information Center
Hamby, G. W.
1977-01-01
Unique features of a new vocational agriculture department facility in Diamond, Missouri, are described, which include an overhead hoist system, arc welders, storage areas, paint room, and greenhouse. (TA)
Cosmological N-body simulations with suppressed variance
NASA Astrophysics Data System (ADS)
Angulo, Raul E.; Pontzen, Andrew
2016-10-01
We present and test a method that dramatically reduces variance arising from the sparse sampling of wavemodes in cosmological simulations. The method uses two simulations which are fixed (the initial Fourier mode amplitudes are fixed to the ensemble average power spectrum) and paired (with initial modes exactly out of phase). We measure the power spectrum, monopole and quadrupole redshift-space correlation functions, halo mass function and reduced bispectrum at z = 1. By these measures, predictions from a fixed pair can be as precise on non-linear scales as an average over 50 traditional simulations. The fixing procedure introduces a non-Gaussian correction to the initial conditions; we give an analytic argument showing why the simulations are still able to predict the mean properties of the Gaussian ensemble. We anticipate that the method will drive down the computational time requirements for accurate large-scale explorations of galaxy bias and clustering statistics, and facilitate the use of numerical simulations in cosmological data interpretation.
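The fixing-and-pairing procedure described above can be sketched for a single set of Fourier modes. This is a toy illustration under the stated assumptions (amplitudes set exactly to the square root of the input power spectrum, paired phases shifted by pi), not the authors' simulation code:

```python
import numpy as np

def fixed_paired_modes(pk, rng):
    """Generate a 'fixed pair' of initial Fourier mode sets:
    amplitudes are set exactly to sqrt(P(k)) (no Rayleigh scatter),
    phases are random, and the paired realization has every phase
    shifted by pi (exactly out of phase with the first)."""
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(pk))
    amp = np.sqrt(pk)                    # fixed amplitude: ensemble-mean power
    modes_a = amp * np.exp(1j * phases)
    modes_b = amp * np.exp(1j * (phases + np.pi))  # paired realization
    return modes_a, modes_b

rng = np.random.default_rng(0)
pk = np.array([1.0, 0.5, 0.25, 0.125])   # hypothetical power spectrum values
a, b = fixed_paired_modes(pk, rng)
```

Each realization then reproduces the input power spectrum exactly (|delta_k|^2 = P(k) mode by mode), and averaging estimators over the out-of-phase pair cancels leading-order phase-dependent noise.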
Beyond the GUM: variance-based sensitivity analysis in metrology
NASA Astrophysics Data System (ADS)
Lira, I.
2016-07-01
Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiarized with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand.
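The point about linear models can be checked directly: for a linear measurement model Y = c1*X1 + c2*X2 with independent inputs, the first-order variance-based sensitivity indices are exactly the normalized squared terms of the law of propagation of uncertainties, so sensitivity analysis adds nothing new. A minimal sketch (coefficients and standard uncertainties are hypothetical):

```python
import numpy as np

# Linear model Y = c1*X1 + c2*X2, X1 and X2 independent.
# First-order index S_i = c_i^2 * u_i^2 / u_Y^2, i.e. exactly the
# normalized squared terms of the GUM law of propagation of uncertainties.
c = np.array([2.0, 0.5])        # sensitivity coefficients dY/dX_i
u = np.array([0.1, 0.3])        # standard uncertainties of the inputs

var_terms = (c * u) ** 2        # per-input variance contributions
S = var_terms / var_terms.sum() # first-order sensitivity indices
```

For non-linear models the indices must instead be estimated (e.g. by Monte Carlo), which is where variance-based sensitivity analysis earns its keep.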
Variance Anisotropy of Solar Wind Velocity and Magnetic Field Fluctuations
NASA Astrophysics Data System (ADS)
Oughton, S.; Matthaeus, W. H.; Wan, M.
2015-12-01
At MHD scales in the solar wind, velocity and magnetic field fluctuations are typically observed to have much more energy in the components transverse to the mean magnetic field, relative to the parallel components [e.g., 1, 2]. This is often referred to as variance anisotropy. Various explanations for it have been suggested, including that the fluctuations are predominantly shear Alfvén waves [1] and that turbulent dynamics leads to such states [e.g., 3]. Here we investigate the origin and strength of such variance anisotropies, using spectral method simulations of the compressible (polytropic) 3D MHD equations. We report on results from runs with several different classes of initial conditions. These classes include (i) fluctuations polarized only in the same sense as shear Alfvén waves (aka toroidal polarization), (ii) randomly polarized fluctuations, and (iii) fluctuations restricted so that most of the energy is in modes which have their wavevectors perpendicular, or nearly so, to the background magnetic field: quasi-2D modes. The plasma beta and Mach number dependence [4] of quantities like the variance anisotropy, Alfvén ratio, and fraction of the energy in the toroidal fluctuations will be examined, along with the timescales for the development of any systematic features. Implications for solar wind fluctuations will be discussed. References: [1] Belcher & Davis 1971, J. Geophys. Res., 76, 3534. [2] Oughton et al. 2015, Phil. Trans. Roy. Soc. A, 373, 20140152. [3] Matthaeus et al. 1996, J. Geophys. Res., 101, 7619. [4] Smith et al. 2006, J. Geophys. Res., 111, A09111.
Lung vasculature imaging using speckle variance optical coherence tomography
NASA Astrophysics Data System (ADS)
Cua, Michelle; Lee, Anthony M. D.; Lane, Pierre M.; McWilliams, Annette; Shaipanich, Tawimas; MacAulay, Calum E.; Yang, Victor X. D.; Lam, Stephen
2012-02-01
Architectural changes in and remodeling of the bronchial and pulmonary vasculature are important pathways in diseases such as asthma, chronic obstructive pulmonary disease (COPD), and lung cancer. However, there is a lack of methods that can find and examine small bronchial vasculature in vivo. Structural lung airway imaging using optical coherence tomography (OCT) has previously been shown to be of great utility in examining bronchial lesions during lung cancer screening under the guidance of autofluorescence bronchoscopy. Using a fiber optic endoscopic OCT probe, we acquire OCT images from in vivo human subjects. The side-looking, circumferentially-scanning probe is inserted down the instrument channel of a standard bronchoscope and manually guided to the imaging location. Multiple images are collected with the probe spinning proximally at 100 Hz. Due to friction, the distal end of the probe does not spin perfectly synchronously with the proximal end, resulting in non-uniform rotational distortion (NURD) of the images. First, we apply a correction algorithm to remove NURD. We then use a speckle variance algorithm to identify vasculature. The initial data show a vasculature density in small human airways similar to what would be expected.
Cosmic variance and the measurement of the local Hubble parameter.
Marra, Valerio; Amendola, Luca; Sawicki, Ignacy; Valkenburg, Wessel
2013-06-14
There is an approximately 9% discrepancy, corresponding to 2.4 σ, between two independent constraints on the expansion rate of the Universe: one arising indirectly from the cosmic microwave background and baryon acoustic oscillations, and one obtained more directly from local measurements of the relation between redshifts and distances to sources. We argue that by taking into account the local gravitational potential at the position of the observer, this tension, strengthened by the recent Planck results, is partially relieved and the concordance of the Standard Model of cosmology is increased. We estimate that measurements of the local Hubble constant are subject to a cosmic variance of about 2.4% (limiting the local sample to redshifts z > 0.010) or 1.3% (limiting it to z > 0.023), a more significant correction than has been taken into account so far. Nonetheless, we show that one would need a very rare fluctuation to fully explain the offset in the Hubble rates. If this tension is further strengthened, a cosmology beyond the Standard Model may prove necessary. PMID:25165911
Variance Based Measure for Optimization of Parametric Realignment Algorithms
Mehring, Carsten
2016-01-01
Neuronal responses to sensory stimuli or neuronal responses related to behaviour are often extracted by averaging neuronal activity over a large number of experimental trials. Such trial-averaging is carried out to reduce noise and to diminish the influence of other signals unrelated to the corresponding stimulus or behaviour. However, if the recorded neuronal responses are jittered in time with respect to the corresponding stimulus or behaviour, averaging over trials may distort the estimation of the underlying neuronal response. Temporal jitter between single-trial neural responses can be partially or completely removed using realignment algorithms. Here, we present a measure, named difference of time-averaged variance (dTAV), which can be used to evaluate the performance of a realignment algorithm without knowing the internal triggers of neural responses. Using simulated data, we show that using dTAV to optimize the parameter values for an established parametric realignment algorithm improved its efficacy and, therefore, reduced the jitter of neuronal responses. By removing the jitter more effectively and, therefore, enabling more accurate estimation of neuronal responses, dTAV can improve analysis and interpretation of the neural responses. PMID:27159490
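One plausible reading of the dTAV measure can be sketched as follows. The function names and the exact normalization are assumptions for illustration; the paper's precise definition may differ in detail:

```python
import numpy as np

def time_averaged_variance(trials):
    """Across-trial variance at each time point, averaged over time.
    trials: array of shape (n_trials, n_time)."""
    return trials.var(axis=0).mean()

def dTAV(before, after):
    """Difference of time-averaged variance before vs after realignment.
    Positive values indicate the realignment reduced trial-to-trial jitter."""
    return time_averaged_variance(before) - time_averaged_variance(after)

# Toy demo: identical Gaussian pulses with random temporal jitter.
rng = np.random.default_rng(1)
t = np.arange(200)
template = np.exp(-0.5 * ((t - 100) / 5.0) ** 2)
jitters = rng.integers(-20, 21, size=30)
before = np.array([np.roll(template, j) for j in jitters])
# Perfect realignment (here we cheat and use the known jitters).
after = np.array([np.roll(row, -j) for row, j in zip(before, jitters)])
```

The appeal of such a measure is that it needs no knowledge of the true trigger times: good realignment makes trials more similar to each other, which directly lowers the across-trial variance.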
Cosmic variance in [O/Fe] in the Galactic disk
NASA Astrophysics Data System (ADS)
Bertran de Lis, S.; Allende Prieto, C.; Majewski, S. R.; Schiavon, R. P.; Holtzman, J. A.; Shetrone, M.; Carrera, R.; García Pérez, A. E.; Mészáros, Sz.; Frinchaboy, P. M.; Hearty, F. R.; Nidever, D. L.; Zasowski, G.; Ge, J.
2016-05-01
We examine the distribution of the [O/Fe] abundance ratio in stars across the Galactic disk using H-band spectra from the Apache Point Galactic Evolution Experiment (APOGEE). We minimize systematic errors by considering groups of stars with similar atmospheric parameters. The APOGEE measurements in the Sloan Digital Sky Survey data release 12 reveal that the square root of the star-to-star cosmic variance in the oxygen-to-iron ratio at a given metallicity is about 0.03-0.04 dex in both the thin and thick disk. This is about twice as high as the spread found for solar twins in the immediate solar neighborhood and the difference is probably associated to the wider range of galactocentric distances spanned by APOGEE stars. We quantify the uncertainties by examining the spread among stars with the same parameters in clusters; these errors are a function of effective temperature and metallicity, ranging between 0.005 dex at 4000 K and solar metallicity, to about 0.03 dex at 4500 K and [Fe/H] ≃ -0.6. We argue that measuring the spread in [O/Fe] and other abundance ratios provides strong constraints for models of Galactic chemical evolution.
Analysis of variance (ANOVA) models in lower extremity wounds.
Reed, James F
2003-06-01
Consider a study in which 2 new treatments are being compared with a control group. One way to compare outcomes would simply be to compare the 2 treatments with the control and the 2 treatments against each other using 3 Student t tests (t test). If we were to compare 4 treatment groups, then we would need to use 6 t tests. The difficulty with using multiple t tests is that as the number of groups increases, so will the likelihood of finding a difference between some pair of groups simply by chance when no real difference exists, which is by definition a Type I error. If we were to perform 3 separate t tests, each at alpha = .05, the experiment-wise error rate increases to .14. As the number of multiple t tests increases, the experiment-wise error rate increases rather rapidly. The solution to this problem is to use analysis of variance (ANOVA) methods. Three basic ANOVA designs are reviewed, with hypothetical examples drawn from the literature to illustrate single-factor ANOVA, repeated measures ANOVA, and randomized block ANOVA. "No frills" SPSS or SAS code for each of these designs and the examples used are available from the author on request.
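The quoted error rates follow from the standard independence approximation, P(at least one false positive) = 1 - (1 - alpha)^k for k tests at level alpha. A quick check of the numbers in the abstract:

```python
# Family-wise (experiment-wise) error rate for k independent tests
# each performed at significance level alpha.
def familywise_error(alpha, k):
    return 1.0 - (1.0 - alpha) ** k

rate_3 = familywise_error(0.05, 3)  # 3 pairwise t tests (3 groups)
rate_6 = familywise_error(0.05, 6)  # 6 pairwise t tests (4 groups)
```

rate_3 comes out near .14 and rate_6 near .26, illustrating how quickly the experiment-wise error rate grows with the number of pairwise comparisons.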
Analysis of variance in neuroreceptor ligand imaging studies.
Ko, Ji Hyun; Reilhac, Anthonin; Ray, Nicola; Rusjan, Pablo; Bloomfield, Peter; Pellecchia, Giovanna; Houle, Sylvain; Strafella, Antonio P
2011-01-01
Radioligand positron emission tomography (PET) with dual scan paradigms can provide valuable insight into changes in synaptic neurotransmitter concentration due to experimental manipulation. The residual t-test has been utilized to improve the sensitivity of the t-test in PET studies. However, no further development of statistical tests using residuals has been proposed so far for cases in which there are more than two conditions. Here, we propose the residual f-test, a one-way analysis of variance (ANOVA), and examine its feasibility using simulated [(11)C]raclopride PET data. We also revisit data from our previously published [(11)C]raclopride PET study, in which 10 individuals underwent three PET scans under different conditions. We found that the residual f-test has greater sensitivity than the conventional f-test while still controlling for Type I error. The test will therefore allow us to reliably test hypotheses in the smaller sample sizes often used in exploratory PET studies.
A model selection approach to analysis of variance and covariance.
Alber, Susan A; Weiss, Robert E
2009-06-15
An alternative to analysis of variance is a model selection approach in which every partition of the treatment means into clusters with equal value is treated as a separate model. The null hypothesis that all treatments are equal corresponds to the partition with all means in a single cluster. The alternative hypothesis corresponds to the set of all other partitions of treatment means. A model selection approach can also be used for a treatment-by-covariate interaction, where the null hypothesis and each alternative correspond to a partition of treatments into clusters with equal covariate effects. We extend the partition-as-model approach to simultaneous inference for both the treatment main effect and the treatment interaction with a continuous covariate, with separate partitions for the intercepts and treatment-specific slopes. The model space is the Cartesian product of the intercept partition and the slope partition, and we develop five joint priors for this model space. In four of these priors the intercept and slope partitions are dependent. We advise on setting priors over models, and we use the model to analyze an orthodontic data set that compares the frictional resistance created by orthodontic fixtures.
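In the partition-as-model approach, the model space is the set of all partitions of the treatments into clusters, so its size is a Bell number. A small illustration of enumerating that space for four treatments (this is generic set-partition code, not the authors' implementation):

```python
def partitions(items):
    """Enumerate all partitions of a list into non-empty clusters."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        # Put 'first' into each existing cluster in turn...
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        # ...or give it a new cluster of its own.
        yield [[first]] + part

# Four treatments -> Bell(4) = 15 candidate models, including the
# single-cluster null model (all treatment means equal).
models = list(partitions(['T1', 'T2', 'T3', 'T4']))
```

For the joint intercept/slope space described above, the count squares: with four treatments there would be 15 x 15 = 225 models in the Cartesian product.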
Improving computational efficiency of Monte Carlo simulations with variance reduction
Turner, A.
2013-07-01
CCFE perform Monte Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows (WW). The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on parallel efficiency: while one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore, some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance, but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
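The splitting and Russian-roulette mechanics behind weight windows can be sketched generically. This is a toy version of the textbook scheme, not CCFE's MCNP adaptation; the splitting rule is one common choice among several:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random):
    """Split or roulette a particle of given weight against a weight
    window [w_low, w_high]; returns the list of surviving weights.
    Both operations preserve the expected total weight (unbiasedness)."""
    if weight > w_high:
        # Above the window: split into n copies of equal weight.
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:
        # Below the window: Russian roulette. Survive with probability
        # weight / w_low, and if surviving, carry weight w_low.
        survival = weight / w_low
        return [w_low] if rng.random() < survival else []
    return [weight]  # inside the window: unchanged
```

A particle arriving with weight far above the window illustrates the 'long history' problem: splitting it produces many daughter tracks that all must be followed, which is exactly what the dynamic WW adjustment described above is designed to cap.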
Neutrality and the Response of Rare Species to Environmental Variance
Benedetti-Cecchi, Lisandro; Bertocci, Iacopo; Vaselli, Stefano; Maggi, Elena; Bulleri, Fabio
2008-01-01
Neutral models and differential responses of species to environmental heterogeneity offer complementary explanations of species abundance distribution and dynamics. Under what circumstances one model prevails over the other is still a matter of debate. We show that the decay of similarity over time in rocky seashore assemblages of algae and invertebrates sampled over a period of 16 years was consistent with the predictions of a stochastic model of ecological drift at time scales larger than 2 years, but not at time scales between 3 and 24 months when similarity was quantified with an index that reflected changes in abundance of rare species. A field experiment was performed to examine whether assemblages responded neutrally or non-neutrally to changes in temporal variance of disturbance. The experimental results did not reject neutrality, but identified a positive effect of intermediate levels of environmental heterogeneity on the abundance of rare species. This effect translated into a marked decrease in the characteristic time scale of species turnover, highlighting the role of rare species in driving assemblage dynamics in fluctuating environments. PMID:18648545
ADS Labs: Supporting Information Discovery in Science Education
NASA Astrophysics Data System (ADS)
Henneken, E. A.
2013-04-01
The SAO/NASA Astrophysics Data System (ADS) is an open access digital library portal for researchers in astronomy and physics, operated by the Smithsonian Astrophysical Observatory (SAO) under a NASA grant, successfully serving the professional science community for two decades. Currently there are about 55,000 frequent users (100+ queries per year), and up to 10 million infrequent users per year. Access by the general public now accounts for about half of all ADS use, demonstrating the vast reach of the content in our databases. The visibility and use of content in the ADS can be measured by the fact that there are over 17,000 links from Wikipedia pages to ADS content, a figure comparable to the number of links that Wikipedia has to OCLC's WorldCat catalog. The ADS, through its holdings and innovative techniques available in ADS Labs, offers an environment for information discovery that is unlike any other service currently available to the astrophysics community. Literature discovery and review are important components of science education, aiding the process of preparing for a class, project, or presentation. The ADS has been recognized as a rich source of information for the science education community in astronomy, thanks to its collaborations within the astronomy community, publishers and projects like ComPADRE. One element that makes the ADS uniquely relevant for the science education community is the availability of powerful tools to explore aspects of the astronomy literature as well as the relationship between topics, people, observations and scientific papers. The other element is the extensive repository of scanned literature, a significant fraction of which consists of historical literature.
Modularity, comparative cognition and human uniqueness
Shettleworth, Sara J.
2012-01-01
Darwin's claim ‘that the difference in mind between man and the higher animals … is certainly one of degree and not of kind’ is at the core of the comparative study of cognition. Recent research provides unprecedented support for Darwin's claim as well as new reasons to question it, stimulating new theories of human cognitive uniqueness. This article compares and evaluates approaches to such theories. Some prominent theories propose sweeping domain-general characterizations of the difference in cognitive capabilities and/or mechanisms between adult humans and other animals. Dual-process theories for some cognitive domains propose that adult human cognition shares simple basic processes with that of other animals while additionally including slower-developing and more explicit uniquely human processes. These theories are consistent with a modular account of cognition and the ‘core knowledge’ account of children's cognitive development. A complementary proposal is that human infants have unique social and/or cognitive adaptations for uniquely human learning. A view of human cognitive architecture as a mosaic of unique and species-general modular and domain-general processes together with a focus on uniquely human developmental mechanisms is consistent with modern evolutionary-developmental biology and suggests new questions for comparative research. PMID:22927578
Right temporopolar activation associated with unique perception.
Asari, Tomoki; Konishi, Seiki; Jimura, Koji; Chikazoe, Junichi; Nakamura, Noriko; Miyashita, Yasushi
2008-05-15
Unique mode of perception, or the ability to see things differently from others, is one of the psychological resources required for creative mental activities. Behavioral studies using ambiguous visual stimuli have successfully induced diverse responses from subjects, and the unique responses defined in this paradigm were observed in higher frequency in the artistic population as compared to the nonartistic population. However, the neural substrates that underlie such unique perception have yet to be investigated. In the present study, ten ambiguous figures were used as stimuli. The subjects were instructed to say what the figures looked like during functional MRI scanning. The responses were classified as "frequent", "infrequent" or "unique" responses based on the appearance frequency of the same response in an independent age- and gender-matched control group. An event-related analysis contrasting unique vs. frequent responses revealed the greatest activation in the right temporal pole, which survived a whole brain multiple comparison. An alternative parametric modulation analysis was also performed to show that potentially confounding perceptual effects deriving from differences in visual stimuli make no significant contribution to this temporopolar activation. Previous neuroimaging and neuropsychological studies have shown the involvement of the temporal pole in perception-emotion linkage. Thus, our results suggest that unique perception is produced by the integration of perceptual and emotional processes, and this integration might underlie essential parts of creative mental activities.
Magnetic mass in 4D AdS gravity
NASA Astrophysics Data System (ADS)
Araneda, René; Aros, Rodrigo; Miskovic, Olivera; Olea, Rodrigo
2016-04-01
We provide a fully covariant expression for the diffeomorphic charge in four-dimensional anti-de Sitter gravity, when the Gauss-Bonnet and Pontryagin terms are added to the action. The couplings of these topological invariants are such that the Weyl tensor and its dual appear in the on-shell variation of the action and such that the action is stationary for asymptotic (anti-)self-dual solutions in the Weyl tensor. In analogy with Euclidean electromagnetism, whenever the self-duality condition is global, both the action and the total charge are identically vanishing. Therefore, for such configurations, the magnetic mass equals the Ashtekar-Magnon-Das definition.
Canonical energy and hairy AdS black holes
NASA Astrophysics Data System (ADS)
Hyun, Seungjoon; Park, Sang-A.; Yi, Sang-Heon
2016-08-01
We propose a modified version of the canonical energy which was introduced originally by Hollands and Wald. Our construction depends only on the Euler-Lagrange expression of the system and thus is independent of the ambiguity in the Lagrangian. After some comments on our construction, we briefly remark on its relevance to the boundary information metric in the context of the AdS/CFT correspondence. We also study the stability of three-dimensional hairy extremal black holes by using our construction.
Genetic Variance for Body Size in a Natural Population of Drosophila Buzzatii
Ruiz, A.; Santos, M.; Barbadilla, A.; Quezada-Diaz, J. E.; Hasson, E.; Fontdevila, A.
1991-01-01
Previous work has shown thorax length to be under directional selection in the Drosophila buzzatii population of Carboneras. In order to predict the genetic consequences of natural selection, genetic variation for this trait was investigated in two ways. First, narrow sense heritability was estimated in the laboratory F(2) generation of a sample of wild flies by means of the offspring-parent regression. A relatively high value, 0.59, was obtained. Because the phenotypic variance of wild flies was 7-9 times that of the flies raised in the laboratory, "natural" heritability may be estimated as one-seventh to one-ninth that value. Second, the contribution of the second and fourth chromosomes, which are polymorphic for paracentric inversions, to the genetic variance of thorax length was estimated in the field and in the laboratory. This was done with the assistance of a simple genetic model which shows that the variance among chromosome arrangements and the variance among karyotypes provide minimum estimates of the chromosome's contribution to the additive and genetic variances of the trait, respectively. In males raised under optimal conditions in the laboratory, the variance among second-chromosome karyotypes accounted for 11.43% of the total phenotypic variance and most of this variance was additive; by contrast, the contribution of the fourth chromosome was nonsignificant. The variance among second-chromosome karyotypes accounted for 1.56-1.78% of the total phenotypic variance in wild males and was nonsignificant in wild females. The variance among fourth chromosome karyotypes accounted for 0.14-3.48% of the total phenotypic variance in wild flies. At both chromosomes, the proportion of additive variance was higher in mating flies than in nonmating flies. PMID:1916242
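The offspring-parent regression mentioned above has a standard form: for midparent values, narrow-sense heritability equals the regression slope of offspring on midparent. A minimal sketch (toy data; not the paper's dataset or code):

```python
# Narrow-sense heritability h^2 estimated as the slope of the regression of
# offspring trait values on midparent (mean of the two parents) values.
# Toy illustration only; the Drosophila data are not reproduced here.
def heritability_midparent(midparent, offspring):
    n = len(midparent)
    mx = sum(midparent) / n
    my = sum(offspring) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(midparent, offspring)) / (n - 1)
    var = sum((x - mx) ** 2 for x in midparent) / (n - 1)
    return cov / var  # regression slope = h^2 for midparent regression

# Toy thorax-length data (arbitrary units):
h2 = heritability_midparent([1.0, 2.0, 3.0, 4.0], [1.5, 2.5, 3.0, 4.0])  # 0.8
```

When single parents (rather than midparents) are used, the slope estimates h^2/2 and must be doubled, which is why the midparent form is preferred when both parents are measured.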
Lifshitz-like systems and AdS null deformations
Narayan, K.
2011-10-15
Following K. Balasubramanian and K. Narayan [J. High Energy Phys. 08 (2010) 014], we discuss certain lightlike deformations of AdS₅×X⁵ in type IIB string theory sourced by a lightlike dilaton Φ(x⁺) dual to the N=4 super Yang-Mills theory with a lightlike varying gauge coupling. We argue that, in the case where the x⁺ direction is noncompact, these solutions describe anisotropic 3+1-dim Lifshitz-like systems with a potential in the x⁺ direction generated by the lightlike dilaton. We then describe solutions of this sort with a linear dilaton. This enables a detailed calculation of two-point correlation functions of operators dual to bulk scalars and helps illustrate the spatial structure of these theories. Following this, we discuss a nongeometric string construction involving a compactification along the x⁺ direction of this linear dilaton system. We also point out similar IIB axionic solutions. Similar bulk arguments for noncompact x⁺ can be carried out for deformations of AdS₄×X⁷ in M theory.
AdS black holes from duality in gauged supergravity
NASA Astrophysics Data System (ADS)
Halmagyi, Nick; Vanel, Thomas
2014-04-01
We study and utilize duality transformations in a particular STU-model of four-dimensional gauged supergravity. This model is a truncation of the de Wit-Nicolai N=8 theory and as such has a lift to eleven-dimensional supergravity on the seven-sphere. Our duality group is U(1)³ and while it can be applied to any solution of this theory, we consider known asymptotically AdS₄, supersymmetric black holes and focus on duality transformations which preserve supersymmetry. For static black holes we generalize the supersymmetric solutions of Cacciatori and Klemm from three magnetic charges to include two additional electric charges and argue that this is co-dimension one in the full space of supersymmetric static black holes in the STU-model. These new static black holes have nontrivial profiles for axions. For rotating black holes, we generalize the known two-parameter supersymmetric solution to include an additional parameter. When lifted to M-theory, these black holes correspond to the near-horizon geometry of a stack of BPS rotating M2-branes, spinning on an S⁷ which is fibered non-trivially over a Riemann surface.
Stability of charged global AdS4 spacetimes
NASA Astrophysics Data System (ADS)
Arias, Raúl; Mas, Javier; Serantes, Alexandre
2016-09-01
We study linear and nonlinear stability of asymptotically AdS₄ solutions in Einstein-Maxwell-scalar theory. After summarizing the set of static solutions, we first examine thermodynamical stability in the grand canonical ensemble and the phase transitions that occur among them. In the second part of the paper we focus on nonlinear stability in the microcanonical ensemble by evolving radial perturbations numerically. We find hints of an instability corner for vanishingly small perturbations of the same kind as the ones present in the uncharged case. Collapses are avoided, instead, if the charge and mass of the perturbations come close to the line of solitons. Finally we examine the soliton solutions. The linear spectrum of normal modes is not resonant and instability turns on at extrema of the mass curve. Linear stability extends to nonlinear stability up to some threshold for the amplitude of the perturbation. Beyond that, the soliton is destroyed and collapses to a hairy black hole. The relative width of this stability band scales down with the charge Q, and does not survive the blow-up limit to a planar geometry.
Low complex subspace minimum variance beamformer for medical ultrasound imaging.
Deylami, Ali Mohades; Asl, Babak Mohammadzadeh
2016-03-01
Minimum variance (MV) beamformer enhances the resolution and contrast in medical ultrasound imaging at the expense of higher computational complexity with respect to the non-adaptive delay-and-sum beamformer. The major complexity arises from the estimation of the L×L array covariance matrix using spatial averaging, which is required for more accurate estimation of the covariance matrix of correlated signals, and from its inversion, which is required for calculating the MV weight vector; these cost O(L²) and O(L³), respectively. Reducing the number of array elements decreases the computational complexity but degrades the imaging resolution. In this paper, we propose a subspace MV beamformer which preserves the advantages of the MV beamformer with lower complexity. The subspace MV neglects some rows of the array covariance matrix instead of reducing the array size. If we keep η rows of the array covariance matrix, which leads to a thin non-square matrix, the weight vector of the subspace beamformer can be achieved in the same way as the MV obtains its weight vector, with a lower complexity of O(η²L). Further calculations are saved because an η×L covariance matrix must be estimated instead of an L×L one. We simulated a wire targets phantom and a cyst phantom to evaluate the performance of the proposed beamformer. The results indicate that we can keep about 16 of 43 rows of the array covariance matrix, which reduces the order of complexity to 14% while the image resolution is still comparable to that of the standard MV beamformer. We also applied the proposed method to experimental RF data and showed that the subspace MV beamformer performs like the standard MV with lower computational complexity.
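The complexity trade-off can be sketched with a toy NumPy example. This is not the paper's algorithm: the row-selection strategy, covariance model, and normalization below are illustrative assumptions, meant only to contrast inverting the full L×L covariance with solving against a thin η×L block by least squares:

```python
# Toy contrast between full MV weights (needs the L x L inverse, O(L^3)) and
# a subspace variant that keeps only eta rows of the covariance matrix.
# Illustrative sketch only; details differ from the published beamformer.
import numpy as np

def mv_weights(R, a):
    """Standard MV: w = R^-1 a / (a^H R^-1 a), distortionless toward steering a."""
    Ria = np.linalg.solve(R, a)
    return Ria / (a.conj() @ Ria)

def subspace_mv_weights(R, a, eta):
    """Keep the first eta rows of R (a thin eta x L matrix) and solve the
    reduced system by least squares, then renormalize so a^H w = 1."""
    B = R[:eta, :]                                   # eta x L, non-square
    w, *_ = np.linalg.lstsq(B, a[:eta], rcond=None)  # min-norm solution
    return w / (a.conj() @ w)

L = 8
a = np.ones(L)                                 # toy steering vector
R = np.diag(np.arange(1.0, L + 1)) + 0.05      # toy covariance (SPD)
w_full = mv_weights(R, a)
w_sub = subspace_mv_weights(R, a, eta=4)       # half the rows retained
```

Both weight vectors satisfy the distortionless constraint a^H w = 1; the quality of the sidelobe suppression with fewer rows is what the paper's phantom experiments evaluate.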
Local variance for multi-scale analysis in geomorphometry
Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas
2011-01-01
Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements. PMID:21779138
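The four-step method described above (up-scale, compute LV in a 3 × 3 window, take its rate of change, plot against scale) can be sketched directly. The helper names and the 2 × 2 block-average up-scaling are illustrative choices, not the authors' implementation:

```python
# Sketch of the local variance (LV) multi-scale workflow:
# 1) up-scale the land-surface parameter grid, 2) LV = mean of the standard
# deviation in a 3x3 moving window, 3) ROC-LV = percent change between levels.
import numpy as np

def local_variance(grid):
    """Mean of the SD computed in a 3x3 moving window (interior cells)."""
    H, W = grid.shape
    sds = [grid[i - 1:i + 2, j - 1:j + 2].std()
           for i in range(1, H - 1) for j in range(1, W - 1)]
    return float(np.mean(sds))

def roc_lv(lv_values):
    """Rate of change of LV from one scale level to the next, in percent."""
    return [100.0 * (b - a) / a for a, b in zip(lv_values, lv_values[1:])]

def upscale(grid):
    """One resampling step: 2x2 block averaging (cell-based up-scaling)."""
    H, W = grid.shape
    return grid[:H // 2 * 2, :W // 2 * 2].reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))
```

Peaks in the ROC-LV series would then be read as the characteristic scales at which pattern elements of similar homogeneity emerge.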
Modern diet and metabolic variance – a recipe for disaster?
2014-01-01
Objective Recently, a positive correlation between alanine transaminase activity and body mass was established among healthy young individuals of normal weight. Here we explore further this relationship and propose a physiological rationale for this link. Design Cross-sectional statistical analysis of adiposity across large samples of adults differing by age, diet and lifestyle. Subjects 46,684 19–20 years old Swiss male conscripts and published data on 1000 Eskimos, 518 Toronto residents and 97,000 North American Adventists. Measurements Serum concentrations of the alanine transaminase, post-prandial glucose levels, cholesterol, body height and weight, blood pressure and routine blood analysis (thrombocytes and leukocytes) for Swiss conscripts. Adiposity measures and dietary information for other groups were also obtained. Results Stepwise multiple regression after correction for random errors of physiological tests showed that 28% of the total variance in body mass is associated with ALT concentrations. This relationship remained significant when only metabolically healthy (as defined by the American Heart Association) Swiss conscripts were selected. The data indicated that high protein only or high carbohydrate only diets are associated with lower levels of obesity than a diet combining proteins and carbohydrates. Conclusion Elevated levels of alanine transaminase, and likely other transaminases, may result in overactivity of the alanine cycle that produces pyruvate from protein. When a mixed meal of protein, carbohydrate and fat is consumed, carbohydrates and fats are digested faster and metabolised to satisfy body’s energetic needs while slower digested protein is ultimately converted to malonyl CoA and stored as fat. Chronicity of this sequence is proposed to cause accumulation of somatic fat stores and thus obesity. PMID:24502225
Propagation of variance uncertainty calculation for an autopsy tissue analysis
Bruckner, L.A.
1994-07-01
When a radiochemical analysis is reported, it is often accompanied by an uncertainty value that simply reflects the natural variation in the observed counts due to radioactive decay, the so-called counting statistics. However, when the assay procedure is complex or when the number of counts is large, there are usually other important contributors to the total measurement uncertainty that need to be considered. An assay value is almost useless unless it is accompanied by a measure of the uncertainty associated with that value. The uncertainty value should reflect all the major sources of variation and bias affecting the assay and should provide a specified level of confidence. An approach to uncertainty calculation that includes the uncertainty due to instrument calibration, values of the standards, and intermediate measurements as well as counting statistics is presented and applied to the analysis of an autopsy tissue. This approach, usually called propagation of variance, attempts to clearly distinguish between errors that have systematic (bias) effects and those that have random effects on the assays. The effects of these different types of errors are then propagated to the assay using formal statistical techniques. The result is an uncertainty on the assay that has a defensible level of confidence and which can be traced to individual major contributors. However, since only measurement steps are readily quantified and since all models are approximations, it is emphasized that without empirical verification, a propagation of uncertainty model may be just a fancy model with no connection to reality. 5 refs., 1 fig., 2 tab.
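The core of a propagation-of-variance budget is the root-sum-square combination of independent relative uncertainties, kept separated into random and systematic groups. A minimal sketch under assumed inputs (the component names and values are hypothetical, not from the autopsy-tissue analysis):

```python
# Hypothetical propagation-of-variance sketch for an assay proportional to
# observed counts: combine counting statistics (random) with calibration and
# reference-standard uncertainties (systematic) by root-sum-square.
import math

def assay_uncertainty(counts, cal_rel_u, std_rel_u):
    """Return (random, systematic, total) relative uncertainties.

    counts    : gross counts; Poisson counting statistics give sqrt(N)/N
    cal_rel_u : relative uncertainty of the instrument calibration
    std_rel_u : relative uncertainty of the reference standard's value
    """
    random_u = 1.0 / math.sqrt(counts)            # counting statistics
    systematic_u = math.hypot(cal_rel_u, std_rel_u)
    total_u = math.hypot(random_u, systematic_u)  # root-sum-square combination
    return random_u, systematic_u, total_u
```

With 10,000 counts the counting statistics alone give only 1% relative uncertainty, so the calibration and standard terms dominate the budget, which is the abstract's point about large-count assays.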
Exemplar Variance Supports Robust Learning of Facial Identity
2015-01-01
Differences in the visual processing of familiar and unfamiliar faces have prompted considerable interest in face learning, the process by which unfamiliar faces become familiar. Previous work indicates that face learning is determined in part by exposure duration; unsurprisingly, viewing faces for longer affords superior performance on subsequent recognition tests. However, there has been further speculation that exemplar variation, experience of different exemplars of the same facial identity, contributes to face learning independently of viewing time. Several leading accounts of face learning, including the averaging and pictorial coding models, predict an exemplar variation advantage. Nevertheless, the exemplar variation hypothesis currently lacks empirical support. The present study therefore sought to test this prediction by comparing the effects of unique exemplar face learning—a condition rich in exemplar variation—and repeated exemplar face learning—a condition that equates viewing time, but constrains exemplar variation. Crucially, observers who received unique exemplar learning displayed better recognition of novel exemplars of the learned identities at test, than observers in the repeated exemplar condition. These results have important theoretical and substantive implications for models of face learning and for approaches to face training in applied contexts. PMID:25867504
Senior, Alistair M; Gosby, Alison K; Lu, Jing; Simpson, Stephen J; Raubenheimer, David
2016-01-01
Meta-analysis, which drives evidence-based practice, typically focuses on the average response of subjects to a treatment. For instance in nutritional research the difference in average weight of participants on different diets is typically used to draw conclusions about the relative efficacy of interventions. As a result of their focus on the mean, meta-analyses largely overlook the effects of treatments on inter-subject variability. Recent tools from the study of biological evolution, where inter-individual variability is one of the key ingredients for evolution by natural selection, now allow us to study inter-subject variability using established meta-analytic models. Here we use meta-analysis to study how low carbohydrate (LC) ad libitum diets and calorie restricted diets affect variance in mass. We find that LC ad libitum diets may have a more variable outcome than diets that prescribe a reduced calorie intake. Our results suggest that whilst LC diets are effective in a large proportion of the population, for a subset of individuals, calorie restricted diets may be more effective. There is evidence that LC ad libitum diets rely on appetite suppression to drive weight loss. Extending this hypothesis, we suggest that between-individual variability in protein appetite may drive the trends that we report. A priori identification of an individual's target intake for protein may help define the most effective dietary intervention to prescribe for weight loss. PMID:27491895
Doppler Lidar Vertical Velocity Statistics Value-Added Product
Newsom, R. K.; Sivaraman, C.; Shippert, T. R.; Riihimaki, L. D.
2015-07-01
Accurate height-resolved measurements of higher-order statistical moments of vertical velocity fluctuations are crucial for improved understanding of turbulent mixing and diffusion, convective initiation, and cloud life cycles. The Atmospheric Radiation Measurement (ARM) Climate Research Facility operates coherent Doppler lidar systems at several sites around the globe. These instruments provide measurements of clear-air vertical velocity profiles in the lower troposphere with a nominal temporal resolution of 1 sec and height resolution of 30 m. The purpose of the Doppler lidar vertical velocity statistics (DLWSTATS) value-added product (VAP) is to produce height- and time-resolved estimates of vertical velocity variance, skewness, and kurtosis from these raw measurements. The VAP also produces estimates of cloud properties, including cloud-base height (CBH), cloud frequency, cloud-base vertical velocity, and cloud-base updraft fraction.
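The higher-order statistical moments the VAP produces have standard definitions. A small sketch (not the ARM DLWSTATS code) computing variance, skewness, and kurtosis of a vertical-velocity time series at one height from central moments:

```python
# Variance, skewness and kurtosis of a vertical-velocity series w (one height
# bin) from central moments. Illustrative sketch, not the DLWSTATS VAP code.
def velocity_moments(w):
    n = len(w)
    mean = sum(w) / n
    m2 = sum((x - mean) ** 2 for x in w) / n   # second central moment
    m3 = sum((x - mean) ** 3 for x in w) / n   # third central moment
    m4 = sum((x - mean) ** 4 for x in w) / n   # fourth central moment
    variance = m2
    skewness = m3 / m2 ** 1.5                  # dimensionless asymmetry
    kurtosis = m4 / m2 ** 2                    # 3 for a Gaussian
    return variance, skewness, kurtosis
```

Applied per height bin over a time window, this yields the height- and time-resolved moment profiles the VAP reports; positive skewness, for example, is commonly read as narrow strong updrafts amid broad weak downdrafts.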
Amygdalar enlargement associated with unique perception.
Asari, Tomoki; Konishi, Seiki; Jimura, Koji; Chikazoe, Junichi; Nakamura, Noriko; Miyashita, Yasushi
2010-01-01
Interference by amygdalar activity in perceptual processes has been reported in many previous studies. Consistent with these reports, previous clinical studies have shown amygdalar volume change in multiple types of psychotic disease presenting with unusual perception. However, the relationship between variation in amygdalar volume in the normal population and the tendency toward unusual or unique perception has never been investigated. To address this issue, we defined an index to represent the tendency toward unique perception using ambiguous stimuli: subjects were instructed to state what the figures looked like to them, and "unique responses" were defined depending on the appearance frequency of the same responses in an age- and gender-matched control group. The index was defined as the ratio of unique responses to total responses per subject. We obtained structural brain images and values of the index from sixty-eight normal subjects. Voxel-based morphometry analyses revealed a positive correlation between amygdalar volume and the index. Since previous reports have indicated that unique responses were observed at higher frequency in the artistic population than in the nonartistic normal population, this positive correlation suggests that amygdalar enlargement in the normal population might be related to creative mental activity.
ERIC Educational Resources Information Center
Heene, Moritz; Hilbert, Sven; Draxler, Clemens; Ziegler, Matthias; Buhner, Markus
2011-01-01
Fit indices are widely used in order to test the model fit for structural equation models. In a highly influential study, Hu and Bentler (1999) showed that certain cutoff values for these indices could be derived, which, over time, has led to the reification of these suggested thresholds as "golden rules" for establishing the fit or other aspects…
Commentary: is Alzheimer's disease uniquely human?
Finch, Caleb E.; Austad, Steven N.
2015-01-01
That Alzheimer's disease (AD) might be a human-specific disease was hypothesized by Rapoport in 1989. Apes share an identical amyloid beta (Aβ) peptide amino acid sequence with humans and accumulate considerable Aβ deposits after age 40 years, an age when amyloid plaques are uncommon in humans. Despite their early Aβ buildup, ape brains have not shown evidence of dystrophic neurites near plaques. Aging great ape brains also have few neurofibrillary tangles, with one exception: an obese chimpanzee euthanized after a stroke displayed abundant neurofibrillary tangles, although not in the typical AD distribution. We discuss the need for more exacting evaluation of neuron density with age, and note husbandry issues that may allow great apes to live to greater ages. We remain reserved about expectations of fully developed AD-like pathology in great apes of advanced age, and cautiously support Rapoport's hypothesis. PMID:25533426
Different Flavonoids Can Shape Unique Gut Microbiota Profile In Vitro.
Huang, Jiacheng; Chen, Long; Xue, Bin; Liu, Qianyue; Ou, Shiyi; Wang, Yong; Peng, Xichun
2016-09-01
The impact of flavonoids on the relative viability of bacterial groups in the human gut microbiota has been discussed. This study aimed to compare how various flavonoids, including quercetin, catechin, and puerarin, modulate gut microbiota cultured in vitro, and to analyze the interactions between bacterial species using fructo-oligosaccharide (FOS) as the carbon source under flavonoid stress. The three plant flavonoids were each added to a multispecies culture and fermented for 24 h. The bacterial 16S rDNA amplicons were sequenced, and the composition of the microbiota community was analyzed. The results revealed that the tested flavonoids presented different activities in regulating the gut microbiota; flavonoid aglycones, but not glycosides, may inhibit the growth of certain species. Quercetin and catechin shaped unique biological webs. Bifidobacterium spp. was the center of the biological web constructed in this study. PMID:27472307
Asymptotic structure of the Einstein-Maxwell theory on AdS3
NASA Astrophysics Data System (ADS)
Pérez, Alfredo; Riquelme, Miguel; Tempo, David; Troncoso, Ricardo
2016-02-01
The asymptotic structure of AdS spacetimes in the context of General Relativity coupled to the Maxwell field in three spacetime dimensions is analyzed. Although the fall-off of the fields is relaxed with respect to that of Brown and Henneaux, the variation of the canonical generators associated with the asymptotic Killing vectors can be shown to be finite once required to span the Lie derivative of the fields. The corresponding surface integrals then acquire explicit contributions from the electromagnetic field, and become well-defined provided they fulfill suitable integrability conditions, implying that the leading terms of the asymptotic form of the electromagnetic field are functionally related. Consequently, for a generic choice of boundary conditions, the asymptotic symmetries are broken down to {R}⊗ U(1)⊗ U(1). Nonetheless, requiring compatibility of the boundary conditions with one of the asymptotic Virasoro symmetries singles out a set characterized by an arbitrary function of a single variable, whose precise form depends on the choice of the chiral copy. Remarkably, requiring the asymptotic symmetries to contain the full conformal group selects a very special set of boundary conditions that is labeled by a unique constant parameter, so that the algebra of the canonical generators is given by the direct sum of two copies of the Virasoro algebra with the standard central extension and U(1). This special set of boundary conditions makes the energy spectrum of electrically charged rotating black holes well-behaved.
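For reference, two commuting copies of the Virasoro algebra with the standard central extension, as mentioned in the closing sentence, take the familiar textbook form (this is the generic expression, not a result specific to this paper):

```latex
[L_m^{\pm}, L_n^{\pm}] = (m-n)\, L_{m+n}^{\pm}
  + \frac{c^{\pm}}{12}\, m\,(m^{2}-1)\,\delta_{m+n,\,0},
\qquad
[L_m^{+}, L_n^{-}] = 0 .
```

In the purely gravitational AdS3 case the central charges are the classic Brown-Henneaux values c^± = 3ℓ/2G; their form for the Einstein-Maxwell system is not computed here.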
Effects of noise variance model on optimal feedback design and actuator placement
NASA Technical Reports Server (NTRS)
Ruan, Mifang; Choudhury, Ajit K.
1994-01-01
In optimal placement of actuators for stochastic systems, it is commonly assumed that the actuator noise variances are not related to the feedback matrix and the actuator locations. In this paper, we will discuss the limitation of that assumption and develop a more practical noise variance model. Various properties associated with optimal actuator placement under the assumption of this noise variance model are discovered through the analytical study of a second order system.
1987-02-20
3 CBS-owned television stations and NBC's New York television station announced yesterday that they would begin accepting condom advertising. In addition, the ABC network announced it will begin running a 30-second public service message with Dr. C. Everett Koop, the US surgeon general, saying that condoms are the best protection against sexual transmission of AIDS. CBS said it will allow the 4 television stations and 18 radio stations it owns to accept condom advertising based on the attitudes of the local viewing or listening community. WCBS-TV in New York, WCAU-TV in Philadelphia and KCBS-TV in Los Angeles said they would accept such ads. CBS also owns a television station in Chicago. WCAU will air condom ads after 11 p.m. only, beginning probably next week, said Paul Webb, a station spokesman. "We recognize the legitimate sensitivities of some members of the community in regard to this issue," said Steve Cohen, the WCAU general manager. "However, it is the judgment of this station that the importance of providing information about the AIDS epidemic and means of prevention is an overriding consideration." NBC's New York television station, WNBC, announced that it will accept condom advertising and public service announcements. PMID:12269166
Unique sugar metabolic pathways of bifidobacteria.
Fushinobu, Shinya
2010-01-01
Bifidobacteria have many beneficial effects on human health. The gastrointestinal tract, where bifidobacteria naturally colonize, is an environment poor in nutrition and oxygen. Therefore, bifidobacteria have many unique glycosidases, transporters, and metabolic enzymes for sugar fermentation, allowing them to utilize diverse carbohydrates that are not absorbed by their human and animal hosts. They have a unique, effective central fermentative pathway called the bifid shunt. Recently, a novel metabolic pathway that utilizes both human milk oligosaccharides and host glycoconjugates was found. The galacto-N-biose/lacto-N-biose I metabolic pathway plays a key role in colonization of the infant gastrointestinal tract. These pathways involve many unique enzymes and proteins. This review focuses on their molecular mechanisms, as revealed by biochemical and crystallographic studies.
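For orientation, the bifid shunt (the fructose-6-phosphate phosphoketolase pathway) has a well-known overall stoichiometry; this is the textbook balance, stated here as background rather than taken from this review:

```latex
2\,\text{glucose} + 5\,\text{ADP} + 5\,\text{P}_i
  \;\longrightarrow\;
3\,\text{acetate} + 2\,\text{lactate} + 5\,\text{ATP}
```

That is, a yield of 2.5 ATP per glucose, higher than the 2 ATP per glucose of standard glycolytic homolactic fermentation, which helps explain the pathway's effectiveness in the nutrient-poor gut.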
Forsberg, Simon K G; Andreatta, Matthew E; Huang, Xin-Yuan; Danku, John; Salt, David E; Carlborg, Örjan
2015-11-01
Genome-wide association (GWA) analyses have generally been used to detect individual loci contributing to the phenotypic diversity in a population by the effects of these loci on the trait mean. More rarely, loci have also been detected based on variance differences between genotypes. Several hypotheses have been proposed to explain the possible genetic mechanisms leading to such variance signals. However, little is known about what causes these signals, or whether this genetic variance-heterogeneity reflects mechanisms of importance in natural populations. Previously, we identified a variance-heterogeneity GWA (vGWA) signal for leaf molybdenum concentrations in Arabidopsis thaliana. Here, fine-mapping of this association reveals that the vGWA emerges from the effects of three independent genetic polymorphisms that all are in strong LD with the markers displaying the genetic variance-heterogeneity. By revealing the genetic architecture underlying this vGWA signal, we uncovered the molecular source of a significant amount of hidden additive genetic variation or "missing heritability". Two of the three polymorphisms underlying the genetic variance-heterogeneity are promoter variants for Molybdate transporter 1 (MOT1), and the third a variant located ~25 kb downstream of this gene. A fourth independent association was also detected ~600 kb upstream of MOT1. Use of a T-DNA knockout allele highlights Copper Transporter 6; COPT6 (AT2G26975) as a strong candidate gene for this association. Our results show that an extended LD across a complex locus including multiple functional alleles can lead to a variance-heterogeneity between genotypes in natural populations. Further, they provide novel insights into the genetic regulation of ion homeostasis in A. thaliana, and empirically confirm that variance-heterogeneity based GWA methods are a valuable tool to detect novel associations of biological importance in natural populations. PMID:26599497
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-17
... requesting a variance from vegetation standards for levees and floodwalls to reflect organizational changes... as armoring or overbuilt sections) intended to preserve system reliability and resiliency...
75 FR 6364 - Process for Requesting a Variance From Vegetation Standards for Levees and Floodwalls
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-09
... requesting a variance from vegetation standards for levees and floodwalls to reflect organizational changes...) intended to preserve system reliability and resiliency by preventing or mitigating vegetation impacts....
40 CFR 142.303 - Which size public water systems can receive a small system variance?
Code of Federal Regulations, 2011 CFR
2011-07-01
... receive a small system variance? (a) A State exercising primary enforcement responsibility for public..., a State exercising primary enforcement responsibility for public water systems may grant a...
Encoding of natural sounds by variance of the cortical local field potential.
Ding, Nai; Simon, Jonathan Z; Shamma, Shihab A; David, Stephen V
2016-06-01
Neural encoding of sensory stimuli is typically studied by averaging neural signals across repetitions of the same stimulus. However, recent work has suggested that the variance of neural activity across repeated trials can also depend on sensory inputs. Here we characterize how intertrial variance of the local field potential (LFP) in primary auditory cortex of awake ferrets is affected by continuous natural sound stimuli. We find that natural sounds often suppress the intertrial variance of low-frequency LFP (<16 Hz). However, the amount of the variance reduction is not significantly correlated with the amplitude of the mean response at the same recording site. Moreover, the variance changes occur with longer latency than the mean response. Although the dynamics of the mean response and intertrial variance differ, spectro-temporal receptive field analysis reveals that changes in LFP variance have frequency tuning similar to multiunit activity at the same recording site, suggesting a local origin for changes in LFP variance. In summary, the spectral tuning of LFP intertrial variance and the absence of a correlation with the amplitude of the mean evoked LFP suggest substantial heterogeneity in the interaction between spontaneous and stimulus-driven activity across local neural populations in auditory cortex. PMID:26912594
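The quantity at the heart of this study, intertrial variance, is simply variance computed across repeated stimulus presentations rather than across time. A minimal sketch of that computation (illustrative only; the authors' pipeline also band-limits the LFP below 16 Hz and includes receptive-field analyses not shown here):

```python
import numpy as np

def mean_and_intertrial_variance(lfp):
    """lfp: array of shape (n_trials, n_timepoints), one row per repeated
    presentation of the same stimulus. Returns the trial-averaged response
    and the across-trial (intertrial) variance at each timepoint."""
    lfp = np.asarray(lfp, dtype=float)
    return lfp.mean(axis=0), lfp.var(axis=0, ddof=1)
```

Comparing these two time series per recording site is what reveals the abstract's key finding: the stimulus-driven reduction in intertrial variance is not simply a reflection of the mean evoked response.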
Allan, David W; Levine, Judah
2016-04-01
Over the past 50 years, variances have been developed for characterizing the instabilities of precision clocks and oscillators. These instabilities are often modeled as nonstationary processes, and the variances have been shown to be well-behaved, unbiased, and efficient descriptors of these types of processes. This paper presents a historical overview of the development of these variances. The time-domain and frequency-domain formulations are presented and their development is described. The strengths and weaknesses of these characterization metrics are discussed. These variances are also shown to be useful in other applications, such as telecommunications.
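For orientation, the simplest member of this family, the original non-overlapping Allan variance of fractional-frequency data, is σ_y²(τ) = ½⟨(ȳ_{i+1} − ȳ_i)²⟩ over adjacent averages of duration τ. A sketch of that textbook formulation (not code from the paper):

```python
import numpy as np

def allan_variance(y, m=1):
    """Non-overlapping Allan variance of fractional-frequency samples y,
    averaged over adjacent blocks of m samples (averaging time tau = m*tau0).
    Computes 0.5 * mean of squared first differences of the block averages."""
    n = len(y) // m
    blocks = np.reshape(np.asarray(y, dtype=float)[: n * m], (n, m)).mean(axis=1)
    return 0.5 * np.mean(np.diff(blocks) ** 2)
```

Plotting this quantity against τ on log-log axes is the classic way the noise types of a clock (white FM, flicker FM, random walk FM) are distinguished by their slopes.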
Uniquely designed nuclear structures of lower eukaryotes.
Iwamoto, Masaaki; Hiraoka, Yasushi; Haraguchi, Tokuko
2016-06-01
The nuclear structures of lower eukaryotes, specifically protists, often vary from those of yeasts and metazoans. Several studies have demonstrated the unique and fascinating features of these nuclear structures, such as a histone-independent condensed chromatin in dinoflagellates and two structurally distinct nuclear pore complexes in ciliates. Despite their unique molecular/structural features, functions required for formation of their cognate molecules/structures are highly conserved. This provides important information about the structure-function relationship of the nuclear structures. In this review, we highlight characteristic nuclear structures found in lower eukaryotes, and discuss their attractiveness as potential biological systems for studying nuclear structures.
Uniqueness of Nash equilibrium in vaccination games.
Bai, Fan
2016-12-01
One crucial condition for the uniqueness of the Nash equilibrium set in vaccination games is that the attack ratio decreases monotonically as the vaccine coverage level increases. We consider several deterministic vaccination models in homogeneously mixing and heterogeneously mixing populations. Based on the final size relations obtained from the deterministic epidemic models, we prove that the attack ratios can be expressed in terms of the vaccine coverage levels, and that they are decreasing functions of the vaccine coverage levels. Some thresholds, which depend on the vaccine efficacy, are presented. It is proved that for vaccination games in a homogeneously mixing population, there is a unique Nash equilibrium for each game.
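The monotonicity condition can be illustrated numerically with the standard SIR final-size relation and an all-or-nothing vaccine, one of the simplest model variants of the kind the paper covers (the function name, the fixed-point solver, and the all-or-nothing assumption are illustrative choices, not the authors' formulation or code):

```python
import math

def attack_ratio(R0, coverage, efficacy=1.0, tol=1e-12, max_iter=10000):
    """Attack ratio among unprotected individuals from the standard SIR
    final-size relation with an all-or-nothing vaccine: a fraction
    s0 = 1 - coverage*efficacy remains susceptible, and the total infected
    fraction z solves z = s0 * (1 - exp(-R0 * z))."""
    s0 = 1.0 - coverage * efficacy
    if s0 <= 0.0:
        return 0.0
    z = s0  # start at the upper bound; the iteration decreases monotonically
    for _ in range(max_iter):
        z_new = s0 * (1.0 - math.exp(-R0 * z))
        if abs(z_new - z) < tol:
            break
        z = z_new
    return z / s0
```

For R0 = 2 and a perfect vaccine, this gives attack ratios of roughly 0.80, 0.51, and near 0 at coverages 0, 0.3, and 0.6: the monotone decrease that underpins uniqueness of the equilibrium, with the drop to zero occurring past the critical coverage 1 − 1/R0.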
Transcriptomics exposes the uniqueness of parasitic plants.
Ichihashi, Yasunori; Mutuku, J Musembi; Yoshida, Satoko; Shirasu, Ken
2015-07-01
Parasitic plants have the ability to obtain nutrients directly from other plants, and several species are serious biological threats to agriculture, parasitizing crops of high economic importance. The uniqueness of parasitic plants is characterized by the presence of a multicellular organ called a haustorium, which facilitates plant-plant interactions, and by the shutdown or reduction of their own photosynthesis. Current technical advances in next-generation sequencing and bioinformatics have allowed us to dissect the molecular mechanisms behind the uniqueness of parasitic plants at the genome-wide level. In this review, we summarize recent key findings, mainly in transcriptomics, that will give us insights into the future direction of parasitic plant research.
Unique Phase Recovery for Nonperiodic Objects
NASA Astrophysics Data System (ADS)
Nugent, K. A.; Peele, A. G.; Chapman, H. N.; Mancuso, A. P.
2003-11-01
It is well known that the loss of phase information at detection means that a diffraction pattern may be consistent with a multitude of physically different structures. This Letter shows that it is possible to perform unique structural determination in the absence of a priori information using x-ray fields with phase curvature. We argue that significant phase curvature is already available using modern x-ray optics and we demonstrate an algorithm that allows the phase to be recovered uniquely and reliably.
Unique forbidden beta decays and neutrino mass
Dvornický, Rastislav; Šimkovic, Fedor
2015-10-28
The measurement of the electron energy spectrum in single β decays close to the endpoint provides a direct determination of the neutrino masses. The most sensitive experiments use β decays with low Q value, e.g. KATRIN (tritium) and MARE (rhenium). We present the theoretical spectral shape of electrons emitted in the first, second, and fourth unique forbidden β decays. Our findings show that the Kurie functions for these unique forbidden β transitions are linear in the limit of massless neutrinos like the Kurie function of the allowed β decay of tritium.
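The linearity statement can be made explicit. For an allowed decay, the Kurie function near the endpoint takes the standard form (textbook expression for the allowed case, not this paper's derivation for the forbidden transitions):

```latex
K(T) \;\propto\; \Big[(Q - T)\sqrt{(Q - T)^{2} - m_{\nu}^{2}}\Big]^{1/2}
\;\xrightarrow{\;m_{\nu}\to 0\;}\; Q - T ,
```

so for massless neutrinos the Kurie plot is a straight line ending at T = Q, while a nonzero neutrino mass bends it downward near the shifted endpoint T = Q − m_ν; the paper's result is that the same linear limit holds for the unique forbidden transitions.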
Treatability Variance for Containerised Liquids in Mixed Debris Waste - 12101
Alstatt, Catherine M.
2012-07-01
containers with incidental amounts of liquids, even if the liquid is less than 50% of the total waste volume. Under the proposed variance, all free or containerised liquids (up to 3.8 liters(L)) found in the debris would be treated and returned in solid form to the debris waste stream from which they originated. The waste would then be macro-encapsulated. (author)