Paired-Associate Learning Ability Accounts for Unique Variance in Orthographic Learning
ERIC Educational Resources Information Center
Wang, Hua-Chen; Wass, Malin; Castles, Anne
2017-01-01
Paired-associate learning is a dynamic measure of the ability to form new links between two items. This study aimed to investigate whether paired-associate learning ability is associated with success in orthographic learning, and if so, whether it accounts for unique variance beyond phonological decoding ability and orthographic knowledge. A group…
On the Relations among Regular, Equal Unique Variances, and Image Factor Analysis Models.
ERIC Educational Resources Information Center
Hayashi, Kentaro; Bentler, Peter M.
2000-01-01
Investigated the conditions under which the matrix of factor loadings from the factor analysis model with equal unique variances will give a good approximation to the matrix of factor loadings from the regular factor analysis model. Extends the results to the image factor analysis model. Discusses implications for practice. (SLD)
Adding a Parameter Increases the Variance of an Estimated Regression Function
ERIC Educational Resources Information Center
Withers, Christopher S.; Nadarajah, Saralees
2011-01-01
The linear regression model is one of the most popular models in statistics. It is also one of the simplest models in statistics. It has received applications in almost every area of science, engineering and medicine. In this article, the authors show that adding a predictor to a linear model increases the variance of the estimated regression…
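The claim can be checked numerically: for a fixed design, the variance of the fitted value at a point, sigma^2 * x0' (X'X)^{-1} x0, never decreases when a column is added to the design matrix. A minimal sketch (the data and evaluation point are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)

# Design matrices: intercept + x1, then with predictor x2 added.
X_small = np.column_stack([np.ones(n), x1])
X_big = np.column_stack([np.ones(n), x1, x2])

def fitted_variance(X, x0, sigma2=1.0):
    """Variance of the fitted value at x0: sigma^2 * x0' (X'X)^{-1} x0."""
    return sigma2 * x0 @ np.linalg.solve(X.T @ X, x0)

x0_small = np.array([1.0, 0.5])
x0_big = np.array([1.0, 0.5, 0.5])  # same point, extended with the new predictor

v_small = fitted_variance(X_small, x0_small)
v_big = fitted_variance(X_big, x0_big)
# By the block-inverse identity, v_big >= v_small for any such extension.
print(v_small, v_big)
```

The inequality is exact, not an artifact of this particular draw: the quadratic form in the augmented model equals the smaller one plus a nonnegative Schur-complement term.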
Pearcy, Benjamin T D; McEvoy, Peter M; Roberts, Lynne D
2017-02-01
This study extends knowledge about the relationship of Internet Gaming Disorder (IGD) to other established mental disorders by exploring comorbidities with anxiety, depression, Attention Deficit Hyperactivity Disorder (ADHD), and obsessive compulsive disorder (OCD), and assessing whether IGD accounts for unique variance in distress and disability. An online survey was completed by a convenience sample that engages in Internet gaming (N = 404). Participants meeting criteria for IGD based on the Personal Internet Gaming Disorder Evaluation-9 (PIE-9) reported higher comorbidity with depression, OCD, ADHD, and anxiety compared with those who did not meet the IGD criteria. IGD explained a small proportion of unique variance in distress (1%) and disability (3%). IGD accounted for a larger proportion of unique variance in disability than anxiety and ADHD, and a similar proportion to depression. Replications with clinical samples using longitudinal designs and structured diagnostic interviews are required.
Wilkinson, Eduan; Holzmayer, Vera; Jacobs, Graeme B.; de Oliveira, Tulio; Brennan, Catherine A.; Hackett, John; van Rensburg, Estrelita Janse
2015-01-01
Abstract By the end of 2012, more than 6.1 million people were infected with HIV-1 in South Africa. Subtype C was responsible for the majority of these infections and more than 300 near full-length genomes (NFLGs) have been published. Currently very few non-subtype C isolates have been identified and characterized within the country, particularly full genome non-C isolates. Seven patients from the Tygerberg Virology (TV) cohort were previously identified as possible non-C subtypes and were selected for further analyses. RNA was isolated from five individuals (TV047, TV096, TV101, TV218, and TV546) and DNA from TV016 and TV1057. The NFLGs of these samples were amplified in overlapping fragments and sequenced. Online subtyping tools REGA version 3 and jpHMM were used to screen for subtypes and recombinants. Maximum likelihood (ML) phylogenetic analysis (phyML) was used to infer subtypes and SimPlot was used to confirm possible intersubtype recombinants. We identified three subtype B (TV016, TV047, and TV1057) isolates, one subtype A1 (TV096), one subtype G (TV546), one unique AD (TV101), and one unique AC (TV218) recombinant form. This is the first NFLG of subtype G that has been described in South Africa. The subtype B sequences described also increased the NFLG subtype B sequences in Africa from three to six. There is a need for more NFLG sequences, as partial HIV-1 sequences may underrepresent viral recombinant forms. It is also necessary to continue monitoring the evolution and spread of HIV-1 in South Africa, because understanding viral diversity may play an important role in HIV-1 prevention strategies. PMID:25492033
Zhao, Yuhai; Pogue, Aileen I; Lukiw, Walter J
2015-12-17
Of the approximately 2.65 × 10³ mature microRNAs (miRNAs) so far identified in Homo sapiens, only a surprisingly small but select subset-about 35-40-is highly abundant in the human central nervous system (CNS). This fact alone underscores the extremely high selection pressure for the human CNS to utilize only specific ribonucleotide sequences contained within these single-stranded non-coding RNAs (ncRNAs) for productive miRNA-mRNA interactions and the down-regulation of gene expression. In this article we will: (i) consolidate some of our still evolving ideas concerning the role of miRNAs in the CNS in normal aging and in health, and in sporadic Alzheimer's disease (AD) and related forms of chronic neurodegeneration; and (ii) highlight certain aspects of the most current work in this research field, with particular emphasis on the findings from our lab of a small pathogenic family of six inducible, pro-inflammatory, NF-κB-regulated miRNAs including miRNA-7, miRNA-9, miRNA-34a, miRNA-125b, miRNA-146a and miRNA-155. This group of six CNS-abundant miRNAs, significantly up-regulated in sporadic AD, is emerging as a set of key mechanistic contributors to the sporadic AD process and can explain much of the neuropathology of this common, age-related inflammatory neurodegeneration of the human CNS.
Variance analysis by use of a low cost desk top calculator.
González Revaldería, J; Villafruela, J J; Sabater, J; Lamas, S; Ortuño, J
1986-01-01
A simple program for an HP-97 desk top calculator, which can be adapted to an HP-67, is presented. This program detects the presence of an added component of variance in any series classified with a unique criterion. Each series can be formed by any number of data. The program supplies additional information about this component. A brief theoretical description and a practical example are also included.
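The calculator program itself is not reproduced in the abstract, but the underlying test, detecting an added between-series component of variance via one-way random-effects ANOVA, can be sketched. The Searle n0 correction for unequal series lengths is an assumption on my part, not taken from the original program:

```python
import numpy as np

def added_variance_component(series):
    """One-way random-effects ANOVA: estimate the between-series
    ('added') variance component from series of possibly unequal length."""
    series = [np.asarray(s, dtype=float) for s in series]
    k = len(series)
    n_i = np.array([len(s) for s in series])
    N = n_i.sum()
    grand = np.concatenate(series).mean()
    means = np.array([s.mean() for s in series])
    ss_between = np.sum(n_i * (means - grand) ** 2)
    ss_within = sum(((s - m) ** 2).sum() for s, m in zip(series, means))
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (N - k)
    # Effective series size for unbalanced data (Searle's n0).
    n0 = (N - (n_i ** 2).sum() / N) / (k - 1)
    sigma2_added = max(0.0, (ms_between - ms_within) / n0)
    return sigma2_added, ms_within

rng = np.random.default_rng(1)
# Invented data: common within-series variance plus an added between-series component.
data = [rng.normal(loc=mu, scale=1.0, size=n)
        for mu, n in zip(rng.normal(0, 2.0, size=8), [10, 12, 9, 15, 11, 10, 13, 8])]
sigma2_added, sigma2_within = added_variance_component(data)
```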
NASA Astrophysics Data System (ADS)
Chabuda, Krzysztof; Leroux, Ian D.; Demkowicz-Dobrzański, Rafał
2016-08-01
The instability of an atomic clock is characterized by the Allan variance, a measure widely used to describe the noise of frequency standards. We provide an explicit method to find the ultimate bound on the Allan variance of an atomic clock in the most general scenario where N atoms are prepared in an arbitrarily entangled state and arbitrary measurement and feedback are allowed, including those exploiting coherences between succeeding interrogation steps. While the method is rigorous and general, it becomes numerically challenging for large N and long averaging times.
Conversations across Meaning Variance
ERIC Educational Resources Information Center
Cordero, Alberto
2013-01-01
Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…
Spectral Ambiguity of Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1996-01-01
We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
Nominal analysis of "variance".
Weiss, David J
2009-08-01
Nominal responses are the natural way for people to report actions or opinions. Because nominal responses do not generate numerical data, they have been underutilized in behavioral research. On those occasions in which nominal responses are elicited, the responses are customarily aggregated over people or trials so that large-sample statistics can be employed. A new analysis is proposed that directly associates differences among responses with particular sources in factorial designs. A pair of nominal responses either matches or does not; when responses do not match, they vary. That analogue to variance is incorporated in the nominal analysis of "variance" (NANOVA) procedure, wherein the proportions of matches associated with sources play the same role as do sums of squares in an ANOVA. The NANOVA table is structured like an ANOVA table. The significance levels of the N ratios formed by comparing proportions are determined by resampling. Fictitious behavioral examples featuring independent groups and repeated measures designs are presented. A Windows program for the analysis is available.
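The published NANOVA procedure is not spelled out in the abstract; the following is a hedged sketch of its central ingredients, a match proportion between nominal responses and a resampling-based significance level, with all function names and data invented for illustration:

```python
import numpy as np

def match_proportion(a, b):
    """Proportion of cross-group pairs (one response from each group) that match."""
    a, b = np.asarray(a), np.asarray(b)
    return np.mean(a[:, None] == b[None, :])

def nominal_group_test(a, b, n_resamples=2000, seed=0):
    """Permutation test on the between-group match proportion.
    A hedged sketch of a match-based test, not the published NANOVA algorithm."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([np.asarray(a), np.asarray(b)])
    observed = match_proportion(a, b)
    null = np.empty(n_resamples)
    for i in range(n_resamples):
        perm = rng.permutation(pooled)
        null[i] = match_proportion(perm[:len(a)], perm[len(a):])
    # A low between-group match proportion suggests a group difference.
    p_value = np.mean(null <= observed)
    return observed, p_value

a = ["yes", "yes", "no", "yes", "yes", "yes"]
b = ["no", "no", "no", "maybe", "no", "no"]
obs, p = nominal_group_test(a, b)
```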
Systems Engineering Programmatic Estimation Using Technology Variance
NASA Technical Reports Server (NTRS)
Mog, Robert A.
2000-01-01
Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "return" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.
A proxy for variance in dense matching over homogeneous terrain
NASA Astrophysics Data System (ADS)
Altena, Bas; Cockx, Liesbet; Goedemé, Toon
2014-05-01
Automation in photogrammetry and avionics have brought highly autonomous UAV mapping solutions on the market. These systems have great potential for geophysical research, due to their mobility and simplicity of work. Flight planning can be done on site and orientation parameters are estimated automatically. However, one major drawback is still present: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though more robustness is added in the estimation through multi-view geometry, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, its epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated with respect to the intensity signal of the topography (SNR) and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two different case studies were elaborated. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, which resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimations of the topography and was compared with the data from the other UAV flight. In the second study a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. Although the beach had a very low
Cosmology without cosmic variance
Bernstein, Gary M.; Cai, Yan -Chuan
2011-10-01
The growth of structures in the Universe is described by a function G that is predicted by the combination of the expansion history of the Universe and the laws of gravity within it. We examine the improvements in constraints on G that are available from the combination of a large-scale galaxy redshift survey with a weak gravitational lensing survey of background sources. We describe a new combination of such observations that in principle yields a measure of the growth rate that is free of sample variance, i.e. the uncertainty in G can be reduced without bound by increasing the number of redshifts obtained within a finite survey volume. The addition of background weak lensing data to a redshift survey increases information on G by an amount equivalent to a 10-fold increase in the volume of a standard redshift-space distortion measurement - if the lensing signal can be measured to sub-per cent accuracy. This argues that a combined lensing and redshift survey over a common low-redshift volume of the Universe is a more powerful test of general relativity than an isolated redshift survey over larger volume at high redshift, especially as surveys begin to cover most of the available sky.
Understanding gender variance in children and adolescents.
Simons, Lisa K; Leibowitz, Scott F; Hidalgo, Marco A
2014-06-01
Gender variance is an umbrella term used to describe gender identity, expression, or behavior that falls outside of culturally defined norms associated with a specific gender. In recent years, growing media coverage has heightened public awareness about gender variance in childhood and adolescence, and an increasing number of referrals to clinics specializing in care for gender-variant youth have been reported in the United States. Gender-variant expression, behavior, and identity may present in childhood and adolescence in a number of ways, and youth with gender variance have unique health needs. For those experiencing gender dysphoria, or distress encountered by the discordance between biological sex and gender identity, puberty is often an exceptionally challenging time. Pediatric primary care providers may be families' first resource for education and support, and they play a critical role in supporting the health of youth with gender variance by screening for psychosocial problems and health risks, referring for gender-specific mental health and medical care, and providing ongoing advocacy and support.
Sampling Errors of Variance Components.
ERIC Educational Resources Information Center
Sanders, Piet F.
A study on sampling errors of variance components was conducted within the framework of generalizability theory by P. L. Smith (1978). The study used an intuitive approach for solving the problem of how to allocate the number of conditions to different facets in order to produce the most stable estimate of the universe score variance. Optimization…
NASA Astrophysics Data System (ADS)
Anabalón, Andrés; Astefanesei, Dumitru; Choque, David
2016-11-01
We construct exact hairy AdS soliton solutions in Einstein-dilaton gravity theory. We examine their thermodynamic properties and discuss the role of these solutions for the existence of first order phase transitions for hairy black holes. The negative energy density associated to hairy AdS solitons can be interpreted as the Casimir energy that is generated in the dual field theory when the fermions are antiperiodic on the compact coordinate.
VPSim: Variance propagation by simulation
Burr, T.; Coulter, C.A.; Prommel, J.
1997-12-01
One of the fundamental concepts in a materials control and accountability system for nuclear safeguards is the materials balance (MB). All transfers into and out of a material balance area are measured, as are the beginning and ending inventories. The resulting MB measures the material loss, MB = T_in + I_B - T_out - I_E. To interpret the MB, the authors must estimate its measurement error standard deviation, σ_MB. When feasible, they use a method usually known as propagation of variance (POV) to estimate σ_MB. The application of POV for estimating the measurement error variance of an MB is straightforward but tedious. By applying POV to individual measurement error standard deviations they can estimate σ_MB (or more generally, they can estimate the variance-covariance matrix, Σ, of a sequence of MBs). This report describes a new computer program (VPSim) that uses simulation to estimate the Σ matrix of a sequence of MBs. Given the proper input data, VPSim calculates the MB and σ_MB, or calculates a sequence of n MBs and the associated n-by-n covariance matrix, Σ. The covariance matrix, Σ, contains the variance of each MB in the diagonal entries and the covariance between pairs of MBs in the off-diagonal entries.
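The POV calculation and its simulation counterpart can both be sketched in a few lines; the measurement values and error standard deviations below are invented for illustration:

```python
import numpy as np

# Hypothetical measurement values and error standard deviations.
T_in, I_B, T_out, I_E = 100.0, 50.0, 95.0, 54.0
sd = {"T_in": 0.8, "I_B": 0.5, "T_out": 0.8, "I_E": 0.5}

MB = T_in + I_B - T_out - I_E  # materials balance

# Propagation of variance: for independent errors the MB variance is the
# sum of the component variances (all coefficients are +/-1).
sigma_MB_pov = np.sqrt(sum(s ** 2 for s in sd.values()))

# Simulation (the VPSim idea): perturb each term and look at the spread.
rng = np.random.default_rng(0)
n = 200_000
mb_sim = ((T_in + rng.normal(0, sd["T_in"], n))
          + (I_B + rng.normal(0, sd["I_B"], n))
          - (T_out + rng.normal(0, sd["T_out"], n))
          - (I_E + rng.normal(0, sd["I_E"], n)))
sigma_MB_sim = mb_sim.std()
```

For a single MB with independent errors the two routes agree; the simulation route pays off when MBs share correlated measurements and the full Σ matrix is needed.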
Analysis of Variance: Variably Complex
ERIC Educational Resources Information Center
Drummond, Gordon B.; Vowler, Sarah L.
2012-01-01
These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution…
Assessment of the genetic variance of late-onset Alzheimer's disease.
Ridge, Perry G; Hoyt, Kaitlyn B; Boehme, Kevin; Mukherjee, Shubhabrata; Crane, Paul K; Haines, Jonathan L; Mayeux, Richard; Farrer, Lindsay A; Pericak-Vance, Margaret A; Schellenberg, Gerard D; Kauwe, John S K
2016-05-01
Alzheimer's disease (AD) is a complex genetic disorder with no effective treatments. More than 20 common markers have been identified, which are associated with AD. Recently, several rare variants have been identified in Amyloid Precursor Protein (APP), Triggering Receptor Expressed On Myeloid Cells 2 (TREM2) and Unc-5 Netrin Receptor C (UNC5C) that affect risk for AD. Despite the many successes, the genetic architecture of AD remains unsolved. We used Genome-wide Complex Trait Analysis to (1) estimate phenotypic variance explained by genetics; (2) calculate genetic variance explained by known AD single nucleotide polymorphisms (SNPs); and (3) identify the genomic locations of variation that explain the remaining unexplained genetic variance. In total, 53.24% of phenotypic variance is explained by genetics, but known AD SNPs only explain 30.62% of the genetic variance. Of the unexplained genetic variance, approximately 41% is explained by unknown SNPs in regions adjacent to known AD SNPs, and the remaining unexplained genetic variance lies outside these regions.
Variance decomposition in stochastic simulators
Le Maître, O. P.; Knio, O. M.; Moraes, A.
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
Estimating the Modified Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, Charles
1995-01-01
The third-difference approach to modified Allan variance (MVAR) leads to a tractable formula for a measure of MVAR estimator confidence, the equivalent degrees of freedom (edf), in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. A simple approximation for edf is given, and its errors are tabulated. A theorem allowing conservative estimates of edf in the presence of compound noise processes is given.
ERIC Educational Resources Information Center
UCLA IDEA, 2012
2012-01-01
Value added measures (VAM) uses changes in student test scores to determine how much "value" an individual teacher has "added" to student growth during the school year. Some policymakers, school districts, and educational advocates have applauded VAM as a straightforward measure of teacher effectiveness: the better a teacher,…
NASA Astrophysics Data System (ADS)
Hertog, Thomas
2004-12-01
We review some properties of N=8 gauged supergravity in four dimensions with modified, but AdS invariant boundary conditions on the m² = -2 scalars. There is a one-parameter class of asymptotic conditions on these fields and the metric components, for which the full AdS symmetry group is preserved. The generators of the asymptotic symmetries are finite, but acquire a contribution from the scalar fields. For a large class of such boundary conditions, we find there exist black holes with scalar hair that are specified by a single conserved charge. Since Schwarzschild-AdS is a solution too for all boundary conditions, this provides an example of black hole non-uniqueness. We also show there exist solutions where smooth initial data evolve to a big crunch singularity. This opens up the possibility of using the dual conformal field theory to obtain a fully quantum description of the cosmological singularity, and we report on a preliminary study of this.
Estimating the Modified Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, Charles
1995-01-01
A paper at the 1992 FCS showed how to express the modified Allan variance (mvar) in terms of the third difference of the cumulative sum of time residuals. Although this reformulated definition was presented merely as a computational trick for simplifying the calculation of mvar estimates, it has since turned out to be a powerful theoretical tool for deriving the statistical quality of those estimates in terms of their equivalent degrees of freedom (edf), defined for an estimator V by edf V = 2(EV)²/var(V). Confidence intervals for mvar can then be constructed from levels of the appropriate χ² distribution.
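The edf definition can be checked by simulation: for the sample variance of n iid normals, V is distributed as σ²χ²_{n-1}/(n-1), so its edf should come out near n - 1. A minimal sketch:

```python
import numpy as np

def edf(samples):
    """Equivalent degrees of freedom of an estimator V:
    edf V = 2 (E V)^2 / var(V), estimated from repeated realizations of V."""
    samples = np.asarray(samples)
    return 2 * samples.mean() ** 2 / samples.var()

rng = np.random.default_rng(0)
n = 10
# Many independent realizations of the sample variance of n normals.
vs = [np.var(rng.normal(size=n), ddof=1) for _ in range(50_000)]
print(round(edf(vs)))  # ≈ n - 1 = 9
```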
Berroya, Renato B.; Escano, Fernando B.
1972-01-01
This report deals with a rare complication of disc-valve prosthesis in the mitral area. Significant destruction of the disc poppet and struts of mitral Beall valve prostheses occurred 20 and 17 months after implantation. The resulting valve incompetence in the first case contributed to the death of the patient. The durability of Teflon prosthetic valves appears to be in question, and this type of valve will probably be unacceptable if the number of disc-valve variances continues to increase. PMID:5017573
Measurement of Allan variance and phase noise at fractions of a millihertz
NASA Technical Reports Server (NTRS)
Conroy, Bruce L.; Le, Duc
1990-01-01
Although the measurement of Allan variance of oscillators is well documented, there is a need for a simplified system for finding the degradation of phase noise and Allan variance step-by-step through a system. This article describes an instrumentation system for simultaneous measurement of additive phase noise and degradation in Allan variance through a transmitter system. Also included are measurements of a 20-kW X-band transmitter showing the effect of adding a pass tube regulator.
A Wavelet Perspective on the Allan Variance.
Percival, Donald B
2016-04-01
The origins of the Allan variance trace back 50 years ago to two seminal papers, one by Allan (1966) and the other by Barnes (1966). Since then, the Allan variance has played a leading role in the characterization of high-performance time and frequency standards. Wavelets first arose in the early 1980s in the geophysical literature, and the discrete wavelet transform (DWT) became prominent in the late 1980s in the signal processing literature. Flandrin (1992) briefly documented a connection between the Allan variance and a wavelet transform based upon the Haar wavelet. Percival and Guttorp (1994) noted that one popular estimator of the Allan variance-the maximal overlap estimator-can be interpreted in terms of a version of the DWT now widely referred to as the maximal overlap DWT (MODWT). In particular, when the MODWT is based on the Haar wavelet, the variance of the resulting wavelet coefficients-the wavelet variance-is identical to the Allan variance when the latter is multiplied by one-half. The theory behind the wavelet variance can thus deepen our understanding of the Allan variance. In this paper, we review basic wavelet variance theory with an emphasis on the Haar-based wavelet variance and its connection to the Allan variance. We then note that estimation theory for the wavelet variance offers a means of constructing asymptotically correct confidence intervals (CIs) for the Allan variance without reverting to the common practice of specifying a power-law noise type a priori. We also review recent work on specialized estimators of the wavelet variance that are of interest when some observations are missing (gappy data) or in the presence of contamination (rogue observations or outliers). It is a simple matter to adapt these estimators to become estimators of the Allan variance. Finally we note that wavelet variances based upon wavelets other than the Haar offer interesting generalizations of the Allan variance.
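The stated identity, that the unit-scale Haar MODWT wavelet variance equals one half of the Allan variance, can be verified directly, since both reduce to moments of first differences of the frequency deviates. A sketch with simulated white-FM data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated fractional-frequency deviates (white frequency noise).
y = rng.normal(size=100_000)

# Allan variance at the basic averaging time: 0.5 * E[(y_{t+1} - y_t)^2].
avar = 0.5 * np.mean(np.diff(y) ** 2)

# Haar MODWT wavelet coefficients at unit scale: (y_t - y_{t-1}) / 2;
# the mean of their squares is the Haar wavelet variance at that scale.
w = np.diff(y) / 2.0
wvar = np.mean(w ** 2)

# Percival & Guttorp: wavelet variance = one half of the Allan variance.
print(np.isclose(wvar, avar / 2))  # True
```

Here the relation holds identically, coefficient by coefficient; for real data the payoff is that wavelet-variance estimation theory supplies confidence intervals without assuming a power-law noise type a priori.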
Partitioning Predicted Variance into Constituent Parts: A Primer on Regression Commonality Analysis.
ERIC Educational Resources Information Center
Amado, Alfred J.
Commonality analysis is a method of decomposing the R squared in a multiple regression analysis into the proportion of explained variance of the dependent variable associated with each independent variable uniquely and the proportion of explained variance associated with the common effects of one or more independent variables in various…
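For two predictors the commonality decomposition follows from three R² values: each unique component is the increment in R² when its predictor enters last, and the common component is the remainder. A minimal sketch with invented correlated predictors:

```python
import numpy as np

def r_squared(predictors, y):
    """R^2 from an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + 0.8 * rng.normal(size=n)  # correlated predictors
y = x1 + x2 + rng.normal(size=n)

r2_full = r_squared([x1, x2], y)
r2_x1 = r_squared([x1], y)
r2_x2 = r_squared([x2], y)

# Two-predictor commonality decomposition:
unique_x1 = r2_full - r2_x2               # explained by x1 uniquely
unique_x2 = r2_full - r2_x1               # explained by x2 uniquely
common = r2_full - unique_x1 - unique_x2  # shared (common) variance
```

The three components sum to the full-model R² by construction; with correlated predictors the common component is substantial, which is exactly the situation commonality analysis is designed to expose.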
Warped functional analysis of variance.
Gervini, Daniel; Carter, Patrick A
2014-09-01
This article presents an Analysis of Variance model for functional data that explicitly incorporates phase variability through a time-warping component, allowing for a unified approach to estimation and inference in the presence of amplitude and time variability. The focus is on single-random-factor models but the approach can be easily generalized to more complex ANOVA models. The behavior of the estimators is studied by simulation, and an application to the analysis of growth curves of flour beetles is presented. Although the model assumes a smooth latent process behind the observed trajectories, smoothness of the observed data is not required; the method can be applied to irregular time grids, which are common in longitudinal studies.
ERIC Educational Resources Information Center
Orsini, Larry L.; Hudack, Lawrence R.; Zekan, Donald L.
1999-01-01
The value-added statement (VAS), relatively unknown in the United States, is used in financial reports by many European companies. Saint Bonaventure University (New York) has adapted a VAS to make it appropriate for not-for-profit universities by identifying stakeholder groups (students, faculty, administrators/support personnel, creditors, the…
Practice reduces task relevant variance modulation and forms nominal trajectory
NASA Astrophysics Data System (ADS)
Osu, Rieko; Morishige, Ken-Ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo
2015-12-01
Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task relevant variance modulation as an indication of online feedback control strategies to cope with motor variability. Meanwhile, it has been discussed that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both spatial and temporal domains to elucidate the relative contribution of these control schemas. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories with the reduction of task relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise both on the nominal trajectory and motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of computation seems to be taken over by the feedforward controller around the nominal trajectory with feedback added only when it becomes necessary.
Speed Variance and Its Influence on Accidents.
ERIC Educational Resources Information Center
Garber, Nicholas J.; Gadirau, Ravi
A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…
Increasing selection response by Bayesian modeling of heterogeneous environmental variances
Technology Transfer Automated Retrieval System (TEKTRAN)
Heterogeneity of environmental variance among genotypes reduces selection response because genotypes with higher variance are more likely to be selected than low-variance genotypes. Modeling heterogeneous variances to obtain weighted means corrected for heterogeneous variances is difficult in likel...
Restricted sample variance reduces generalizability.
Lakes, Kimberley D
2013-06-01
One factor that affects the reliability of observed scores is restriction of range on the construct measured for a particular group of study participants. This study illustrates how researchers can use generalizability theory to evaluate the impact of restriction of range in particular sample characteristics on the generalizability of test scores and to estimate how changes in measurement design could improve the generalizability of the test scores. An observer-rated measure of child self-regulation (Response to Challenge Scale; Lakes, 2011) is used to examine scores for 198 children (Grades K through 5) within the generalizability theory (GT) framework. The generalizability of ratings within relatively developmentally homogeneous samples is examined and illustrates the effect of reduced variance among ratees on generalizability. Forecasts for g coefficients of various D study designs demonstrate how higher generalizability could be achieved by increasing the number of raters or items. In summary, the research presented illustrates the importance of and procedures for evaluating the generalizability of a set of scores in a particular research context.
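The range-restriction effect this abstract describes is easy to reproduce in simulation (an illustrative sketch with invented numbers, not the study's data): restricting a sample to a narrow band of the construct shrinks true-score variance while leaving error variance untouched, so the reliability-like ratio of true to observed variance drops sharply.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 10,000 ratees: true scores plus rater error (both variance 1).
true = rng.normal(0.0, 1.0, 10_000)
observed = true + rng.normal(0.0, 1.0, 10_000)

def reliability(t, o):
    # Ratio of true-score variance to observed-score variance.
    return t.var() / o.var()

full = reliability(true, observed)

# Restrict to a homogeneous subgroup: true scores in a narrow band.
band = np.abs(true) < 0.5
restricted = reliability(true[band], observed[band])

print(round(full, 2), round(restricted, 2))
```

With equal true and error variance the full-sample ratio is about 0.5; in the restricted band it collapses to under 0.1, because almost all remaining observed-score variance is rater error.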
Generalized analysis of molecular variance.
Nievergelt, Caroline M; Libiger, Ondrej; Schork, Nicholas J
2007-04-06
Many studies in the fields of genetic epidemiology and applied population genetics are predicated on, or require, an assessment of the genetic background diversity of the individuals chosen for study. A number of strategies have been developed for assessing genetic background diversity. These strategies typically focus on genotype data collected on the individuals in the study, based on a panel of DNA markers. However, many of these strategies are either rooted in cluster analysis techniques, and hence suffer from problems inherent to the assignment of the biological and statistical meaning to resulting clusters, or have formulations that do not permit easy and intuitive extensions. We describe a very general approach to the problem of assessing genetic background diversity that extends the analysis of molecular variance (AMOVA) strategy introduced by Excoffier and colleagues some time ago. As in the original AMOVA strategy, the proposed approach, termed generalized AMOVA (GAMOVA), requires a genetic similarity matrix constructed from the allelic profiles of individuals under study and/or allele frequency summaries of the populations from which the individuals have been sampled. The proposed strategy can be used to either estimate the fraction of genetic variation explained by grouping factors such as country of origin, race, or ethnicity, or to quantify the strength of the relationship of the observed genetic background variation to quantitative measures collected on the subjects, such as blood pressure levels or anthropometric measures. Since the formulation of our test statistic is rooted in multivariate linear models, sets of variables can be related to genetic background in multiple regression-like contexts. GAMOVA can also be used to complement graphical representations of genetic diversity such as tree diagrams (dendrograms) or heatmaps. We examine features, advantages, and power of the proposed procedure and showcase its flexibility by using it to analyze a
Enhancing area of review capabilities: Implementing a variance program
De Leon, F.
1995-12-01
The Railroad Commission of Texas (RRC) has regulated oil-field injection well operations since issuing its first injection permit in 1938. The Environmental Protection Agency (EPA) granted the RRC primary enforcement responsibility for the Class II Underground Injection Control (UIC) Program in April 1982. At that time, the added level of groundwater protection afforded by an Area of Review (AOR) on previously permitted Class II wells was not deemed necessary or cost effective. A proposed EPA rule change will require AORs to be performed on all pre-primacy Class II wells unless a variance can be justified. A variance methodology has been developed by researchers at the University of Missouri-Rolla in conjunction with the American Petroleum Institute (API). This paper will outline the RRC approach to implementing the AOR variance methodology. The RRC's UIC program tracks 49,256 pre-primacy wells. Approximately 25,598 of these wells have active permits and will be subject to the proposed AOR requirements. The potential workload of performing AORs or granting variances for this many wells makes the development of a Geographic Information System (GIS) imperative. The RRC has recently completed a digitized map of the entire state and has spotted 890,000 of an estimated 1.2 million wells. Integrating this digital state map into a GIS will allow the RRC to tie its many data systems together. Once in place, this integrated data system will be used to evaluate AOR variances for pre-primacy wells on a field-wide basis. It will also reduce the regulatory cost of permitting by allowing the RRC staff to perform AORs or grant variances for the approximately 3,000 new and amended permit applications requiring AORs each year.
Code of Federal Regulations, 2011 CFR
2011-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2012 CFR
2012-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2014 CFR
2014-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2013 CFR
2013-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2010 CFR
2010-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Infinite variance in fermion quantum Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Shi, Hao; Zhang, Shiwei
2016-03-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
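The core hazard can be illustrated outside QMC with a toy estimator (a hedged sketch, not the paper's algorithm): X = U^(-1/2) for uniform U has a Pareto tail with index 2, so its mean (exactly 2) is finite but its variance is infinite, and the naive error bar computed from the sample standard deviation is unreliable at every sample size.

```python
import numpy as np

rng = np.random.default_rng(42)

# X = U**-0.5 with U ~ Uniform(0, 1) is Pareto-tailed with index 2:
# E[X] = 2 is finite, but Var[X] is infinite.
def batch_stats(n):
    x = rng.uniform(size=n) ** -0.5
    return x.mean(), x.std(ddof=1) / np.sqrt(n)  # mean, naive error bar

for n in (10**3, 10**4, 10**5, 10**6):
    mean, se = batch_stats(n)
    print(f"n={n:>7}  mean={mean:.3f}  naive_se={se:.4f}")
```

The batch means converge toward 2, but the "error bar" is dominated by whichever extreme draw happened to land in the batch, so it does not shrink like 1/sqrt(n) and cannot be trusted as a statistical uncertainty — the situation the abstract warns about.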
Perspective projection for variance pose face recognition from camera calibration
NASA Astrophysics Data System (ADS)
Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.
2016-04-01
Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance pose face features is a challenging problem. We provide a solution to this problem using perspective projection for variance pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face box tracking and centre-of-eyes detection can be performed using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. The training of frontal images and the remaining poses on the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms, thus enabling stable measurement in variance pose for each individual.
Variance Design and Air Pollution Control
ERIC Educational Resources Information Center
Ferrar, Terry A.; Brownstein, Alan B.
1975-01-01
Air pollution control authorities were forced to relax air quality standards during the winter of 1972 by granting variances. This paper examines the institutional characteristics of these variance policies from an economic incentive standpoint, sets up desirable structural criteria for institutional design and arrives at policy guidelines for…
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 4 2013-01-01 2013-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
40 CFR 142.41 - Variance request.
Code of Federal Regulations, 2010 CFR
2010-07-01
....41 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of...
40 CFR 142.41 - Variance request.
Code of Federal Regulations, 2011 CFR
2011-07-01
....41 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of...
40 CFR 142.41 - Variance request.
Code of Federal Regulations, 2012 CFR
2012-07-01
....41 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of...
Nonlinear Epigenetic Variance: Review and Simulations
ERIC Educational Resources Information Center
Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.
2010-01-01
We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…
Portfolio optimization with mean-variance model
NASA Astrophysics Data System (ADS)
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve a target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, measured as the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the optimal portfolio assigns different weights to the component stocks. Moreover, investors can earn the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
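As a minimal illustration of this model class (hypothetical covariance numbers, not the FBMKLCI data), the global minimum-variance portfolio has the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1), i.e. the risk-minimizing weights ignoring any return target:

```python
import numpy as np

# Toy covariance matrix for three stocks (invented numbers).
cov = np.array([
    [0.040, 0.006, 0.004],
    [0.006, 0.090, 0.010],
    [0.004, 0.010, 0.160],
])

# Global minimum-variance weights: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
ones = np.ones(3)
w = np.linalg.solve(cov, ones)
w /= w.sum()

port_var = w @ cov @ w  # resulting portfolio variance
print(w.round(3), round(port_var, 4))
```

The weights sum to one and the portfolio variance comes out below the variance of any single stock — the diversification effect the mean-variance model exploits. A full Markowitz problem adds the target-return constraint wᵀμ = r on top of this.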
Alzheimer's disease: Unique markers for diagnosis & new treatment modalities.
Aggarwal, Neelum T; Shah, Raj C; Bennett, David A
2015-10-01
Alzheimer's disease (AD) is a progressive and irreversible neurodegenerative disease. In humans, AD becomes symptomatic only after brain changes occur over years or decades. Three contiguous phases of AD have been proposed: (i) the AD pathophysiologic process, (ii) mild cognitive impairment due to AD, and (iii) AD dementia. Intensive research continues around the world on unique diagnostic markers and interventions associated with each phase of AD. In this review, we summarize the available evidence and new therapeutic approaches that target both amyloid and tau pathology in AD and discuss the biomarkers and pharmaceutical interventions available and in development for each AD phase.
Creativity and technical innovation: spatial ability's unique role.
Kell, Harrison J; Lubinski, David; Benbow, Camilla P; Steiger, James H
2013-09-01
In the late 1970s, 563 intellectually talented 13-year-olds (identified by the SAT as in the top 0.5% of ability) were assessed on spatial ability. More than 30 years later, the present study evaluated whether spatial ability provided incremental validity (beyond the SAT's mathematical and verbal reasoning subtests) for differentially predicting which of these individuals had patents and three classes of refereed publications. A two-step discriminant-function analysis revealed that the SAT subtests jointly accounted for 10.8% of the variance among these outcomes (p < .01); when spatial ability was added, an additional 7.6% was accounted for--a statistically significant increase (p < .01). The findings indicate that spatial ability has a unique role in the development of creativity, beyond the roles played by the abilities traditionally measured in educational selection, counseling, and industrial-organizational psychology. Spatial ability plays a key and unique role in structuring many important psychological phenomena and should be examined more broadly across the applied and basic psychological sciences.
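Incremental validity of this kind is commonly quantified as the gain in explained variance (ΔR²) when the extra predictor enters a nested regression. A sketch with simulated ability scores (the correlations and coefficients are invented for illustration, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 563  # sample size borrowed from the study; the data here are simulated

# Correlated ability scores (math, verbal, spatial) and a hypothetical
# outcome that loads on all three, so spatial carries unique variance.
cov = np.array([[1.0, 0.6, 0.4],
                [0.6, 1.0, 0.3],
                [0.4, 0.3, 1.0]])
abilities = rng.multivariate_normal(np.zeros(3), cov, size=n)
math, verbal, spatial = abilities.T
outcome = 0.35 * math + 0.20 * verbal + 0.30 * spatial + rng.normal(0, 1, n)

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])  # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared(abilities[:, :2], outcome)  # math + verbal only
r2_full = r_squared(abilities, outcome)         # + spatial
print(round(r2_base, 3), round(r2_full, 3), round(r2_full - r2_base, 3))
```

The positive ΔR² is the simulated analogue of the 7.6% increment the abstract reports for spatial ability over the SAT subtests.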
This document provides assistance to those seeking to submit a variance request for LDR treatability variances and determinations of equivalent treatment regarding the hazardous waste land disposal restrictions program.
Portfolio optimization using median-variance approach
NASA Astrophysics Data System (ADS)
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, and this is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve the portfolio optimization. This approach accommodates both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach produces a lower risk for each return earned as compared to the mean-variance approach.
Neural field theory with variance dynamics.
Robinson, P A
2013-06-01
Previous neural field models have mostly been concerned with prediction of mean neural activity and with second order quantities such as its variance, but without feedback of second order quantities on the dynamics. Here the effects of feedback of the variance on the steady states and adiabatic dynamics of neural systems are calculated using linear neural field theory to estimate the neural voltage variance, then including this quantity in the total variance parameter of the nonlinear firing rate-voltage response function, and thus into determination of the fixed points and the variance itself. The general results further clarify the limits of validity of approaches with and without inclusion of variance dynamics. Specific applications show that stability against a saddle-node bifurcation is reduced in a purely cortical system, but can be either increased or decreased in the corticothalamic case, depending on the initial state. Estimates of critical variance scalings near saddle-node bifurcation are also found, including physiologically based normalizations and new scalings for mean firing rate and the position of the bifurcation.
Variance estimation for stratified propensity score estimators.
Williamson, E J; Morley, R; Lucas, A; Carpenter, J R
2012-07-10
Propensity score methods are increasingly used to estimate the effect of a treatment or exposure on an outcome in non-randomised studies. We focus on one such method, stratification on the propensity score, comparing it with the method of inverse-probability weighting by the propensity score. The propensity score--the conditional probability of receiving the treatment given observed covariates--is usually an unknown probability estimated from the data. Estimators for the variance of treatment effect estimates typically used in practice, however, do not take into account that the propensity score itself has been estimated from the data. By deriving the asymptotic marginal variance of the stratified estimate of treatment effect, correctly taking into account the estimation of the propensity score, we show that routinely used variance estimators are likely to produce confidence intervals that are too conservative when the propensity score model includes variables that predict (cause) the outcome, but only weakly predict the treatment. In contrast, a comparison with the analogous marginal variance for the inverse probability weighted (IPW) estimator shows that routinely used variance estimators for the IPW estimator are likely to produce confidence intervals that are almost always too conservative. Because exact calculation of the asymptotic marginal variance is likely to be complex, particularly for the stratified estimator, we suggest that bootstrap estimates of variance should be used in practice.
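A minimal numerical sketch of the paper's practical recommendation (simulated data and a hand-rolled logistic fit, not the authors' code): stratify on an estimated propensity score, then bootstrap the whole procedure, re-estimating the score inside every resample so that its estimation uncertainty is reflected in the standard error.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_logistic(X, t, iters=25):
    # Newton-Raphson for a logistic regression of treatment on covariates.
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        b += np.linalg.solve((X * (p * (1 - p))[:, None]).T @ X, X.T @ (t - p))
    return b

def stratified_effect(x, t, y, n_strata=5):
    # Estimate the propensity score, cut it into quintiles, and average
    # the within-stratum treated-control differences, weighted by size.
    X = np.column_stack([np.ones_like(x), x])
    ps = 1.0 / (1.0 + np.exp(-X @ fit_logistic(X, t)))
    edges = np.quantile(ps, np.linspace(0, 1, n_strata + 1)[1:-1])
    stratum = np.searchsorted(edges, ps)
    diffs, sizes = [], []
    for s in range(n_strata):
        m = stratum == s
        if t[m].sum() > 0 and (1 - t[m]).sum() > 0:
            diffs.append(y[m & (t == 1)].mean() - y[m & (t == 0)].mean())
            sizes.append(m.sum())
    return np.average(diffs, weights=sizes)

# Simulated data: x confounds treatment and outcome; true effect = 2.
n = 2000
x = rng.normal(size=n)
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-x)))
y = 2.0 * t + x + rng.normal(size=n)

estimate = stratified_effect(x, t, y)

# Bootstrap: the propensity model is refit in every resample, so the
# spread of the replicates includes that estimation step.
reps = [stratified_effect(x[i], t[i], y[i])
        for i in (rng.integers(0, n, n) for _ in range(200))]
se_boot = np.std(reps)
print(round(estimate, 2), round(se_boot, 3))
```

Refitting inside the resamples is the point: a bootstrap that reuses a single fitted score would inherit the same mismatch as the routine analytic variance estimators the abstract criticizes.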
Commonality Analysis: A Method of Analyzing Unique and Common Variance Proportions.
ERIC Educational Resources Information Center
Kroff, Michael W.
This paper considers the use of commonality analysis as an effective tool for analyzing relationships between variables in multiple regression or canonical correlational analysis (CCA). The merits of commonality analysis are discussed and the procedure for running commonality analysis is summarized as a four-step process. A heuristic example is…
Reducing variance in batch partitioning measurements
Mariner, Paul E.
2010-08-11
The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure Kd values (e.g., ASTM D 4646 and EPA 402-R-99-004A) do not explain how to evaluate measurement uncertainty nor how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjustment of the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
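A rough Monte Carlo sketch of the design point (assuming 2% proportional concentration errors and a hypothetical Kd of 10 mL/g; these numbers are illustrative, not from the paper): a batch in which only ~5% of the sorbate partitions yields a far noisier Kd estimate than one in which about half partitions, because Kd is computed from the small difference C0 − C.

```python
import numpy as np

rng = np.random.default_rng(7)
Kd_true = 10.0   # mL/g, hypothetical
C0 = 100.0       # initial solution concentration (arbitrary units)
rel_err = 0.02   # assumed 2% relative error on each concentration reading

def kd_spread(frac_sorbed, n=100_000):
    # Pick the sorbent:solution ratio r = m/V giving the target fraction
    # sorbed: f = Kd*r / (1 + Kd*r)  =>  r = f / (Kd*(1 - f)).
    r = frac_sorbed / (Kd_true * (1 - frac_sorbed))
    C_eq = C0 / (1 + Kd_true * r)  # true equilibrium concentration
    C0_m = C0 * (1 + rel_err * rng.standard_normal(n))
    C_m = C_eq * (1 + rel_err * rng.standard_normal(n))
    kd = (C0_m - C_m) / C_m / r    # Kd estimated from each replicate pair
    return kd.std() / Kd_true      # relative spread of the Kd estimates

low = kd_spread(0.05)   # only 5% of the sorbate partitions
half = kd_spread(0.50)  # about half partitions (error-savvy design)
print(round(low, 3), round(half, 3))
```

Under this simple proportional-error model the 5%-sorbed design produces roughly an order of magnitude more relative scatter in Kd than the half-sorbed design, which is the qualitative effect the abstract's simulations demonstrate.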
78 FR 14122 - Revocation of Permanent Variances
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-04
... Occupational Safety and Health Administration Revocation of Permanent Variances AGENCY: Occupational Safety and Health Administration (OSHA), Labor. ACTION: Notice of revocation. SUMMARY: With this notice, OSHA is... into consideration these newly corrected cross references. DATES: The effective date of the...
Code of Federal Regulations, 2014 CFR
2014-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2013 CFR
2013-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2012 CFR
2012-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2010 CFR
2010-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2011 CFR
2011-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Phonocardiographic diagnosis of aortic ball variance.
Hylen, J C; Kloster, F E; Herr, R H; Hull, P Q; Ames, A W; Starr, A; Griswold, H E
1968-07-01
Fatty infiltration causing changes in the silastic poppet of the Model 1000 series Starr-Edwards aortic valve prostheses (ball variance) has been detected with increasing frequency and can result in sudden death. Phonocardiograms were recorded on 12 patients with ball variance confirmed by operation and on 31 controls. Ten of the 12 patients with ball variance were distinguished from the controls by an aortic opening sound (AO) less than half as intense as the aortic closure sound (AC) at the second right intercostal space (AO/AC ratio less than 0.5). Both AO and AC were decreased in two patients with ball variance, with the loss of the characteristic high frequency and amplitude of these sounds. The only patient having a diminished AO/AC ratio (0.42) without ball variance at reoperation had a clot extending over the aortic valve struts. The phonocardiographic findings have been the most reliable objective evidence of ball variance in patients with Starr-Edwards aortic prostheses of the Model 1000 series.
Orientifolded locally AdS3 geometries
NASA Astrophysics Data System (ADS)
Loran, F.; Sheikh-Jabbari, M. M.
2011-01-01
Continuing the analysis of [Loran F and Sheikh-Jabbari M M 2010 Phys. Lett. B 693 184-7], we classify all locally AdS3 stationary axi-symmetric unorientable solutions to AdS3 Einstein gravity and show that they are obtained by applying certain orientifold projection on AdS3, BTZ or AdS3 self-dual orbifold, respectively, O-AdS3, O-BTZ and O-SDO geometries. Depending on the orientifold fixed surface, the O-surface, which is either a space-like 2D plane or a cylinder, or a light-like 2D plane or a cylinder, one can distinguish four distinct cases. For the space-like orientifold plane or cylinder cases, these geometries solve AdS3 Einstein equations and are hence locally AdS3 everywhere except at the O-surface, where there is a delta-function source. For the light-like cases, the geometry is a solution to Einstein equations even at the O-surface. We discuss the causal structure for static, extremal and general rotating O-BTZ and O-SDO cases as well as the geodesic motion on these geometries. We also discuss orientifolding Poincaré patch AdS3 and AdS2 geometries as a way to geodesic completion of these spaces and comment on the 2D CFT dual to the O-geometries.
A uniqueness theorem for the anti-de Sitter soliton.
Galloway, G J; Surya, S; Woolgar, E
2002-03-11
The stability of physical systems depends on the existence of a state of least energy. In gravity, this is guaranteed by the positive energy theorem. For topological reasons, this fails for nonsupersymmetric Kaluza-Klein compactifications, which can decay to arbitrarily negative energy. For related reasons, this also fails for the anti-de Sitter (AdS) soliton, a globally static, asymptotically toroidal Lambda<0 spacetime with negative mass. Nonetheless, arguing from the AdS conformal field theory (AdS/CFT) correspondence, Horowitz and Myers proposed a new positive energy conjecture, which asserts that the AdS soliton is the unique state of least energy in its asymptotic class. We give a new structure theorem for static Lambda<0 spacetimes and use it to prove uniqueness of the AdS soliton. Our results offer significant support for the new positive energy conjecture and add to the body of rigorous results inspired by the AdS/CFT correspondence.
On uniqueness of charged Kerr AdS black holes in five dimensions
NASA Astrophysics Data System (ADS)
Madden, Owen; Ross, Simon F.
2005-02-01
We show that the solutions describing charged rotating black holes in five-dimensional gauged supergravities found recently by Cvetic, Lü and Pope (2004 Charged Kerr-de Sitter black holes in five dimensions Phys. Lett. B 598 273; 2004 Charged rotating black holes in five dimensional U(1)3 gauged N = 2 supergravity Phys. Rev. D 70 081502) are completely specified by the mass, charges and angular momentum. The additional parameter appearing in these solutions is removed by a coordinate transformation and redefinition of parameters. Thus, the apparent hair in these solutions is unphysical.
Static Einstein-Maxwell Black Holes with No Spatial Isometries in AdS Space.
Herdeiro, Carlos A R; Radu, Eugen
2016-11-25
We explicitly construct static black hole solutions to the fully nonlinear, D=4, Einstein-Maxwell-anti-de Sitter (AdS) equations that have no continuous spatial symmetries. These black holes have a smooth, topologically spherical horizon (section), but without isometries, and approach, asymptotically, global AdS spacetime. They are interpreted as bound states of a horizon with the Einstein-Maxwell-AdS solitons recently discovered, for appropriate boundary data. In sharp contrast to the uniqueness results for a Minkowski electrovacuum, the existence of these black holes shows that single, equilibrium, black hole solutions in an AdS electrovacuum admit an arbitrary multipole structure.
Discrimination of frequency variance for tonal sequences.
Byrne, Andrew J; Viemeister, Neal F; Stellmack, Mark A
2014-12-01
Real-world auditory stimuli are highly variable across occurrences and sources. The present study examined the sensitivity of human listeners to differences in global stimulus variability. In a two-interval, forced-choice task, variance discrimination was measured using sequences of five 100-ms tone pulses. The frequency of each pulse was sampled randomly from a distribution that was Gaussian in logarithmic frequency. In the non-signal interval, the sampled distribution had a variance of σ²STAN, while in the signal interval, the variance of the sequence was σ²SIG (with σ²SIG > σ²STAN). The listener's task was to choose the interval with the larger variance. To constrain possible decision strategies, the mean frequency of the sampling distribution of each interval was randomly chosen for each presentation. Psychometric functions were measured for various values of σ²STAN. Although performance was remarkably similar across listeners, overall performance was poorer than that of an ideal observer (IO) which perfectly compares interval variances. However, like the IO, Weber's Law behavior was observed, with a constant ratio of (σ²SIG − σ²STAN) to σ²STAN yielding similar performance. A model which degraded the IO with a frequency-resolution noise and a computational noise provided a reasonable fit to the data.
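The ideal-observer comparison described above is straightforward to simulate. The sketch below is a hypothetical illustration, not the authors' code: the range of the roved mean and all parameter values are invented. An observer draws five-tone sequences around a randomly roved mean and chooses the interval with the larger sample variance.

```python
import random
import statistics

def trial(sigma_stan, sigma_sig, n_pulses=5, rng=random):
    """One 2IFC trial: each interval is a sequence of log-frequencies drawn
    around a randomly roved mean; the ideal observer picks the interval
    with the larger sample variance."""
    mu_stan = rng.uniform(2.5, 3.5)   # roved mean, non-signal interval
    mu_sig = rng.uniform(2.5, 3.5)    # roved mean, signal interval
    stan = [rng.gauss(mu_stan, sigma_stan) for _ in range(n_pulses)]
    sig = [rng.gauss(mu_sig, sigma_sig) for _ in range(n_pulses)]
    return statistics.variance(sig) > statistics.variance(stan)

def percent_correct(sigma_stan, sigma_sig, n_trials=5000, seed=1):
    """Proportion of trials on which the ideal observer is correct."""
    rng = random.Random(seed)
    return sum(trial(sigma_stan, sigma_sig, rng=rng)
               for _ in range(n_trials)) / n_trials
```

Doubling both standard deviations leaves the ideal observer's percent correct essentially unchanged, mirroring the Weber's Law behavior reported in the abstract.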
Variance Decomposition Using an IRT Measurement Model
Glas, Cees A. W.; Boomsma, Dorret I.
2007-01-01
Large scale research projects in behaviour genetics and genetic epidemiology are often based on questionnaire or interview data. Typically, a number of items is presented to a number of subjects, the subjects’ sum scores on the items are computed, and the variance of sum scores is decomposed into a number of variance components. This paper discusses several disadvantages of the approach of analysing sum scores, such as the attenuation of correlations amongst sum scores due to their unreliability. It is shown that the framework of Item Response Theory (IRT) offers a solution to most of these problems. We argue that an IRT approach in combination with Markov chain Monte Carlo (MCMC) estimation provides a flexible and efficient framework for modelling behavioural phenotypes. Next, we use data simulation to illustrate the potentially huge bias in estimating variance components on the basis of sum scores. We then apply the IRT approach with an analysis of attention problems in young adult twins where the variance decomposition model is extended with an IRT measurement model. We show that when estimating an IRT measurement model and a variance decomposition model simultaneously, the estimate for the heritability of attention problems increases from 40% (based on sum scores) to 73%. PMID:17534709
Variance estimation for nucleotide substitution models.
Chen, Weishan; Wang, Hsiuying
2015-09-01
The current variance estimators for most evolutionary models were derived when a nucleotide substitution number estimator was approximated with a simple first order Taylor expansion. In this study, we derive three variance estimators for the F81, F84, HKY85 and TN93 nucleotide substitution models, respectively. They are obtained using the second order Taylor expansion of the substitution number estimator, the first order Taylor expansion of a squared deviation and the second order Taylor expansion of a squared deviation, respectively. These variance estimators are compared with the existing variance estimator in terms of a simulation study. It shows that the variance estimator, which is derived using the second order Taylor expansion of a squared deviation, is more accurate than the other three estimators. In addition, we also compare these estimators with an estimator derived by the bootstrap method. The simulation shows that the performance of this bootstrap estimator is similar to the estimator derived by the second order Taylor expansion of a squared deviation. Since the latter one has an explicit form, it is more efficient than the bootstrap estimator.
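As a concrete instance of the first-order (delta-method) baseline that the higher-order expansions above refine, the sketch below computes the classical variance approximation for the simpler Jukes-Cantor (JC69) model; the F81/F84/HKY85/TN93 derivations follow the same pattern with more parameters. This is an illustrative reconstruction, not the authors' estimator.

```python
import math

def jc69_distance(p):
    """JC69 substitution-number estimate from the observed proportion p
    of differing sites between two aligned sequences."""
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

def jc69_var_first_order(p, n):
    """First-order Taylor (delta-method) variance of the JC69 estimator:
    Var(d) ~ (dd/dp)^2 * Var(p), with Var(p) = p(1-p)/n for n sites."""
    dd_dp = 1.0 / (1.0 - 4.0 * p / 3.0)   # derivative of d with respect to p
    return dd_dp**2 * p * (1.0 - p) / n
```

A second-order expansion adds curvature terms in higher central moments of the observed proportion, which is what the comparison above finds to be more accurate.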
NASA Astrophysics Data System (ADS)
Anninos, Dionysios; Li, Wei; Padi, Megha; Song, Wei; Strominger, Andrew
2009-03-01
Three dimensional topologically massive gravity (TMG) with a negative cosmological constant −1/ℓ² and positive Newton constant G admits an AdS3 vacuum solution for any value of the graviton mass μ. These are all known to be perturbatively unstable except at the recently explored chiral point μℓ = 1. However we show herein that for every value of μℓ ≠ 3 there are two other (potentially stable) vacuum solutions given by SL(2,ℝ) × U(1)-invariant warped AdS3 geometries, with a timelike or spacelike U(1) isometry. Critical behavior occurs at μℓ = 3, where the warping transitions from a stretching to a squashing, and there are a pair of warped solutions with a null U(1) isometry. For μℓ > 3, there are known warped black hole solutions which are asymptotic to warped AdS3. We show that these black holes are discrete quotients of warped AdS3, just as BTZ black holes are discrete quotients of ordinary AdS3. Moreover, new solutions of this type, relevant to any theory with warped AdS3 solutions, are exhibited. Finally, we note that the black hole thermodynamics is consistent with the hypothesis that, for μℓ > 3, the warped AdS3 ground state of TMG is holographically dual to a 2D boundary CFT with central charges c_R and c_L.
Integrating Variances into an Analytical Database
NASA Technical Reports Server (NTRS)
Sanchez, Carlos
2010-01-01
For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make them easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance had been bypassed many times already, and so the requirement may not really be needed but rather should be changed to allow the variance's conditions permanently. This project was not restricted to the design and development of the database system; it also involved exporting the data from the database to a different format (e.g., Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part of what contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.
Variance in binary stellar population synthesis
NASA Astrophysics Data System (ADS)
Breivik, Katelyn; Larson, Shane L.
2016-03-01
In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.
A Simple Algorithm for Approximating Confidence on the Modified Allan Variance and the Time Variance
NASA Technical Reports Server (NTRS)
Weiss, Marc A.; Greenhall, Charles A.
1996-01-01
An approximating algorithm for computing equivalent degrees of freedom of the Modified Allan Variance and its square root, the Modified Allan Deviation (MVAR and MDEV), and the Time Variance and Time Deviation (TVAR and TDEV) is presented, along with an algorithm for approximating the inverse chi-square distribution.
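For reference, the quantities whose confidence intervals are being approximated can be computed from phase data with the standard textbook estimators below. This is a generic sketch; the report's degrees-of-freedom and inverse chi-square algorithms are not reproduced here.

```python
def mod_allan_var(x, m, tau0=1.0):
    """Modified Allan variance (MVAR) from phase samples x spaced tau0,
    at averaging factor m: averaged squared second differences of
    m-averaged phase, at averaging time tau = m * tau0."""
    tau = m * tau0
    n_terms = len(x) - 3 * m + 1
    acc = 0.0
    for j in range(n_terms):
        s = sum(x[i + 2 * m] - 2.0 * x[i + m] + x[i] for i in range(j, j + m))
        acc += s * s
    return acc / (2.0 * m * m * tau * tau * n_terms)

def time_var(x, m, tau0=1.0):
    """Time variance: TVAR = (tau^2 / 3) * MVAR."""
    tau = m * tau0
    return tau * tau / 3.0 * mod_allan_var(x, m, tau0)
```

A pure frequency offset (linear phase ramp) gives MVAR = 0, while a linear frequency drift (quadratic phase) gives a constant, nonzero value.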
NASA Astrophysics Data System (ADS)
Callebaut, Nele; Gubser, Steven S.; Samberg, Andreas; Toldo, Chiara
2015-11-01
We study segmented strings in flat space and in AdS 3. In flat space, these well known classical motions describe strings which at any instant of time are piecewise linear. In AdS 3, the worldsheet is composed of faces each of which is a region bounded by null geodesics in an AdS 2 subspace of AdS 3. The time evolution can be described by specifying the null geodesic motion of kinks in the string at which two segments are joined. The outcome of collisions of kinks on the worldsheet can be worked out essentially using considerations of causality. We study several examples of closed segmented strings in AdS 3 and find an unexpected quasi-periodic behavior. We also work out a WKB analysis of quantum states of yo-yo strings in AdS 5 and find a logarithmic term reminiscent of the logarithmic twist of string states on the leading Regge trajectory.
Variance of Dispersion Coefficients in Heterogeneous Porous Media
NASA Astrophysics Data System (ADS)
Dentz, Marco; De Barros, Felipe P. J.
2013-04-01
We study the dispersion of a passive solute in heterogeneous porous media using a stochastic modeling approach. Heterogeneity, on one hand, leads to an increase of solute spreading, which is described by the well-known macrodispersion phenomenon. On the other hand, it induces uncertainty about the dispersion behavior, which is quantified by ensemble averages over suitably defined dispersion coefficients in single medium realizations. We focus here on the sample-to-sample fluctuations of dispersion coefficients about their ensemble mean values for solutes evolving from point-like and extended source distributions in d = 2 and d = 3 spatial dimensions. The definition of dispersion coefficients in single medium realizations for finite source sizes is not unique, unlike for point-like sources. Thus, we first discuss a series of dispersion measures, which describe the extension of the solute plume, as well as dispersion measures that quantify the solute dispersion relative to the injection point. The sample-to-sample fluctuations of these observables are quantified in terms of the variance with respect to their ensemble averages. We find that the ensemble averages of these dispersion measures may be identical; their fluctuation behavior, however, may be very different. This is quantified using perturbation expansions in the fluctuations of the random flow field. We derive explicit expressions for the time evolution of the variance of the dispersion coefficients. The characteristic time scale for the variance evolution is given by the typical dispersion time over the characteristic heterogeneity scale and the dimensions of the source. We find that the dispersion variances asymptotically decrease to zero in d = 3 dimensions, which means that the dispersion coefficients are self-averaging observables, at least for moderate heterogeneity. In d = 2 dimensions, the variance converges towards a finite asymptotic value that is independent of the source distribution. Dispersion is not
Testing Interaction Effects without Discarding Variance.
ERIC Educational Resources Information Center
Lopez, Kay A.
Analysis of variance (ANOVA) and multiple regression are two of the most commonly used methods of data analysis in behavioral science research. Although ANOVA was intended for use with experimental designs, educational researchers have used ANOVA extensively in aptitude-treatment interaction (ATI) research. This practice tends to make researchers…
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 7 2010-07-01 2010-07-01 false Variances. 1920.2 Section 1920.2 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR (CONTINUED) PROCEDURE FOR VARIATIONS FROM SAFETY AND HEALTH REGULATIONS UNDER THE LONGSHOREMEN'S AND HARBOR...
Code of Federal Regulations, 2011 CFR
2011-04-01
... Dockets Management, except for information regarded as confidential under section 537(e) of the act. (d... Management (HFA-305), Food and Drug Administration, 5630 Fishers Lane, rm. 1061, Rockville, MD 20852. (1) The application for variance shall include the following information: (i) A description of the product and...
Formative Use of Intuitive Analysis of Variance
ERIC Educational Resources Information Center
Trumpower, David L.
2013-01-01
Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, student's IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In both…
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Variances. 1021.343 Section 1021.343 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) NATIONAL ENVIRONMENTAL POLICY ACT IMPLEMENTING PROCEDURES Implementing... arrangements for emergency actions having significant environmental impacts. DOE shall document,...
Code of Federal Regulations, 2010 CFR
2010-04-01
... the study was conducted in compliance with the good laboratory practice regulations set forth in part... application for variance shall include the following information: (i) A description of the product and its... equipment, the proposed location of each unit. (viii) Such other information required by regulation or...
Testing for homogeneity of variance in time series: Long memory, wavelets, and the Nile River
NASA Astrophysics Data System (ADS)
Whitcher, B.; Byers, S. D.; Guttorp, P.; Percival, D. B.
2002-05-01
We consider the problem of testing for homogeneity of variance in a time series with long memory structure. We demonstrate that a test whose null hypothesis is designed to be white noise can, in fact, be applied, on a scale by scale basis, to the discrete wavelet transform of long memory processes. In particular, we show that evaluating a normalized cumulative sum of squares test statistic using critical levels for the null hypothesis of white noise yields approximately the same null hypothesis rejection rates when applied to the discrete wavelet transform of samples from a fractionally differenced process. The point at which the test statistic, using a nondecimated version of the discrete wavelet transform, achieves its maximum value can be used to estimate the time of the unknown variance change. We apply our proposed test statistic on five time series derived from the historical record of Nile River yearly minimum water levels covering 622-1922 A.D., each series exhibiting various degrees of serial correlation including long memory. In the longest subseries, spanning 622-1284 A.D., the test confirms an inhomogeneity of variance at short time scales and identifies the change point around 720 A.D., which coincides closely with the construction of a new device around 715 A.D. for measuring the Nile River. The test also detects a change in variance for a record of only 36 years.
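The normalized cumulative sum of squares statistic underlying the test can be sketched as follows. This is an illustrative reconstruction; the paper applies it scale by scale to wavelet coefficients, and its critical values are not computed here.

```python
def cusum_of_squares(x):
    """Return (max |D_k|, argmax k) for the normalized cumulative sum of
    squares D_k = sum_{j<=k} x_j^2 / sum_j x_j^2 - k/N; the argmax
    estimates the location of a variance change."""
    total = sum(v * v for v in x)
    n = len(x)
    best, best_k, running = 0.0, 0, 0.0
    for k, v in enumerate(x, start=1):
        running += v * v
        d = abs(running / total - k / n)
        if d > best:
            best, best_k = d, k
    return best, best_k
```

On a series whose variance jumps partway through, the argmax falls at the change point, which is how the 720 A.D. change in the Nile series is located.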
Parameterization of Incident and Infragravity Swash Variance
NASA Astrophysics Data System (ADS)
Stockdon, H. F.; Holman, R. A.; Sallenger, A. H.
2002-12-01
By clearly defining the forcing and morphologic controls of swash variance in both the incident and infragravity frequency bands, we are able to derive a more complete parameterization for extreme runup that may be applicable to a wide range of beach and wave conditions. It is expected that the dynamics of the incident and infragravity bands will have different dependencies on offshore wave conditions and local beach slopes. For example, previous studies have shown that swash variance in the incident band depends on foreshore beach slope while the infragravity variance depends more on a weighted mean slope across the surf zone. Because the physics of each band is parameterized differently, the amount that each frequency band contributes to the total swash variance will vary from site to site and, often, at a single site as the profile configuration changes over time. Using water level time series (measured at the shoreline) collected during nine dynamically different field experiments, we test the expected behavior of both incident and infragravity swash and the contribution each makes to total variance. At the dissipative sites (Iribarren number ξ0 < 0.3) located in Oregon and the Netherlands, the incident band swash is saturated with respect to offshore wave height. Conversely, on the intermediate and reflective beaches, the amplitudes of both incident and infragravity swash variance grow with increasing offshore wave height. While infragravity band swash at all sites appears to increase linearly with offshore wave height, the magnitudes of the response are somewhat greater on reflective beaches than on dissipative beaches. This means that for the same offshore wave conditions the swash on a steeper foreshore will be larger than that on a more gently sloping foreshore. The potential control of the surf zone slope on infragravity band swash is examined at Duck, North Carolina (0.3 < ξ0 < 4.0), where significant differences in the relationship between swash
42 CFR 456.525 - Request for renewal of variance.
Code of Federal Regulations, 2010 CFR
2010-10-01
... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from...
42 CFR 456.525 - Request for renewal of variance.
Code of Federal Regulations, 2011 CFR
2011-10-01
... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from...
42 CFR 456.521 - Conditions for granting variance requests.
Code of Federal Regulations, 2010 CFR
2010-10-01
... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from...
42 CFR 456.521 - Conditions for granting variance requests.
Code of Federal Regulations, 2011 CFR
2011-10-01
... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from...
GR uniqueness and deformations
NASA Astrophysics Data System (ADS)
Krasnov, Kirill
2015-10-01
In the metric formulation, gravitons are described by the parity-symmetric S²₊ ⊗ S²₋ representation of the Lorentz group. General Relativity is then the unique theory of interacting gravitons with second order field equations. We show that if a chiral S³₊ ⊗ S₋ representation is used instead, the uniqueness is lost, and there is an infinite-parametric family of theories of interacting gravitons with second order field equations. We use the language of graviton scattering amplitudes and show how the uniqueness of GR is avoided using simple dimensional analysis. The resulting gravity theories, all distinct from GR, are parity asymmetric but share the MHV amplitudes of GR. They have new all-same-helicity graviton scattering amplitudes at every graviton order. The amplitudes with at least one graviton of opposite helicity continue to be determinable by the BCFW recursion.
Replica approach to mean-variance portfolio optimization
NASA Astrophysics Data System (ADS)
Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre
2016-12-01
We consider the problem of mean-variance portfolio optimization for a generic covariance matrix, subject to the budget constraint and the constraint on the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. At the critical point r = 1 a phase transition takes place. The out-of-sample estimation error blows up at this point as 1/(1 - r), independently of the covariance matrix or the expected return, displaying the universality not only of the critical exponent but also of the critical point. As a conspicuous illustration of the dangers of in-sample estimates, the optimal in-sample variance is found to vanish at the critical point, inversely proportional to the divergent estimation error.
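The in-sample optimism the authors warn about is easy to reproduce in a toy Monte Carlo. The sketch below is hypothetical and is not the replica calculation: it takes the true covariance to be the identity, so the true global minimum-variance portfolio has variance 1/N, while the portfolio estimated from a sample covariance shows a systematically smaller in-sample variance, with the gap growing as r = N/T approaches 1.

```python
import random

def solve(a, b):
    """Solve the linear system A y = b by Gauss-Jordan elimination with
    partial pivoting (A is symmetric positive definite here)."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(n):
            if r != c and m[r][c] != 0.0:
                f = m[r][c] / m[c][c]
                m[r] = [u - f * v for u, v in zip(m[r], m[c])]
    return [m[i][n] / m[i][i] for i in range(n)]

def in_sample_min_variance(n_assets, t_obs, rng):
    """In-sample variance of the estimated global minimum-variance
    portfolio w = S^{-1} 1 / (1' S^{-1} 1), where S is the sample
    covariance of t_obs i.i.d. standard-normal return vectors."""
    data = [[rng.gauss(0.0, 1.0) for _ in range(n_assets)] for _ in range(t_obs)]
    mean = [sum(col) / t_obs for col in zip(*data)]
    cov = [[sum((row[i] - mean[i]) * (row[j] - mean[j]) for row in data) / t_obs
            for j in range(n_assets)] for i in range(n_assets)]
    y = solve(cov, [1.0] * n_assets)   # y = S^{-1} 1
    return 1.0 / sum(y)                # w' S w at the in-sample optimum
```

With N = 10 and T = 40 (r = 0.25), the in-sample variance averages roughly (1 - r)/N, below the true optimum 0.1, foreshadowing its vanishing as r approaches 1.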
Analysis of variance of microarray data.
Ayroles, Julien F; Gibson, Greg
2006-01-01
Analysis of variance (ANOVA) is an approach used to identify differentially expressed genes in complex experimental designs. It is based on testing for the significance of the magnitude of effect of two or more treatments taking into account the variance within and between treatment classes. ANOVA is a highly flexible analytical approach that allows investigators to simultaneously assess the contributions of multiple factors to gene expression variation, including technical (dye, batch) effects and biological (sex, genotype, drug, time) ones, as well as interactions between factors. This chapter provides an overview of the theory of linear mixture modeling and the sequence of steps involved in fitting gene-specific models and discusses essential features of experimental design. Commercial and open-source software for performing ANOVA is widely available.
Analysis of Variance of Multiply Imputed Data.
van Ginkel, Joost R; Kroonenberg, Pieter M
2014-01-01
As a procedure for handling missing data, multiple imputation consists of estimating the missing data multiple times to create several complete versions of an incomplete data set. All these data sets are analyzed by the same statistical procedure, and the results are pooled for interpretation. So far, no explicit rules for pooling F-tests of (repeated-measures) analysis of variance have been defined. In this paper we outline the appropriate procedure for pooling the results of analysis of variance for multiply imputed data sets. It involves both reformulation of the ANOVA model as a regression model using effect coding of the predictors, and applying already existing combination rules for regression models. The proposed procedure is illustrated using three example data sets. The pooled results of these three examples provide plausible F- and p-values.
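The combination step that the proposed procedure builds on is Rubin's rules for pooling a scalar estimate over m imputations. The sketch below shows the generic scalar version, not the paper's F-test pooling, which applies analogous rules to the regression coefficients obtained under effect coding.

```python
def rubin_pool(estimates, variances):
    """Rubin's rules: pool point estimates and their sampling variances
    from m imputed data sets into one estimate and one total variance."""
    m = len(estimates)
    qbar = sum(estimates) / m                              # pooled estimate
    w = sum(variances) / m                                 # within-imputation
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between-imputation
    total = w + (1.0 + 1.0 / m) * b                        # total variance
    return qbar, total
```

The between-imputation term b is what the pooled variance gains over a single-imputation analysis, reflecting uncertainty due to the missing data.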
ERIC Educational Resources Information Center
Goble, Don
2009-01-01
This article describes the many learning opportunities that broadcast technology students at Ladue Horton Watkins High School in St. Louis, Missouri, experience because of their unique access to technology and methods of learning. Through scaffolding, stepladder techniques, and trial by fire, students learn to produce multiple television programs,…
Estimating Modifying Effect of Age on Genetic and Environmental Variance Components in Twin Models
He, Liang; Sillanpää, Mikko J.; Silventoinen, Karri; Kaprio, Jaakko; Pitkäniemi, Janne
2016-01-01
Twin studies have been adopted for decades to disentangle the relative genetic and environmental contributions for a wide range of traits. However, heritability estimation based on the classical twin models does not take into account dynamic behavior of the variance components over age. Varying variance of the genetic component over age can imply the existence of gene–environment (G × E) interactions that general genome-wide association studies (GWAS) fail to capture, which may lead to the inconsistency of heritability estimates between twin design and GWAS. Existing parametric G × E interaction models for twin studies are limited by assuming a linear or quadratic form of the variance curves with respect to a moderator that can, however, be overly restricted in reality. Here we propose spline-based approaches to explore the variance curves of the genetic and environmental components. We choose the additive genetic, common, and unique environmental variance components (ACE) model as the starting point. We treat the component variances as variance functions with respect to age modeled by B-splines or P-splines. We develop an empirical Bayes method to estimate the variance curves together with their confidence bands and provide an R package for public use. Our simulations demonstrate that the proposed methods accurately capture dynamic behavior of the component variances in terms of mean square errors with a data set of >10,000 twin pairs. Using the proposed methods as an alternative and major extension to the classical twin models, our analyses with a large-scale Finnish twin data set (19,510 MZ twins and 27,312 DZ same-sex twins) discover that the variances of the A, C, and E components for body mass index (BMI) change substantially across life span in different patterns and the heritability of BMI drops to ∼50% after middle age. The results further indicate that the decline of heritability is due to increasing unique environmental variance, which provides
Directional variance analysis of annual rings
NASA Astrophysics Data System (ADS)
Kumpulainen, P.; Marjanen, K.
2010-07-01
Wood quality measurement methods are of increasing importance in the wood industry. The goal is to produce more high-quality products, with higher market value, than are produced today. One of the key factors for increasing the market value is to provide better measurements and thus more information to support the decisions made later in the product chain. Strength and stiffness are important properties of wood. They are related to the mean annual ring width and its deviation. These indicators can be estimated from images taken of the log ends by two-dimensional power spectrum analysis. Spectrum analysis has been used successfully for images of pine. However, the annual rings in birch, for example, are less distinguishable, and the basic spectrum analysis method does not give reliable results. A novel method for local log-end variance analysis based on the Radon transform is proposed. The directions and positions of the annual rings can be estimated from local minimum and maximum variance estimates. Applying the spectrum analysis to the maximum local variance estimate instead of the original image produces a more reliable estimate of the annual ring width. The proposed method is not limited to log-end analysis only; it is usable in other two-dimensional random signal and texture analysis tasks.
Variance and skewness in the FIRST survey
NASA Astrophysics Data System (ADS)
Magliocchetti, M.; Maddox, S. J.; Lahav, O.; Wall, J. V.
1998-10-01
We investigate the large-scale clustering of radio sources in the FIRST 1.4-GHz survey by analysing the distribution function (counts in cells). We select a reliable sample from the FIRST catalogue, paying particular attention to the problem of how to define single radio sources from the multiple components listed. We also consider the incompleteness of the catalogue. We estimate the angular two-point correlation function w(θ), the variance Ψ2, and the skewness Ψ3 of the distribution for the various subsamples chosen on different criteria. Both w(θ) and Ψ2 show power-law behaviour with an amplitude corresponding to a spatial correlation length of r0 ≈ 10 h⁻¹ Mpc. We detect significant skewness in the distribution, the first such detection in radio surveys. This skewness is found to be related to the variance through Ψ3 = S3(Ψ2)^α, with α = 1.9 ± 0.1, consistent with the non-linear gravitational growth of perturbations from primordial Gaussian initial conditions. We show that the amplitudes of the variance and skewness are consistent with realistic models of galaxy clustering.
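The counts-in-cells moments can be estimated with the standard shot-noise-corrected formulas for a Poisson-sampled density field. This is a sketch of the generic estimators only, not the paper's full pipeline with its completeness corrections and multi-component source definitions.

```python
def counts_in_cells_moments(counts):
    """Shot-noise-corrected variance (Psi2) and third moment (Psi3) of the
    underlying density field from source counts in equal cells, using
    mu2 = nbar + nbar^2 Psi2 and mu3 = nbar + 3 nbar^2 Psi2 + nbar^3 Psi3."""
    n = len(counts)
    nbar = sum(counts) / n
    mu2 = sum((c - nbar) ** 2 for c in counts) / n   # second central moment
    mu3 = sum((c - nbar) ** 3 for c in counts) / n   # third central moment
    psi2 = (mu2 - nbar) / nbar**2
    psi3 = (mu3 - nbar - 3.0 * nbar**2 * psi2) / nbar**3
    return psi2, psi3
```

Subtracting nbar in each numerator removes the discreteness (shot-noise) contribution of Poisson sampling, leaving the moments of the continuous field.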
Hypothesis exploration with visualization of variance
2014-01-01
Background The Consortium for Neuropsychiatric Phenomics (CNP) at UCLA was an investigation into the biological bases of traits such as memory and response inhibition phenotypes—to explore whether they are linked to syndromes including ADHD, Bipolar disorder, and Schizophrenia. An aim of the consortium was in moving from traditional categorical approaches for psychiatric syndromes towards more quantitative approaches based on large-scale analysis of the space of human variation. It represented an application of phenomics—wide-scale, systematic study of phenotypes—to neuropsychiatry research. Results This paper reports on a system for exploration of hypotheses in data obtained from the LA2K, LA3C, and LA5C studies in CNP. ViVA is a system for exploratory data analysis using novel mathematical models and methods for visualization of variance. An example of these methods is called VISOVA, a combination of visualization and analysis of variance, with the flavor of exploration associated with ANOVA in biomedical hypothesis generation. It permits visual identification of phenotype profiles—patterns of values across phenotypes—that characterize groups. Visualization enables screening and refinement of hypotheses about variance structure of sets of phenotypes. Conclusions The ViVA system was designed for exploration of neuropsychiatric hypotheses by interdisciplinary teams. Automated visualization in ViVA supports ‘natural selection’ on a pool of hypotheses, and permits deeper understanding of the statistical architecture of the data. Large-scale perspective of this kind could lead to better neuropsychiatric diagnostics. PMID:25097666
Variance Reduction Factor of Nuclear Data for Integral Neutronics Parameters
Chiba, G.; Tsuji, M.; Narabayashi, T.
2015-01-15
We propose a new quantity, a variance reduction factor, to identify nuclear data for which further improvements are required to reduce uncertainties of target integral neutronics parameters. Important energy ranges can also be identified with this variance reduction factor. Variance reduction factors are calculated for several integral neutronics parameters, and their usefulness is demonstrated.
Applications of non-parametric statistics and analysis of variance on sample variances
NASA Technical Reports Server (NTRS)
Myers, R. H.
1981-01-01
Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to offer recommendations as to when each method is applicable, and to compare the methods, where possible, with the usual normal-theory procedures available for the Gaussian analog. It is important to point out the hypotheses being tested, the assumptions being made, and the limitations of the nonparametric procedures. The appropriateness of performing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects, and on the surface it would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.
Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans
NASA Astrophysics Data System (ADS)
Raju, C.; Vidya, R.
2016-06-01
In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1,c2) are studied. It is assumed that the inspection is rejection rectification. Procedures for designing MDS-1(c1,c2) sampling plans with minimum variance of OQ and TI are developed. A procedure for obtaining a plan for a designated upper limit for the variance of the OQ (VOQL) is outlined.
Event segmentation ability uniquely predicts event memory.
Sargent, Jesse Q; Zacks, Jeffrey M; Hambrick, David Z; Zacks, Rose T; Kurby, Christopher A; Bailey, Heather R; Eisenberg, Michelle L; Beck, Taylor M
2013-11-01
Memory for everyday events plays a central role in tasks of daily living, autobiographical memory, and planning. Event memory depends in part on segmenting ongoing activity into meaningful units. This study examined the relationship between event segmentation and memory in a lifespan sample to answer the following question: Is the ability to segment activity into meaningful events a unique predictor of subsequent memory, or is the relationship between event perception and memory accounted for by general cognitive abilities? Two hundred and eight adults ranging from 20 to 79 years old segmented movies of everyday events and attempted to remember the events afterwards. They also completed psychometric ability tests and tests measuring script knowledge for everyday events. Event segmentation and script knowledge both explained unique variance in event memory above and beyond the psychometric measures, and did so as strongly in older as in younger adults. These results suggest that event segmentation is a basic cognitive mechanism, important for memory across the lifespan.
Uniquely human social cognition.
Saxe, Rebecca
2006-04-01
Recent data identify distinct components of social cognition associated with five brain regions. In posterior temporal cortex, the extrastriate body area is associated with perceiving the form of other human bodies. A nearby region in the posterior superior temporal sulcus is involved in interpreting the motions of a human body in terms of goals. A distinct region at the temporo-parietal junction supports the uniquely human ability to reason about the contents of mental states. Medial prefrontal cortex is divided into at least two subregions. Ventral medial prefrontal cortex is implicated in emotional empathy, whereas dorsal medial prefrontal cortex is implicated in the uniquely human representation of triadic relations between two minds and an object, supporting shared attention and collaborative goals.
NASA Astrophysics Data System (ADS)
Morales, Jose F.; Samtleben, Henning
2003-06-01
We review recent work on the holographic duals of type II and heterotic matrix string theories described by warped AdS3 supergravities. In particular, we compute the spectra of Kaluza-Klein primaries for type I, II supergravities on warped AdS3 × S7 and match them with the primary operators in the dual two-dimensional gauge theories. The presence of non-trivial warp factors and dilaton profiles requires a modification of the familiar dictionary between masses and 'scaling' dimensions of fields and operators. We present these modifications for the general case of domain wall/QFT correspondences between supergravities on warped AdSd+1 × Sq geometries and super Yang-Mills theories with 16 supercharges.
Visual SLAM Using Variance Grid Maps
NASA Technical Reports Server (NTRS)
Howard, Andrew B.; Marks, Tim K.
2011-01-01
An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
The defect variance of random spherical harmonics
NASA Astrophysics Data System (ADS)
Marinucci, Domenico; Wigman, Igor
2011-09-01
The defect of a function f: M → R is defined as the difference between the measure of its positive and negative regions. In this paper, we begin the analysis of the distribution of the defect of random Gaussian spherical harmonics. By an easy argument, the defect is non-trivial only for even degree, and the expected value always vanishes. Our principal result is an evaluation of the defect variance, asymptotically in the high-frequency limit. As with other geometric functionals of random eigenfunctions, the defect may be used as a tool to probe the statistical properties of spherical random fields, a topic of great interest for modern cosmological data analysis.
NASA's unique networking environment
NASA Technical Reports Server (NTRS)
Johnson, Marjory J.
1988-01-01
Networking is an infrastructure technology; it is a tool for NASA to support its space and aeronautics missions. Some of NASA's networking problems are shared by the commercial and/or military communities, and can be solved by working with these communities. However, some of NASA's networking problems are unique and will not be addressed by these other communities. Individual characteristics of NASA's space-mission networking enviroment are examined, the combination of all these characteristics that distinguish NASA's networking systems from either commercial or military systems is explained, and some research areas that are important for NASA to pursue are outlined.
River meanders - Theory of minimum variance
Langbein, Walter Basil; Leopold, Luna Bergere
1966-01-01
Meanders are the result of erosion-deposition processes tending toward the most stable form, in which the variability of certain essential properties is minimized. This minimization involves the adjustment of the planimetric geometry and the hydraulic factors of depth, velocity, and local slope. The planimetric geometry of a meander is that of a random walk whose most frequent form minimizes the sum of the squares of the changes in direction in each successive unit length. The direction angles are then sine functions of channel distance. This yields a meander shape typically present in meandering rivers and has the characteristic that the ratio of meander length to average radius of curvature in the bend is 4.7. Depth, velocity, and slope are shown by field observations to be adjusted so as to decrease the variance of shear and the friction factor in a meander curve relative to an otherwise comparable straight reach of the same river. Since theory and observation indicate meanders achieve the minimum variance postulated, it follows that for channels in which alternating pools and riffles occur, meandering is the most probable form of channel geometry and is thus a more stable geometry than a straight or nonmeandering alignment.
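The sine-generated curve described above, in which the direction angle is a sine function of channel distance, can be sketched numerically. This is an illustrative reconstruction only; the maximum deflection angle (110°) and wavelength are assumed values, not taken from the paper:

```python
import numpy as np

# Sine-generated curve: the direction angle is a sine function of the
# distance s along the channel, theta(s) = omega * sin(2*pi*s / M),
# which minimizes the sum of squared changes in direction.
omega = np.deg2rad(110.0)   # maximum deflection angle (assumed, illustrative)
M = 100.0                   # meander wavelength measured along the channel
s = np.linspace(0.0, 2 * M, 2000)
theta = omega * np.sin(2 * np.pi * s / M)

# Integrate the direction angle to get the planimetric (x, y) channel path.
ds = s[1] - s[0]
x = np.cumsum(np.cos(theta)) * ds
y = np.cumsum(np.sin(theta)) * ds
```

Plotting `y` against `x` traces the familiar meander loops; over whole wavelengths the lateral excursions cancel, so the channel progresses steadily down-valley.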
Variance and Skewness in the FIRST Survey
NASA Astrophysics Data System (ADS)
Magliocchetti, M.; Maddox, S. J.; Lahav, O.; Wall, J. V.
We investigate the large-scale clustering of radio sources by analysing the distribution function of the FIRST 1.4 GHz survey. We select a reliable galaxy sample from the FIRST catalogue, paying particular attention to the definition of single radio sources from the multiple components listed in the FIRST catalogue. We estimate the variance, Ψ2, and skewness, Ψ3, of the distribution function for the best galaxy subsample. Ψ2 shows power-law behaviour as a function of cell size, with an amplitude corresponding to a spatial correlation length of r0 ~10 h-1 Mpc. We detect significant skewness in the distribution, and find that it is related to the variance through the relation Ψ3 = S3 (Ψ2)α with α = 1.9 +/- 0.1, consistent with the non-linear growth of perturbations from primordial Gaussian initial conditions. We show that the amplitude of clustering (corresponding to a spatial correlation length of r0 ~10 h-1 Mpc) and the skewness are consistent with realistic models of galaxy clustering.
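The hierarchical relation Ψ3 = S3 (Ψ2)^α reported above is typically fit by least squares in log-log space. A minimal sketch on synthetic counts-in-cells moments (the true values S3 = 2 and the scatter level are assumptions for illustration, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic counts-in-cells moments obeying Psi3 = S3 * Psi2**alpha
# with alpha = 1.9, plus mild log-normal scatter (assumed values).
alpha_true, s3_true = 1.9, 2.0
psi2 = np.logspace(-2, 0, 20)                      # variance at several cell sizes
psi3 = s3_true * psi2**alpha_true * rng.lognormal(0.0, 0.05, psi2.size)

# Fit log Psi3 = log S3 + alpha * log Psi2 by ordinary least squares.
slope, intercept = np.polyfit(np.log(psi2), np.log(psi3), 1)
alpha_hat, s3_hat = slope, np.exp(intercept)
print(f"alpha = {alpha_hat:.2f}, S3 = {s3_hat:.2f}")
```

A value of α near 2 is what hierarchical clustering from Gaussian initial conditions predicts, which is why the fitted exponent is the quantity of interest.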
Multivariate Granger causality and generalized variance
NASA Astrophysics Data System (ADS)
Barrett, Adam B.; Barnett, Lionel; Seth, Anil K.
2010-04-01
Granger causality analysis is a popular method for inference on directed interactions in complex systems of many variables. A shortcoming of the standard framework for Granger causality is that it only allows for examination of interactions between single (univariate) variables within a system, perhaps conditioned on other variables. However, interactions do not necessarily take place between single variables but may occur among groups or “ensembles” of variables. In this study we establish a principled framework for Granger causality in the context of causal interactions among two or more multivariate sets of variables. Building on Geweke’s seminal 1982 work, we offer additional justifications for one particular form of multivariate Granger causality based on the generalized variances of residual errors. Taken together, our results support a comprehensive and theoretically consistent extension of Granger causality to the multivariate case. Treated individually, they highlight several specific advantages of the generalized variance measure, which we illustrate using applications in neuroscience as an example. We further show how the measure can be used to define “partial” Granger causality in the multivariate context and we also motivate reformulations of “causal density” and “Granger autonomy.” Our results are directly applicable to experimental data and promise to reveal new types of functional relations in complex systems, neural and otherwise.
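The generalized-variance form of multivariate Granger causality discussed above can be sketched as the log-ratio of residual generalized variances (determinants of residual covariances) between a reduced and a full vector autoregression. This is an illustrative toy example with simulated data, one lag, and plain least-squares fits, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate: a scalar source y drives a 2-d target x with one lag.
T = 5000
y = rng.standard_normal(T)
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = 0.4 * x[t - 1] + np.array([0.5, -0.3]) * y[t - 1] \
           + 0.1 * rng.standard_normal(2)

def residual_cov(target, predictors):
    """Least-squares regression of target on predictors; residual covariance."""
    beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    resid = target - predictors @ beta
    return np.cov(resid.T)

ones = np.ones((T - 1, 1))
full = np.hstack([ones, x[:-1], y[:-1, None]])   # past of x and past of y
reduced = np.hstack([ones, x[:-1]])              # past of x only
sigma_full = residual_cov(x[1:], full)
sigma_red = residual_cov(x[1:], reduced)

# Generalized-variance Granger causality y -> x: log det ratio of
# reduced vs. full residual covariances (positive when y helps predict x).
gc = np.log(np.linalg.det(sigma_red) / np.linalg.det(sigma_full))
print(f"GC(y -> x) = {gc:.3f}")
```

Because `y` genuinely drives both components of `x`, the reduced model's residual generalized variance is much larger and the statistic comes out clearly positive.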
Hybrid biasing approaches for global variance reduction.
Wu, Zeyun; Abdel-Khalik, Hany S
2013-02-01
A new variant of Monte Carlo-deterministic (DT) hybrid variance reduction approach based on Gaussian process theory is presented for accelerating convergence of Monte Carlo simulation and compared with Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) approach implemented in the SCALE package from Oak Ridge National Laboratory. The new approach, denoted the Gaussian process approach, treats the responses of interest as normally distributed random processes. The Gaussian process approach improves the selection of the weight windows of simulated particles by identifying a subspace that captures the dominant sources of statistical response variations. Like the FW-CADIS approach, the Gaussian process approach utilizes particle importance maps obtained from deterministic adjoint models to derive weight window biasing. In contrast to the FW-CADIS approach, the Gaussian process approach identifies the response correlations (via a covariance matrix) and employs them to reduce the computational overhead required for global variance reduction (GVR) purpose. The effective rank of the covariance matrix identifies the minimum number of uncorrelated pseudo responses, which are employed to bias simulated particles. Numerical experiments, serving as a proof of principle, are presented to compare the Gaussian process and FW-CADIS approaches in terms of the global reduction in standard deviation of the estimated responses.
Abel, David L.
2011-01-01
Is life physicochemically unique? No. Is life unique? Yes. Life manifests innumerable formalisms that cannot be generated or explained by physicodynamics alone. Life pursues thousands of biofunctional goals, not the least of which is staying alive. Neither physicodynamics, nor evolution, pursue goals. Life is largely directed by linear digital programming and by the Prescriptive Information (PI) instantiated particularly into physicodynamically indeterminate nucleotide sequencing. Epigenomic controls only compound the sophistication of these formalisms. Life employs representationalism through the use of symbol systems. Life manifests autonomy, homeostasis far from equilibrium in the harshest of environments, positive and negative feedback mechanisms, prevention and correction of its own errors, and organization of its components into Sustained Functional Systems (SFS). Chance and necessity—heat agitation and the cause-and-effect determinism of nature’s orderliness—cannot spawn formalisms such as mathematics, language, symbol systems, coding, decoding, logic, organization (not to be confused with mere self-ordering), integration of circuits, computational success, and the pursuit of functionality. All of these characteristics of life are formal, not physical. PMID:25382119
77 FR 40735 - Unique Device Identification System
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-10
...The Food and Drug Administration (FDA) is proposing to establish a unique device identification system to implement the requirement added to the Federal Food, Drug, and Cosmetic Act (FD&C Act) by section 226 of the Food and Drug Administration Amendments Act of 2007 (FDAAA). Section 226 of FDAAA amended the FD&C Act to add new section 519(f), which directs FDA to promulgate regulations...
Clarke, Peter; Varghese, Philip; Goldstein, David
2014-12-09
We extend a variance reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. The method is verified through comparisons to Direct Simulation Monte Carlo computations and the computational workload per time step is investigated for the variance reduced method.
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2012 CFR
2012-07-01
... its application is complete. (d) The Administrator will issue a variance if the criteria specified in... entity will achieve compliance with this subpart. (f) A variance will cease to be effective upon...
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2014 CFR
2014-07-01
... its application is complete. (d) The Administrator will issue a variance if the criteria specified in... entity will achieve compliance with this subpart. (f) A variance will cease to be effective upon...
Applications of Variance Fractal Dimension: a Survey
NASA Astrophysics Data System (ADS)
Phinyomark, Angkoon; Phukpattaranont, Pornchai; Limsakul, Chusak
2012-04-01
Chaotic dynamical systems are pervasive in nature and can be shown to be deterministic through fractal analysis. There are numerous methods that can be used to estimate the fractal dimension. Among the usual fractal estimation methods, variance fractal dimension (VFD) is one of the most significant fractal analysis methods that can be implemented for real-time systems. The basic concept and theory of VFD are presented. Recent research and the development of several applications based on VFD are reviewed and explained in detail, such as biomedical signal processing and pattern recognition, speech communication, geophysical signal analysis, power systems and communication systems. The important parameters that need to be considered in computing the VFD are discussed, including the window size and the window increment of the feature, and the step size of the VFD. Directions for future research of VFD are also briefly outlined.
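A minimal sketch of the variance fractal dimension for a 1-d signal, assuming the standard power-law relation Var[x(t+Δt) − x(t)] ∝ Δt^(2H) with D = 2 − H for a time series; the lag choices and signal length here are arbitrary illustration, not the parameter guidance discussed in the survey:

```python
import numpy as np

rng = np.random.default_rng(2)

def variance_fractal_dimension(x, lags):
    """Estimate VFD of a 1-d signal: Var[x(t+k) - x(t)] ~ k**(2H), D = 2 - H."""
    log_var = [np.log(np.var(x[k:] - x[:-k])) for k in lags]
    slope, _ = np.polyfit(np.log(lags), log_var, 1)
    hurst = slope / 2.0
    return 2.0 - hurst

# Ordinary Brownian motion has H = 0.5, so its VFD should be near 1.5.
signal = np.cumsum(rng.standard_normal(100_000))
lags = np.array([1, 2, 4, 8, 16, 32])
d = variance_fractal_dimension(signal, lags)
print(f"VFD = {d:.2f}")
```

In the applications the survey reviews, this estimate is usually computed over a sliding window so that the trajectory of D itself becomes a feature for pattern recognition.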
Jager, Justin; Bornstein, Marc H.; Putnick, Diane L.; Hendricks, Charlene
2012-01-01
Using the Family Assessment Device (FAD; Epstein, Baldwin, & Bishop, 1983) and incorporating the perspectives of adolescent, mother, and father, this study examined each family member's “unique perspective” or non-shared, idiosyncratic view of the family. To do so we used a modified multitrait-multimethod confirmatory factor analysis that (1) isolated for each family member's six reports of family dysfunction the non-shared variance (a combination of variance idiosyncratic to the individual and measurement error) from variance shared by one or more family members and (2) extracted common variance across each family member's set of non-shared variances. The sample included 128 families from a U.S. East Coast metropolitan area. Each family member's unique perspective generalized across his or her different reports of family dysfunction and accounted for a sizable proportion of his or her own variance in reports of family dysfunction. Additionally, after holding level of dysfunction constant across families and controlling for a family's shared variance (agreement regarding family dysfunction), each family member's unique perspective was associated with his or her own adjustment. Future applications and competing alternatives for what these “unique perspectives” reflect about the family are discussed. PMID:22545933
Considering Oil Production Variance as an Indicator of Peak Production
2010-06-07
[Figure residue: Imported Refiner Acquisition Cost (IRAC) oil prices; data from the EIA, http://tonto.eia.doe.gov/country/timeline/oil_chronology.cfm] Production vs. Price – Variance Comparison: oil production variance and oil price variance have never been so far
A New Nonparametric Levene Test for Equal Variances
ERIC Educational Resources Information Center
Nordstokke, David W.; Zumbo, Bruno D.
2010-01-01
Tests of the equality of variances are sometimes used on their own to compare variability across groups of experimental or non-experimental conditions but they are most often used alongside other methods to support assumptions made about variances. A new nonparametric test of equality of variances is described and compared to current "gold…
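A rank-based Levene-type test in the spirit of the abstract can be sketched as follows: pool the data, replace values by their ranks, then apply the classical mean-based Levene ANOVA to the rank-transformed groups. This is a hand-rolled illustration that may differ in detail from the exact procedure in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def levene_w(groups):
    """Levene's W: the one-way ANOVA F statistic on |x - group mean|."""
    z = [np.abs(g - g.mean()) for g in groups]
    n = np.array([len(g) for g in groups])
    N, k = n.sum(), len(groups)
    zbar_i = np.array([zi.mean() for zi in z])
    zbar = np.concatenate(z).mean()
    between = (n * (zbar_i - zbar) ** 2).sum() / (k - 1)
    within = sum(((zi - m) ** 2).sum() for zi, m in zip(z, zbar_i)) / (N - k)
    return between / within

def rank_transform(groups):
    """Replace pooled values by their ranks, then split back into groups."""
    pooled = np.concatenate(groups)
    ranks = np.argsort(np.argsort(pooled)) + 1.0
    sizes = np.cumsum([len(g) for g in groups])[:-1]
    return np.split(ranks, sizes)

# Equal means but unequal variances: the rank-based test should flag this.
g1 = rng.normal(0.0, 1.0, 200)
g2 = rng.normal(0.0, 3.0, 200)
w = levene_w(rank_transform([g1, g2]))
print(f"W on ranks = {w:.1f}")
```

The rank transformation removes distribution-shape effects that inflate the Type I error of the mean-based test under skewed data, which is the motivation for the nonparametric variant.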
Cyclostationary analysis with logarithmic variance stabilisation
NASA Astrophysics Data System (ADS)
Borghesani, Pietro; Shahriar, Md Rifat
2016-03-01
Second order cyclostationary (CS2) components in vibration or acoustic emission signals are typical symptoms of a wide variety of faults in rotating and alternating mechanical systems. The square envelope spectrum (SES), obtained via Hilbert transform of the original signal, is at the basis of the most common indicators used for detection of CS2 components. It has been shown that the SES is equivalent to an autocorrelation of the signal's discrete Fourier transform, and that CS2 components are a cause of high correlations in the frequency domain of the signal, thus resulting in peaks in the SES. Statistical tests have been proposed to determine if peaks in the SES are likely to belong to a normal variability in the signal or if they are proper symptoms of CS2 components. Despite the need for automated fault recognition and the theoretical soundness of these tests, this approach to machine diagnostics has been mostly neglected in industrial applications. In fact, in a series of experimental applications, even with proper pre-whitening steps, it has been found that healthy machines might produce high spectral correlations and therefore result in a highly biased SES distribution which might cause a series of false positives. In this paper a new envelope spectrum is defined, with the theoretical intent of rendering the hypothesis test variance-free. This newly proposed indicator will prove unbiased in case of multiple CS2 sources of spectral correlation, thus reducing the risk of false alarms.
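The square envelope spectrum underlying the discussion above is straightforward to compute: build the analytic signal via the FFT (equivalent to a Hilbert transform), square its envelope, and take the spectrum. In this toy sketch a 120 Hz amplitude modulation stands in for a fault frequency; the specific frequencies and noise level are assumptions:

```python
import numpy as np

fs = 10_000.0
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(4)

# CS2-like signal: a 2 kHz carrier amplitude-modulated at 120 Hz
# (a crude stand-in for repetitive fault impacts) plus noise.
x = (1.0 + 0.8 * np.cos(2 * np.pi * 120 * t)) * np.cos(2 * np.pi * 2000 * t)
x = x + 0.1 * rng.standard_normal(t.size)

# Analytic signal via the FFT (equivalent to a Hilbert transform).
X = np.fft.fft(x)
h = np.zeros(x.size)
h[0] = 1.0
h[1:x.size // 2] = 2.0
h[x.size // 2] = 1.0
analytic = np.fft.ifft(X * h)

# Square envelope spectrum: spectrum of the squared envelope.
ses = np.abs(np.fft.rfft(np.abs(analytic) ** 2))
freqs = np.fft.rfftfreq(x.size, 1 / fs)
peak = freqs[1:][np.argmax(ses[1:])]        # ignore the DC bin
print(f"SES peak at {peak:.0f} Hz")
```

The dominant non-DC peak lands at the modulation frequency, which is exactly the kind of spectral line whose statistical significance the hypothesis tests discussed above try to assess.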
Correcting an analysis of variance for clustering.
Hedges, Larry V; Rhoads, Christopher H
2011-02-01
A great deal of educational and social data arises from cluster sampling designs where clusters involve schools, classrooms, or communities. A mistake that is sometimes encountered in the analysis of such data is to ignore the effect of clustering and analyse the data as if it were based on a simple random sample. This typically leads to an overstatement of the precision of results and too liberal conclusions about precision and statistical significance of mean differences. This paper gives simple corrections to the test statistics that would be computed in an analysis of variance if clustering were (incorrectly) ignored. The corrections are multiplicative factors depending on the total sample size, the cluster size, and the intraclass correlation structure. For example, the corrected F statistic has Fisher's F distribution with reduced degrees of freedom. The corrected statistic reduces to the F statistic computed by ignoring clustering when the intraclass correlations are zero. It reduces to the F statistic computed using cluster means when the intraclass correlations are unity, and it is in between otherwise. A similar adjustment to the usual statistic for testing a linear contrast among group means is described.
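The direction of the correction can be illustrated with the familiar design-effect heuristic for balanced clusters. Note this is only a rough stand-in: the paper derives an exact multiplicative factor with adjusted degrees of freedom, which this sketch does not reproduce:

```python
def design_effect(m, icc):
    """Variance inflation for clusters of size m with intraclass correlation icc."""
    return 1.0 + (m - 1) * icc

def naive_to_corrected_f(f_naive, m, icc):
    """Rough correction: deflate an F statistic computed ignoring clustering.

    This is the design-effect heuristic only, not the exact factor (with
    reduced degrees of freedom) derived by Hedges & Rhoads.
    """
    return f_naive / design_effect(m, icc)

# Example: F = 6.0 from an analysis that ignored clusters of 25 students
# with icc = 0.2 is far less impressive once clustering is acknowledged.
f_corr = naive_to_corrected_f(6.0, 25, 0.2)
print(f"deff = {design_effect(25, 0.2):.1f}, corrected F = {f_corr:.2f}")
```

With ρ = 0 the factor is 1 and the naive F is recovered, matching the limiting behaviour described in the abstract.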
NASA Astrophysics Data System (ADS)
Bena, Iosif; Heurtier, Lucien; Puhm, Andrea
2016-05-01
It was argued in [1] that the five-dimensional near-horizon extremal Kerr (NHEK) geometry can be embedded in String Theory as the infrared region of an infinite family of non-supersymmetric geometries that have D1, D5, momentum and KK monopole charges. We show that there exists a method to embed these geometries into asymptotically AdS3 × S3/ZN solutions, and hence to obtain infinite families of flows whose infrared is NHEK. This indicates that the CFT dual to the NHEK geometry is the IR fixed point of a Renormalization Group flow from a known local UV CFT and opens the door to its explicit construction.
NASA Astrophysics Data System (ADS)
Cvetič, Mirjam; Papadimitriou, Ioannis
2016-12-01
We construct the holographic dictionary for both running and constant dilaton solutions of the two dimensional Einstein-Maxwell-Dilaton theory that is obtained by a circle reduction from Einstein-Hilbert gravity with negative cosmological constant in three dimensions. This specific model ensures that the dual theory has a well defined ultraviolet completion in terms of a two dimensional conformal field theory, but our results apply qualitatively to a wider class of two dimensional dilaton gravity theories. For each type of solutions we perform holographic renormalization, compute the exact renormalized one-point functions in the presence of arbitrary sources, and derive the asymptotic symmetries and the corresponding conserved charges. In both cases we find that the scalar operator dual to the dilaton plays a crucial role in the description of the dynamics. Its source gives rise to a matter conformal anomaly for the running dilaton solutions, while its expectation value is the only non trivial observable for constant dilaton solutions. The role of this operator has been largely overlooked in the literature. We further show that the only non trivial conserved charges for running dilaton solutions are the mass and the electric charge, while for constant dilaton solutions only the electric charge is non zero. However, by uplifting the solutions to three dimensions we show that constant dilaton solutions can support non trivial extended symmetry algebras, including the one found by Compère, Song and Strominger [1], in agreement with the results of Castro and Song [2]. Finally, we demonstrate that any solution of this specific dilaton gravity model can be uplifted to a family of asymptotically AdS2 × S2 or conformally AdS2 × S2 solutions of the STU model in four dimensions, including non extremal black holes. The four dimensional solutions obtained by uplifting the running dilaton solutions coincide with the so called `subtracted geometries', while those obtained
Higher Education Value Added Using Multiple Outcomes
ERIC Educational Resources Information Center
Milla, Joniada; Martín, Ernesto San; Van Bellegem, Sébastien
2016-01-01
In this article we develop a methodology for the joint value added analysis of multiple outcomes that takes into account the inherent correlation between them. This is especially crucial in the analysis of higher education institutions. We use a unique Colombian database on universities, which contains scores in five domains tested in a…
Some characterizations of unique extremality
NASA Astrophysics Data System (ADS)
Yao, Guowu
2008-07-01
In this paper, it is shown that some necessary conditions for unique extremality obtained by Zhu and Chen are also sufficient, and that some of their sufficient conditions actually imply that the uniquely extremal Beltrami differentials have constant modulus. In addition, some local properties of uniquely extremal Beltrami differentials are given.
Functional analysis of variance for association studies.
Vsevolozhskaya, Olga A; Zaykin, Dmitri V; Greenwood, Mark C; Wei, Changshuai; Lu, Qing
2014-01-01
While progress has been made in identifying common genetic variants associated with human diseases, for most common complex diseases the identified genetic variants account for only a small proportion of heritability. Challenges remain in finding additional unknown genetic variants predisposing to complex diseases. With the advance of next-generation sequencing technologies, sequencing studies have become commonplace in genetic research. The ongoing exome-sequencing and whole-genome-sequencing studies generate a massive amount of sequencing variants and allow researchers to comprehensively investigate their role in human diseases. The discovery of new disease-associated variants can be enhanced by utilizing powerful and computationally efficient statistical methods. In this paper, we propose a functional analysis of variance (FANOVA) method for testing an association of sequence variants in a genomic region with a qualitative trait. FANOVA has a number of advantages: (1) it tests for a joint effect of gene variants, including both common and rare; (2) it fully utilizes linkage disequilibrium and genetic position information; and (3) it allows for either protective or risk-increasing causal variants. Through simulations, we show that FANOVA outperforms two popular methods, SKAT and a previously proposed method based on functional linear models (FLM), especially if the sample size of a study is small and/or sequence variants have low to moderate effects. We conduct an empirical study by applying the three methods (FANOVA, SKAT and FLM) to sequencing data from the Dallas Heart Study. While SKAT and FLM detected ANGPTL4 and ANGPTL3, respectively, as associated with obesity, FANOVA was able to identify both genes as associated with obesity.
Evaluating Value-Added Models for Teacher Accountability. Monograph
ERIC Educational Resources Information Center
McCaffrey, Daniel F.; Lockwood, J. R.; Koretz, Daniel M.; Hamilton, Laura S.
2003-01-01
Value-added modeling (VAM) to estimate school and teacher effects is currently of considerable interest to researchers and policymakers. Recent reports suggest that VAM demonstrates the importance of teachers as a source of variance in student outcomes. Policymakers see VAM as a possible component of education reform through improved teacher…
Estimating the encounter rate variance in distance sampling
Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.
2009-01-01
The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
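A standard design-based estimator of the encounter rate variance of the kind discussed here treats the K lines as a random sample, weighting each line by its length. A minimal sketch (the exact estimator labels in the paper may differ):

```python
import numpy as np

def encounter_rate_variance(counts, lengths):
    """Design-based estimator of var(n/L) for line transect sampling.

    counts: detections n_k on each of K lines
    lengths: line lengths l_k
    Treats the K lines as a simple random sample of lines.
    """
    counts = np.asarray(counts, float)
    lengths = np.asarray(lengths, float)
    K = len(counts)
    L = lengths.sum()
    rate = counts.sum() / L  # overall encounter rate n/L
    return K / (L ** 2 * (K - 1)) * np.sum(lengths ** 2 * (counts / lengths - rate) ** 2)
```

With equal effort and identical counts per line the estimate is zero; spreading the same total count unevenly across lines inflates it.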
Event Segmentation Ability Uniquely Predicts Event Memory
Sargent, Jesse Q.; Zacks, Jeffrey M.; Hambrick, David Z.; Zacks, Rose T.; Kurby, Christopher A.; Bailey, Heather R.; Eisenberg, Michelle L.; Beck, Taylor M.
2013-01-01
Memory for everyday events plays a central role in tasks of daily living, autobiographical memory, and planning. Event memory depends in part on segmenting ongoing activity into meaningful units. This study examined the relationship between event segmentation and memory in a lifespan sample to answer the following question: Is the ability to segment activity into meaningful events a unique predictor of subsequent memory, or is the relationship between event perception and memory accounted for by general cognitive abilities? Two hundred and eight adults ranging from 20 to 79 years old segmented movies of everyday events and attempted to remember the events afterwards. They also completed psychometric ability tests and tests measuring script knowledge for everyday events. Event segmentation and script knowledge both explained unique variance in event memory above and beyond the psychometric measures, and did so as strongly in older as in younger adults. These results suggest that event segmentation is a basic cognitive mechanism, important for memory across the lifespan. PMID:23942350
Multiperiod Mean-Variance Portfolio Optimization via Market Cloning
Ankirchner, Stefan; Dermoune, Azzouz
2011-08-15
The problem of finding the mean variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean variance problem.
Network Structure and Biased Variance Estimation in Respondent Driven Sampling.
Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J
2015-01-01
This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927
RR-Interval variance of electrocardiogram for atrial fibrillation detection
NASA Astrophysics Data System (ADS)
Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.
2016-11-01
Atrial fibrillation is a serious heart problem originating in the upper chamber of the heart. The common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, known as the RR interval. The irregularity can be represented by the variance, or spread, of the RR intervals. This article presents a system to detect atrial fibrillation using such variances. Using clinical data from patients with atrial fibrillation attacks, it is shown that the variance of the electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique and the variances of RR intervals, we find good atrial fibrillation detection performance.
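The detection idea described here reduces to thresholding the spread of RR intervals within a window. A minimal sketch; the window length and threshold below are illustrative placeholders, not the paper's clinically tuned values:

```python
import numpy as np

def rr_variance_af_flags(rr_intervals, window=10, threshold=0.02):
    """Flag windows of an RR-interval series (in seconds) whose variance
    exceeds a threshold, a simple irregularity criterion for AF screening.

    window and threshold are illustrative, not clinically validated.
    """
    rr = np.asarray(rr_intervals, float)
    flags = []
    for i in range(0, len(rr) - window + 1, window):
        flags.append(rr[i:i + window].var() > threshold)
    return flags
```

A regular rhythm (near-constant RR) produces no flags, while an alternating short/long pattern is flagged immediately.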
Simulations of the Hadamard Variance: Probability Distributions and Confidence Intervals.
Ashby, Neil; Patla, Bijunath
2016-04-01
Power-law noise in clocks and oscillators can be simulated by Fourier transforming a modified spectrum of white phase noise. This approach has been applied successfully to simulation of the Allan variance and the modified Allan variance in both overlapping and nonoverlapping forms. When significant frequency drift is present in an oscillator, at large sampling times the Allan variance overestimates the intrinsic noise, while the Hadamard variance is insensitive to frequency drift. The simulation method is extended in this paper to predict the Hadamard variance for the common types of power-law noise. Symmetric real matrices are introduced whose traces (the sums of their eigenvalues) are equal to the Hadamard variances, in overlapping or nonoverlapping forms, as well as for the corresponding forms of the modified Hadamard variance. We show that the standard relations between spectral densities and Hadamard variance are obtained with this method. The matrix eigenvalues determine probability distributions for observing a variance at an arbitrary value of the sampling interval τ, and hence for estimating confidence in the measurements.
NASA Astrophysics Data System (ADS)
McCaffrey, Katherine; Bianco, Laura; Johnston, Paul; Wilczak, James M.
2017-03-01
Observations of turbulence in the planetary boundary layer are critical for developing and evaluating boundary layer parameterizations in mesoscale numerical weather prediction models. These observations, however, are expensive and rarely profile the entire boundary layer. Using optimized configurations for 449 and 915 MHz wind profiling radars during the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA), improvements have been made to the historical methods of measuring vertical velocity variance through the time series of vertical velocity, as well as the Doppler spectral width. Using six heights of sonic anemometers mounted on a 300 m tower, correlations of up to R2 = 0.74 are seen in measurements of the large-scale variances from the radar time series and R2 = 0.79 in measurements of small-scale variance from radar spectral widths. The total variance, measured as the sum of the small and large scales, agrees well with sonic anemometers, with R2 = 0.79. Correlation is higher in daytime convective boundary layers than nighttime stable conditions when turbulence levels are smaller. With the good agreement with the in situ measurements, highly resolved profiles up to 2 km can be accurately observed from the 449 MHz radar and 1 km from the 915 MHz radar. This optimized configuration will provide unique observations for the verification and improvement to boundary layer parameterizations in mesoscale models.
Secure ADS-B authentication system and method
NASA Technical Reports Server (NTRS)
Viggiano, Marc J (Inventor); Valovage, Edward M (Inventor); Samuelson, Kenneth B (Inventor); Hall, Dana L (Inventor)
2010-01-01
A secure system for authenticating the identity of ADS-B systems, including: an authenticator, including a unique id generator and a transmitter transmitting the unique id to one or more ADS-B transmitters; one or more ADS-B transmitters, including a receiver receiving the unique id, one or more secure processing stages merging the unique id with the ADS-B transmitter's identification, data and secret key and generating a secure code identification and a transmitter transmitting a response containing the secure code and ADS-B transmitter's data to the authenticator; the authenticator including means for independently determining each ADS-B transmitter's secret key, a receiver receiving each ADS-B transmitter's response, one or more secure processing stages merging the unique id, ADS-B transmitter's identification and data and generating a secure code, and comparison processing comparing the authenticator-generated secure code and the ADS-B transmitter-generated secure code and providing an authentication signal based on the comparison result.
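The patent claim leaves the "secure processing stages" unspecified. A keyed hash such as HMAC-SHA256 is one conventional way to realize the challenge/merge/compare flow; the construction below is an assumed stand-in for illustration, not the patented method, and all names are hypothetical:

```python
import hmac
import hashlib

def secure_code(unique_id: bytes, transmitter_id: bytes,
                data: bytes, secret_key: bytes) -> bytes:
    """Merge the authenticator's challenge (unique_id) with the transmitter's
    identification and data under a shared secret key.

    HMAC-SHA256 is an assumed stand-in for the unspecified secure stages.
    """
    return hmac.new(secret_key, unique_id + transmitter_id + data,
                    hashlib.sha256).digest()

def authenticate(unique_id: bytes, transmitter_id: bytes, data: bytes,
                 reported_code: bytes, secret_key: bytes) -> bool:
    """Authenticator side: independently recompute the code and compare
    in constant time."""
    expected = secure_code(unique_id, transmitter_id, data, secret_key)
    return hmac.compare_digest(expected, reported_code)
```

A transmitter holding the wrong key, or replaying a response against a fresh challenge, fails the comparison.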
Uniqueness Theorem for Black Objects
Rogatko, Marek
2010-06-23
We review the current status of uniqueness theorems for black objects in higher-dimensional spacetime. We first consider a static charged asymptotically flat spacelike hypersurface with compact interior, with both degenerate and non-degenerate components of the event horizon, in n-dimensional spacetime. We then give some remarks concerning partial results in proving uniqueness of stationary axisymmetric multidimensional solutions, and on winding numbers, which can uniquely characterize the topology and symmetry structure of black objects.
Marini, Federico; de Beer, Dalene; Joubert, Elizabeth; Walczak, Beata
2015-07-31
Direct application of popular approaches, e.g., Principal Component Analysis (PCA) or Partial Least Squares (PLS) to chromatographic data originating from a well-designed experimental study including more than one factor is not recommended. In the case of a well-designed experiment involving two or more factors (crossed or nested), data are usually decomposed into the contributions associated with the studied factors (and with their interactions), and the individual effect matrices are then analyzed using, e.g., PCA, as in the case of ASCA (analysis of variance combined with simultaneous component analysis). As an alternative to the ASCA method, we propose the application of PLS followed by target projection (TP), which allows a one-factor representation of the model for each column in the design dummy matrix. PLS application follows after proper deflation of the experimental matrix, i.e., to what are called the residuals under the reduced ANOVA model. The proposed approach (ANOVA-TP) is well suited for the study of designed chromatographic data of complex samples. It allows testing of statistical significance of the studied effects, 'biomarker' identification, and enables straightforward visualization and accurate estimation of between- and within-class variance. The proposed approach has been successfully applied to a case study aimed at evaluating the effect of pasteurization on the concentrations of various phenolic constituents of rooibos tea of different quality grades and its outcomes have been compared to those of ASCA.
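The ASCA baseline that ANOVA-TP is compared against can be sketched for a one-factor design: decompose the data matrix into a grand mean, a factor effect matrix, and residuals, then apply PCA (via SVD) to the effect matrix. This is a generic illustration of the ASCA idea, not the authors' implementation:

```python
import numpy as np

def asca_effect_scores(X, factor, n_components=2):
    """ANOVA-simultaneous component analysis sketch for one factor:
    X = grand mean + factor effect + residuals, then PCA of the effect matrix.

    X: (n_samples, n_variables) data matrix
    factor: length-n array of factor levels
    """
    X = np.asarray(X, float)
    grand = X.mean(axis=0)
    effect = np.zeros_like(X)
    for level in np.unique(factor):
        mask = factor == level
        effect[mask] = X[mask].mean(axis=0) - grand  # level mean deviation
    residuals = X - grand - effect
    U, s, Vt = np.linalg.svd(effect, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]  # sample scores
    return scores, Vt[:n_components], residuals
```

Because the effect matrix has identical rows within a level, samples from the same level collapse onto the same score, which is exactly what makes the between-level separation easy to visualize.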
Harrison, Christopher; Charles, Janice; Britt, Helena
2008-06-01
The BEACH program (Bettering the Evaluation and Care of Health) shows that management of attention deficit (hyperactivity) disorder (AD(H)D) was rare in general practice, occurring only six times per 1,000 encounters with children aged 5-17 years, between April 2000 and December 2007. This suggests that general practitioners manage AD(H)D about 46,000 times for this age group nationally each year.
NASA Technical Reports Server (NTRS)
Clauson, J.; Heuser, J.
1981-01-01
The Applications Data Service (ADS) is a system based on an electronic data communications network which will permit scientists to share the data stored in data bases at universities and at government and private installations. It is designed to allow users to readily locate and access high quality, timely data from multiple sources. The ADS Pilot program objectives and the current plans for accomplishing those objectives are described.
AdS and Lifshitz scalar hairy black holes in Gauss-Bonnet gravity
NASA Astrophysics Data System (ADS)
Chen, Bin; Fan, Zhong-Ying; Zhu, Lu-Yao
2016-09-01
We consider Gauss-Bonnet (GB) gravity in general dimensions, which is nonminimally coupled to a scalar field. By choosing a scalar potential of the type V(ϕ) = 2Λ_0 + (1/2)m^2ϕ^2 + γ_4ϕ^4, we first obtain large classes of scalar hairy black holes with spherical/hyperbolic/planar topologies that are asymptotic to locally anti-de Sitter (AdS) space-times. We derive the first law of black hole thermodynamics using Wald formalism. In particular, for one class of the solutions, the scalar hair forms a thermodynamic conjugate with the graviton and nontrivially contributes to the thermodynamical first law. We observe that except for one class of the planar black holes, all these solutions are constructed at the critical point of GB gravity where there exist unique AdS vacua. In fact, a Lifshitz vacuum is also allowed at the critical point. We then construct many new classes of neutral and charged Lifshitz black hole solutions for an either minimally or nonminimally coupled scalar and derive the thermodynamical first laws. We also obtain new classes of exact dynamical AdS and Lifshitz solutions which describe radiating white holes. The solutions eventually become AdS or Lifshitz vacua at late retarded times. However, for one class of the solutions, the final state is an AdS space-time with a globally naked singularity.
29 CFR 1905.5 - Effect of variances.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 5 2010-07-01 2010-07-01 false Effect of variances. 1905.5 Section 1905.5 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RULES OF PRACTICE FOR VARIANCES, LIMITATIONS, VARIATIONS, TOLERANCES, AND EXEMPTIONS UNDER THE WILLIAMS-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 36 Parks, Forests, and Public Property 1 2013-07-01 2013-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 36 Parks, Forests, and Public Property 1 2012-07-01 2012-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 36 Parks, Forests, and Public Property 1 2014-07-01 2014-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
Relating the Hadamard Variance to MCS Kalman Filter Clock Estimation
NASA Technical Reports Server (NTRS)
Hutsell, Steven T.
1996-01-01
The Global Positioning System (GPS) Master Control Station (MCS) currently makes significant use of the Allan Variance. This two-sample variance equation has proven excellent as a handy, understandable tool, both for time domain analysis of GPS cesium frequency standards, and for fine tuning the MCS's state estimation of these atomic clocks. The Allan Variance does not explicitly converge for the noise types of alpha less than or equal to minus 3 and can be greatly affected by frequency drift. Because GPS rubidium frequency standards exhibit non-trivial aging and aging noise characteristics, the basic Allan Variance analysis must be augmented in order to (a) compensate for a dynamic frequency drift, and (b) characterize two additional noise types, specifically alpha = minus 3, and alpha = minus 4. As the GPS program progresses, we will utilize a larger percentage of rubidium frequency standards than ever before. Hence, GPS rubidium clock characterization will require more attention than ever before. The three-sample variance, commonly referred to as a renormalized Hadamard Variance, is unaffected by linear frequency drift, converges for alpha greater than minus 5, and thus has utility for modeling noise in GPS rubidium frequency standards. This paper demonstrates the potential of Hadamard Variance analysis in GPS operations, and presents an equation that relates the Hadamard Variance to the MCS's Kalman filter process noises.
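The drift insensitivity discussed here is easy to see numerically: at unit averaging time, the Allan variance is built from first differences of fractional-frequency data and the Hadamard variance from second differences, so a linear frequency drift survives the former but cancels in the latter. A minimal non-overlapping sketch:

```python
import numpy as np

def allan_variance(y):
    """Two-sample (Allan) variance of fractional-frequency data y,
    non-overlapping form at the basic averaging time."""
    d = np.diff(y)            # first differences: drift leaves a constant
    return 0.5 * np.mean(d ** 2)

def hadamard_variance(y):
    """Three-sample (Hadamard) variance of fractional-frequency data y;
    second differences cancel any linear frequency drift."""
    d2 = np.diff(y, n=2)      # second differences: linear drift cancels
    return np.mean(d2 ** 2) / 6.0
```

For a pure linear drift y_i = d*i, the Allan variance equals d^2/2 while the Hadamard variance is zero; adding the same drift to white frequency noise inflates the Allan variance but leaves the Hadamard variance essentially unchanged.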
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 24 2013-07-01 2013-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....
Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances
ERIC Educational Resources Information Center
Jan, Show-Li; Shieh, Gwowen
2014-01-01
The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…
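The standard remedy when the homogeneous-variance assumption is untenable, as this abstract notes, is a heteroscedastic procedure such as Welch's ANOVA. A minimal sketch of the Welch F statistic (the statistic and degrees of freedom only; the p-value comes from an F distribution with the returned degrees of freedom):

```python
import numpy as np

def welch_anova(*groups):
    """Welch's heteroscedastic one-way ANOVA: compares k group means
    without assuming equal variances. Returns (F, df1, df2)."""
    k = len(groups)
    n = np.array([len(g) for g in groups], float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                                   # precision weights
    grand = np.sum(w * m) / np.sum(w)           # weighted grand mean
    a = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    F = np.sum(w * (m - grand) ** 2) / (k - 1)
    F /= 1 + 2 * (k - 2) * a / (k ** 2 - 1)
    df2 = (k ** 2 - 1) / (3 * a)
    return F, k - 1, df2
```

Unlike the classical F test, groups with small variances receive larger weights, so a high-variance group cannot mask a real mean difference elsewhere.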
76 FR 78698 - Proposed Revocation of Permanent Variances
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-19
... Occupational Safety and Health Administration Proposed Revocation of Permanent Variances AGENCY: Occupational... short and plain statement detailing (1) how the proposed revocation would affect the requesting party..., subpart L. The following table provides information about the variances proposed for revocation by...
Gender Variance and Educational Psychology: Implications for Practice
ERIC Educational Resources Information Center
Yavuz, Carrie
2016-01-01
The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…
42 CFR 456.522 - Content of request for variance.
Code of Federal Regulations, 2011 CFR
2011-10-01
... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time..., mental hospital, and ICF located within a 50-mile radius of the facility; (e) The distance and...
A Study of Variance Estimation Methods. Working Paper Series.
ERIC Educational Resources Information Center
Zhang, Fan; Weng, Stanley; Salvucci, Sameena; Hu, Ming-xiu
This working paper contains reports of five studies of variance estimation methods. The first, An Empirical Study of Poststratified Estimator, by Fan Zhang uses data from the National Household Education Survey to illustrate use of poststratified estimation. The second paper, BRR Variance Estimation Using BPLX Hadamard Procedure, by Stanley Weng…
Genotypic-specific variance in Caenorhabditis elegans lifetime fecundity
Diaz, S Anaid; Viney, Mark
2014-01-01
Organisms live in heterogeneous environments, so strategies that maximize fitness in such environments will evolve. Variation in traits is important because it is the raw material on which natural selection acts during evolution. Phenotypic variation is usually thought to be due to genetic variation and/or environmentally induced effects. Therefore, genetically identical individuals in a constant environment should have invariant traits. Clearly, genetically identical individuals do differ phenotypically, usually thought to be due to stochastic processes. It is now becoming clear, especially from studies of unicellular species, that phenotypic variance among genetically identical individuals in a constant environment can be genetically controlled and that therefore, in principle, this can be subject to selection. However, there has been little investigation of these phenomena in multicellular species. Here, we have studied the mean lifetime fecundity (thus a trait likely to be relevant to reproductive success), and variance in lifetime fecundity, in recent wild isolates of the model nematode Caenorhabditis elegans. We found that these genotypes differed in their variance in lifetime fecundity: some had high variance in fecundity, others very low variance. We find that this variance in lifetime fecundity was negatively related to the mean lifetime fecundity of the lines, and that the variance of the lines was positively correlated between environments. We suggest that the variance in lifetime fecundity may be a bet-hedging strategy used by this species. PMID:25360248
Conceptual Complexity and the Bias/Variance Tradeoff
ERIC Educational Resources Information Center
Briscoe, Erica; Feldman, Jacob
2011-01-01
In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the "bias/variance tradeoff". The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any…
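The bias/variance tradeoff invoked here can be made concrete by refitting models of different flexibility on many noisy training samples and decomposing the error at a test point. The target function, sample sizes, and degrees below are illustrative choices, not taken from the paper:

```python
import numpy as np

def bias_variance_at(x0, degree, n_trials=300, n_train=30, noise=0.3, seed=0):
    """Estimate squared bias and variance of a degree-d polynomial fit at x0
    for the target f(x) = sin(2*pi*x), by refitting on many noisy samples."""
    rng = np.random.default_rng(seed)
    f = lambda x: np.sin(2 * np.pi * x)
    preds = []
    for _ in range(n_trials):
        x = rng.uniform(0, 1, n_train)
        y = f(x) + rng.normal(0, noise, n_train)
        coef = np.polyfit(x, y, degree)      # fit one noisy training sample
        preds.append(np.polyval(coef, x0))   # prediction at the test point
    preds = np.array(preds)
    bias_sq = (preds.mean() - f(x0)) ** 2    # systematic error of the class
    variance = preds.var()                   # sensitivity to the sample
    return bias_sq, variance
```

A rigid model (degree 1) shows large bias and small variance; a flexible model (degree 12 on 30 points) shows the reverse, which is the sliding scale the paper maps onto exemplar- versus prototype-based concept models.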
Variances and Covariances of Kendall's Tau and Their Estimation.
ERIC Educational Resources Information Center
Cliff, Norman; Charlin, Ventura
1991-01-01
Variance formulas of H. E. Daniels and M. G. Kendall (1947) are generalized to allow for the presence of ties and variance of the sample tau correlation. Applications of these generalized formulas are discussed and illustrated using data from a 1965 study of contraceptive use in 15 developing countries. (SLD)
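In the simplest setting the generalized formulas reduce to a familiar closed form: with no ties, the variance of tau under independence is 2(2n+5)/(9n(n-1)), which feeds the usual normal-approximation test. A sketch of that special case (the tie-corrected formulas in the article are more involved):

```python
import math

def tau_null_variance(n):
    """Variance of Kendall's tau for sample size n under independence,
    assuming no ties; the special case the Daniels-Kendall formulas
    generalize."""
    return 2 * (2 * n + 5) / (9 * n * (n - 1))

def tau_z_score(tau, n):
    """Normal-approximation z statistic for an observed tau (no ties)."""
    return tau / math.sqrt(tau_null_variance(n))
```

For example, tau = 0.5 with n = 10 gives z just above 2, borderline significant at the 5% level, which is why the tie corrections matter for small samples.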
Exploring Unique Roles for Psychologists
ERIC Educational Resources Information Center
Ahmed, Mohiuddin; Boisvert, Charles M.
2005-01-01
This paper presents comments on "Psychological Treatments" by D. H. Barlow. Barlow highlighted unique roles that psychologists can play in mental health service delivery by providing psychological treatments--treatments that psychologists would be uniquely qualified to design and deliver. In support of Barlow's position, the authors draw from…
ERIC Educational Resources Information Center
Shipman, Barbara A.
2013-01-01
This article analyzes four questions on the meaning of uniqueness that have contrasting answers in common language versus mathematical language. The investigations stem from a scenario in which students interpreted uniqueness according to a definition from standard English, that is, different from the mathematical meaning, in defining an injective…
Confabulators mistake multiplicity for uniqueness.
Serra, Mara; La Corte, Valentina; Migliaccio, Raffaella; Brazzarola, Marta; Zannoni, Ilaria; Pradat-Diehl, Pascale; Dalla Barba, Gianfranco
2014-09-01
Some patients with organic amnesia show confabulation, the production of statements and actions unintentionally incongruous to the subject's history, present and future situation. It has been shown that confabulators tend to report as unique and specific personal memories, events or actions that belong to their habits and routines (Habits Confabulations). We consider that habits and routines can be characterized by multiplicity, as opposed to uniqueness. This paper examines this phenomenon whereby confabulators mistake multiplicity, i.e., repeated events, for uniqueness, i.e., events that occurred in a unique and specific temporo-spatial context. In order to measure the ability to discriminate unique from repeated events we used four runs of a recognition memory task, in which some items were seen only once at study, whereas others were seen four times. Confabulators, but not non-confabulating amnesiacs (NCA), considered repeated items as unique, thus mistaking multiplicity for uniqueness. This phenomenon has been observed clinically but our study is the first to demonstrate it experimentally. We suggest that a crucial mechanism involved in the production of confabulations is thus the confusion between unique and repeated events.
ERIC Educational Resources Information Center
Castellanos-Ryan, Natalie; Conrod, Patricia J.
2011-01-01
Externalising behaviours such as substance misuse (SM) and conduct disorder (CD) symptoms frequently co-occur in adolescence. While disinhibited personality traits have been consistently linked to externalising behaviours, there is evidence that these traits may relate differentially to SM and CD. The current study aimed to assess whether this was the…
Unique associations between anxiety, depression and motives for approach and avoidance goal pursuit.
Winch, Alison; Moberly, Nicholas J; Dickson, Joanne M
2015-01-01
This study investigated the shared and distinct associations between depressive and anxious symptoms and motives for pursuing personal goals. One hundred and thirty-six undergraduates generated approach and avoidance goals and rated each on intrinsic, identified, introjected and external motives. Anxious and depressive symptoms showed significant unique associations with distinct motives. Specifically, depressive symptoms predicted significant unique variance in intrinsic motivation for approach goals (but not avoidance goals), whereas anxious symptoms predicted significant unique variance in introjected regulation for approach and avoidance goals. Some of these findings were moderated by gender. The findings broadly support the notion that depression is uniquely characterised by reduced enjoyment of approach goal pursuit whereas anxiety is uniquely characterised by pursuit of goals in order to avoid negative outcomes. We suggest that these findings are compatible with regulatory focus theory and suggest that motives for goal pursuit are important in understanding the relation between goals and specific mood disorder symptoms.
Code of Federal Regulations, 2011 CFR
2011-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...
Code of Federal Regulations, 2013 CFR
2013-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...
Code of Federal Regulations, 2014 CFR
2014-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Variances from classification as a solid waste, for variances to be classified as boilers, or applications for non-waste determinations. (a) The applicant must apply to...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Variances from classification as a solid waste, for variances to be classified as boilers, or applications for non-waste determinations. (a) The applicant must apply to...
Comparing estimates of genetic variance across different relationship models.
Legarra, Andres
2016-02-01
Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities".
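The Dk rescaling can be sketched in a few lines (a minimal illustration under our own naming; `dk_statistic` and `rescaled_genetic_variance` are not from the paper):

```python
import numpy as np

def dk_statistic(K):
    """Average self-relationship minus average (self- and across-) relationship.

    K is a square relationship matrix (pedigree, genomic, identity-by-state,
    or kernel based). Multiplying an estimated genetic variance by Dk refers
    it to the base defined by the individuals in K.
    """
    K = np.asarray(K, dtype=float)
    return np.mean(np.diag(K)) - np.mean(K)

def rescaled_genetic_variance(sigma2_u, K):
    """Expected genetic variance of the reference population: sigma2_u * Dk."""
    return sigma2_u * dk_statistic(K)
```

For an identity-like relationship matrix of n unrelated individuals, Dk is 1 - 1/n, i.e. close to 1, as the abstract notes for most typical relationship models.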
Filtered kriging for spatial data with heterogeneous measurement error variances.
Christensen, William F
2011-09-01
When predicting values for the measurement-error-free component of an observed spatial process, it is generally assumed that the process has a common measurement error variance. However, it is often the case that each measurement in a spatial data set has a known, site-specific measurement error variance, rendering the observed process nonstationary. We present a simple approach for estimating the semivariogram of the unobservable measurement-error-free process using a bias adjustment of the classical semivariogram formula. We then develop a new kriging predictor that filters the measurement errors. For scenarios where each site's measurement error variance is a function of the process of interest, we recommend an approach that also uses a variance-stabilizing transformation. The properties of the heterogeneous variance measurement-error-filtered kriging (HFK) predictor and variance-stabilized HFK predictor, and the improvement of these approaches over standard measurement-error-filtered kriging are demonstrated using simulation. The approach is illustrated with climate model output from the Hudson Strait area in northern Canada. In the illustration, locations with high or low measurement error variances are appropriately down- or upweighted in the prediction of the underlying process, yielding a realistically smooth picture of the phenomenon of interest.
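One plausible form of the bias adjustment described above (our sketch, not the paper's exact estimator) subtracts each pair's known measurement error variances from the squared increment before averaging, since E[(Z_i - Z_j)^2] = 2*gamma_Y(h) + sigma2_i + sigma2_j:

```python
import numpy as np

def naive_semivariogram(z, pairs):
    """Classical empirical semivariogram over a set of site pairs (one lag bin)."""
    return np.mean([(z[i] - z[j]) ** 2 for i, j in pairs]) / 2.0

def bias_adjusted_semivariogram(z, sigma2, pairs):
    """Semivariogram estimate for the latent measurement-error-free process.

    sigma2[i] is the known, site-specific measurement error variance at site i.
    The result is floored at zero since a semivariogram cannot be negative.
    """
    terms = [(z[i] - z[j]) ** 2 - sigma2[i] - sigma2[j] for i, j in pairs]
    return max(np.mean(terms) / 2.0, 0.0)
```

With nonzero error variances the adjusted estimate is strictly below the naive one, which is the direction of the bias the abstract describes.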
Assured Information Sharing for Ad-Hoc Collaboration
ERIC Educational Resources Information Center
Jin, Jing
2009-01-01
Collaborative information sharing tends to be highly dynamic and often ad hoc among organizations. The dynamic nature and sharing patterns of ad-hoc collaboration impose a need for a comprehensive and flexible approach to reflecting and coping with the unique access control requirements associated with the environment. This dissertation…
Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation
NASA Technical Reports Server (NTRS)
Wu, Dong L.; Eckermann, Stephen D.
2008-01-01
The gravity wave (GW)-resolving capabilities of 118-GHz saturated thermal radiances acquired throughout the stratosphere by the Microwave Limb Sounder (MLS) on the Aura satellite are investigated and initial results presented. Because the saturated (optically thick) radiances resolve GW perturbations from a given altitude at different horizontal locations, variances are evaluated at 12 pressure altitudes between 21 and 51 km using the 40 saturated radiances found at the bottom of each limb scan. Forward modeling simulations show that these variances are controlled mostly by GWs with vertical wavelengths λz ≳ 5 km and horizontal along-track wavelengths λy ≈ 100-200 km. The tilted cigar-shaped three-dimensional weighting functions yield highly selective responses to GWs of high intrinsic frequency that propagate toward the instrument. The latter property is used to infer the net meridional component of GW propagation by differencing the variances acquired from ascending (A) and descending (D) orbits. Because of improved vertical resolution and sensitivity, Aura MLS GW variances are 5-8 times larger than those from the Upper Atmosphere Research Satellite (UARS) MLS. Like UARS MLS variances, monthly-mean Aura MLS variances in January and July 2005 are enhanced when local background wind speeds are large, due largely to GW visibility effects. Zonal asymmetries in variance maps reveal enhanced GW activity at high latitudes due to forcing by flow over major mountain ranges and at tropical and subtropical latitudes due to enhanced deep convective generation as inferred from contemporaneous MLS cloud-ice data. At 21-28-km altitude (heights not measured by the UARS MLS), GW variance in the tropics is systematically enhanced and shows clear variations with the phase of the quasi-biennial oscillation, in general agreement with GW temperature variances derived from radiosonde, rocketsonde, and limb-scan vertical profiles.
ERIC Educational Resources Information Center
Richards, Andrew
2015-01-01
Two quantitative measures of school performance are currently used, the average points score (APS) at Key Stage 2 and value-added (VA), which measures the rate of academic improvement between Key Stage 1 and 2. These figures are used by parents and the Office for Standards in Education to make judgements and comparisons. However, simple…
Diabetes: Unique to Older Adults
... This section provides information ...
NASA Astrophysics Data System (ADS)
Demleitner, M.; Eichhorn, G.; Grant, C. S.; Accomazzi, A.; Murray, S. S.; Kurtz, M. J.
1999-05-01
The bibliographic databases maintained by the NASA Astrophysics Data System are updated approximately biweekly with records gathered from over 125 sources all over the world. Data are either sent to us electronically, retrieved by our staff via semi-automated procedures, or entered in our databases through supervised OCR procedures. Perl scripts are run on the data to convert them from their incoming format to our standard format so that they can be added to the master database at SAO. Once new data have been added, separate index files are created for authors, objects, title words, and text words, allowing these fields to be searched for individually or in combination with each other. During the indexing procedure, discipline-specific knowledge is taken into account through the use of rule-based procedures performing string normalization, context-sensitive word translation, and synonym and stop word replacement. Once the master text and index files have been updated at SAO, an automated procedure mirrors the changes in the database to the ADS mirror site via a secure network connection. The use of a public domain software tool called rsync allows incremental updating of the database files, with significant savings in the amount of data being transferred. In the past year, the ADS Abstract Service databases have grown by approximately 30%, including 50% growth in Physics, 25% growth in Astronomy and 10% growth in the Instrumentation datasets. The ADS Abstract Service now contains over 1.4 million abstracts (475K in Astronomy, 430K in Physics, 510K in Instrumentation, and 3K in Preprints), 175,000 journal abstracts, and 115,000 full text articles. In addition, we provide links to over 40,000 electronic HTML articles at other sites, 20,000 PDF articles, and 10,000 postscript articles, as well as many links to other external data sources.
Hickey, John M; Veerkamp, Roel F; Calus, Mario P L; Mulder, Han A; Thompson, Robin
2009-02-09
Calculation of the exact prediction error variance covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values and the control of variance of response to selection. Alternatively Monte Carlo sampling can be used to calculate approximations of the prediction error variance, which converge to the true values if enough samples are used. However, in practical situations the number of samples, which are computationally feasible, is limited. The objective of this study was to compare the convergence rate of different formulations of the prediction error variance calculated using Monte Carlo sampling. Four of these formulations were published, four were corresponding alternative versions, and two were derived as part of this study. The different formulations had different convergence rates and these were shown to depend on the number of samples and on the level of prediction error variance. Four formulations were competitive and these made use of information on either the variance of the estimated breeding value and on the variance of the true breeding value minus the estimated breeding value or on the covariance between the true and estimated breeding values.
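The competing Monte Carlo formulations can be illustrated on a deliberately simple one-record model (a toy sketch, not the mixed-model setting of the study): with y = u + e and the shrinkage predictor û = h²y, the formulations based on Var(u - û), on Var(u) - Var(û), and on Var(u) - Cov(u, û) all converge to the same prediction error variance.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy single-record model: y = u + e, u ~ N(0, s2u), e ~ N(0, s2e).
# The BLUP of u is u_hat = h2 * y with h2 = s2u / (s2u + s2e), and the
# exact prediction error variance is PEV = (1 - h2) * s2u.
s2u, s2e = 1.0, 3.0
h2 = s2u / (s2u + s2e)

n = 200_000
u = rng.normal(0.0, np.sqrt(s2u), n)
e = rng.normal(0.0, np.sqrt(s2e), n)
u_hat = h2 * (u + e)

# Three Monte Carlo formulations that converge to the same quantity:
pev_direct = np.var(u - u_hat)                  # Var(u - u_hat)
pev_from_var = s2u - np.var(u_hat)              # Var(u) - Var(u_hat)
pev_from_cov = s2u - np.cov(u, u_hat)[0, 1]     # Var(u) - Cov(u, u_hat)

exact = (1.0 - h2) * s2u
```

In this toy setting all three estimates agree with the exact value; the study's point is that their convergence rates differ when the number of samples is small.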
Friede, Tim; Kieser, Meinhard
2013-01-01
The internal pilot study design allows for modifying the sample size during an ongoing study based on a blinded estimate of the variance, thus maintaining the trial integrity. Various blinded sample size re-estimation procedures have been proposed in the literature. We compare the blinded sample size re-estimation procedures based on the one-sample variance of the pooled data with a blinded procedure using the randomization block information with respect to bias and variance of the variance estimators, and the distribution of the resulting sample sizes, power, and actual type I error rate. For reference, sample size re-estimation based on the unblinded variance is also included in the comparison. It is shown that using an unbiased variance estimator (such as the one using the randomization block information) for sample size re-estimation does not guarantee that the desired power is achieved. Moreover, in situations that are common in clinical trials, the variance estimator that employs the randomization block length shows a higher variability than the simple one-sample estimator, and so, in turn, does the sample size resulting from the related re-estimation procedure. This higher variability can lead to a lower power as was demonstrated in the setting of noninferiority trials. In summary, the one-sample estimator obtained from the pooled data is extremely simple to apply, shows good performance, and is therefore recommended for application.
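The upward bias of the blinded one-sample variance estimator can be demonstrated with a small simulation (an illustrative sketch with arbitrary parameters, not the procedures compared in the paper): pooling the two arms inflates the variance estimate by roughly delta²/4 when a treatment effect delta is present.

```python
import numpy as np

rng = np.random.default_rng(7)

sigma2, delta = 1.0, 1.0        # true within-arm variance and treatment effect
n_per_arm, n_sim = 50, 4000

blinded, unblinded = [], []
for _ in range(n_sim):
    a = rng.normal(0.0, np.sqrt(sigma2), n_per_arm)
    b = rng.normal(delta, np.sqrt(sigma2), n_per_arm)
    pooled = np.concatenate([a, b])
    # Blinded: one-sample variance of the pooled data (treatment labels hidden).
    blinded.append(np.var(pooled, ddof=1))
    # Unblinded: average of the two within-arm sample variances.
    unblinded.append((np.var(a, ddof=1) + np.var(b, ddof=1)) / 2.0)
```

Averaged over simulations, the blinded estimator sits near sigma² + delta²/4 while the unblinded estimator is centered on sigma².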
The Placenta Harbors a Unique Microbiome
Aagaard, Kjersti; Ma, Jun; Antony, Kathleen M.; Ganu, Radhika; Petrosino, Joseph; Versalovic, James
2016-01-01
Humans and their microbiomes have coevolved as a physiologic community composed of distinct body site niches with metabolic and antigenic diversity. The placental microbiome has not been robustly interrogated, despite recent demonstrations of intracellular bacteria with diverse metabolic and immune regulatory functions. A population-based cohort of placental specimens collected under sterile conditions from 320 subjects with extensive clinical data was established for comparative 16S ribosomal DNA–based and whole-genome shotgun (WGS) metagenomic studies. Identified taxa and their gene carriage patterns were compared to other human body site niches, including the oral, skin, airway (nasal), vaginal, and gut microbiomes from nonpregnant controls. We characterized a unique placental microbiome niche, composed of nonpathogenic commensal microbiota from the Firmicutes, Tenericutes, Proteobacteria, Bacteroidetes, and Fusobacteria phyla. In aggregate, the placental microbiome profiles were most akin (Bray-Curtis dissimilarity <0.3) to the human oral microbiome. 16S-based operational taxonomic unit analyses revealed associations of the placental microbiome with a remote history of antenatal infection (permutational multivariate analysis of variance, P = 0.006), such as urinary tract infection in the first trimester, as well as with preterm birth <37 weeks (P = 0.001). PMID:24848255
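The Bray-Curtis dissimilarity used in the comparison above is straightforward to compute from two abundance vectors (a minimal reference implementation):

```python
def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors.

    0 means identical composition; 1 means no shared taxa.
    """
    num = sum(abs(a - b) for a, b in zip(x, y))
    den = sum(a + b for a, b in zip(x, y))
    return num / den if den else 0.0
```

A value below 0.3, as reported for placental versus oral profiles, indicates communities sharing most of their abundance-weighted composition.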
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2011 CFR
2011-07-01
...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2013 CFR
2013-07-01
...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...
RISK ANALYSIS, ANALYSIS OF VARIANCE: GETTING MORE FROM OUR DATA
Technology Transfer Automated Retrieval System (TEKTRAN)
Analysis of variance (ANOVA) and regression are common statistical techniques used to analyze agronomic experimental data and determine significant differences among yields due to treatments or other experimental factors. Risk analysis provides an alternate and complementary examination of the same...
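The F statistic at the heart of a one-way ANOVA can be computed directly (a minimal pure-Python sketch; `one_way_anova_f` is our own name):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way fixed-effects ANOVA.

    groups is a list of lists of observations, one list per treatment.
    Returns (F, df_between, df_within); F is the ratio of the between-group
    mean square to the within-group mean square.
    """
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w
```

Comparing F against the F distribution with (df_between, df_within) degrees of freedom gives the usual significance test for treatment differences.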
Some variance reduction methods for numerical stochastic homogenization.
Blanc, X; Le Bris, C; Legoll, F
2016-04-28
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here.
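The abstract does not name the specific techniques borrowed from the engineering sciences; antithetic variates is one classical variance reduction method, sketched here on a toy integrand rather than a corrector problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: estimate E[exp(U)] for U ~ Uniform(0, 1); exact value is e - 1.
f = np.exp
n = 100_000

u = rng.random(n)
plain_samples = f(u)                         # standard Monte Carlo

v = rng.random(n // 2)
anti_samples = (f(v) + f(1.0 - v)) / 2.0     # antithetic pairs: v and 1 - v

est_plain = plain_samples.mean()
est_anti = anti_samples.mean()
```

Because f is monotone, each antithetic pair is negatively correlated, so at equal sampling cost the antithetic estimator has a much smaller variance than plain Monte Carlo.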
Hidden item variance in multiple mini-interview scores.
Zaidi, Nikki L Bibler; Swoboda, Christopher M; Kelcey, Benjamin M; Manuel, R Stephen
2017-05-01
The extant literature has largely ignored a potentially significant source of variance in multiple mini-interview (MMI) scores by "hiding" the variance attributable to the sample of attributes used on an evaluation form. This potential source of hidden variance can be defined as rating items, which typically comprise an MMI evaluation form. Due to its multi-faceted, repeated measures format, reliability for the MMI has been primarily evaluated using generalizability (G) theory. A key assumption of G theory is that G studies model the most important sources of variance to which a researcher plans to generalize. Because G studies can only attribute variance to the facets that are modeled in a G study, failure to model potentially substantial sources of variation in MMI scores can result in biased estimates of variance components. This study demonstrates the implications of hiding the item facet in MMI studies when true item-level effects exist. An extensive Monte Carlo simulation study was conducted to examine whether a commonly used hidden item, person-by-station (p × s|i) G study design results in biased estimated variance components. Estimates from this hidden item model were compared with estimates from a more complete person-by-station-by-item (p × s × i) model. Results suggest that when true item-level effects exist, the hidden item model (p × s|i) will result in biased variance components which can bias reliability estimates; therefore, researchers should consider using the more complete person-by-station-by-item model (p × s × i) when evaluating generalizability of MMI scores.
Allan variance of time series models for measurement data
NASA Astrophysics Data System (ADS)
Zhang, Nien Fan
2008-10-01
The uncertainty of the mean of autocorrelated measurements from a stationary process has been discussed in the literature. However, when the measurements are from a non-stationary process, how to assess their uncertainty remains unresolved. Allan variance or two-sample variance has been used in time and frequency metrology for more than three decades as a substitute for the classical variance to characterize the stability of clocks or frequency standards when the underlying process is a 1/f noise process. However, its applications are related only to the noise models characterized by the power law of the spectral density. In this paper, from the viewpoint of the time domain, we provide a statistical underpinning of the Allan variance for discrete stationary processes, random walk and long-memory processes such as the fractional difference processes including the noise models usually considered in time and frequency metrology. Results show that the Allan variance is a better measure of the process variation than the classical variance for the random walk and the non-stationary fractional difference processes, including the 1/f noise.
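The non-overlapping Allan (two-sample) variance, as commonly defined in time and frequency metrology, can be sketched as follows (our own minimal implementation):

```python
import numpy as np

def allan_variance(y, m=1):
    """Non-overlapping Allan (two-sample) variance at averaging factor m.

    y is a series of equally spaced measurements; consecutive blocks of m
    points are averaged, and the Allan variance is half the mean squared
    difference of adjacent block averages.
    """
    y = np.asarray(y, dtype=float)
    k = len(y) // m
    means = y[: k * m].reshape(k, m).mean(axis=1)
    d = np.diff(means)
    return 0.5 * np.mean(d * d)
```

Unlike the classical variance, this statistic stays finite for drifting and random-walk-like series, which is why it is preferred for clock stability.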
Variance estimation in the analysis of microarray data.
Wang, Yuedong; Ma, Yanyuan; Carroll, Raymond J
2009-04-01
Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
The evolution and consequences of sex-specific reproductive variance.
Mullon, Charles; Reuter, Max; Lehmann, Laurent
2014-01-01
Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. If previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction. PMID:24172130
Variance estimation for systematic designs in spatial surveys.
Fewster, R M
2011-12-01
In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation.
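The variance advantage of systematic over random designs on a trended population, which motivates the estimation problem above, can be seen in a toy transect simulation (an illustration of the design effect, not of the striplet estimator itself):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy transect population with a strong spatial trend plus noise.
N, n = 1000, 20
pop = np.linspace(0.0, 10.0, N) + rng.normal(0.0, 0.5, N)
true_mean = pop.mean()

sys_est, srs_est = [], []
step = N // n
for _ in range(2000):
    start = rng.integers(0, step)                  # systematic: random start
    sys_est.append(pop[start::step][:n].mean())
    # simple random sampling without replacement
    srs_est.append(pop[rng.choice(N, n, replace=False)].mean())

var_sys, var_srs = np.var(sys_est), np.var(srs_est)
```

Both designs are essentially unbiased here, but the systematic estimates vary far less across repetitions; treating a systematic sample as if it were random therefore over-reports the variance, as the abstract notes.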
NASA Astrophysics Data System (ADS)
Goodman, Alyssa
We will create the first interactive sky map of astronomers' understanding of the Universe over time. We will accomplish this goal by turning the NASA Astrophysics Data System (ADS), widely known for its unrivaled value as a literature resource, into a data resource. GIS and GPS systems have made it commonplace to see and explore information about goings-on on Earth in the context of maps and timelines. Our proposal shows an example of a program that lets a user explore which countries have been mentioned in the New York Times, on what dates, and in what kinds of articles. By analogy, the goal of our project is to enable this kind of exploration-on the sky-for the full corpus of astrophysical literature available through ADS. Our group's expertise and collaborations uniquely position us to create this interactive sky map of the literature, which we call the "ADS All-Sky Survey." To create this survey, here are the principal steps we need to follow. First, by analogy to "geotagging," we will "astrotag" the ADS literature. Many "astrotags" effectively already exist, thanks to curation efforts at both CDS and NED. These efforts have created links to "source" positions on the sky associated with each of the millions of articles in the ADS. Our collaboration with ADS and CDS will let us automatically extract astrotags for all existing and future ADS holdings. The new ADS Labs, which our group helps to develop, includes the ability for researchers to filter article search results using a variety of "facets" (e.g. sources, keywords, authors, observatories, etc.). Using only extracted astrotags and facets, we can create functionality like what is described in the Times example above: we can offer a map of the density of positions' "mentions" on the sky, filterable by the properties of those mentions. Using this map, researchers will be able to interactively, visually, discover what regions have been studied for what reasons, at what times, and by whom. Second, where
Vanderheyden, Yoachim; Broeckhoven, Ken; Desmet, Gert
2014-10-17
Different automatic peak integration methods have been reviewed and compared for their ability to accurately determine the variance of the very narrow and very fast eluting peaks encountered when measuring the instrument band broadening of today's low dispersion liquid chromatography instruments. Using fully maximized injection concentrations to work at the highest possible signal-to-noise ratios (SNRs), the best results were obtained with the so-called variance profile analysis method. This is an extension (supplemented with a user-independent read-out algorithm) of a recently proposed method which calculates the peak variance value for any possible value of the peak end time, providing a curve containing all the possible variance values and theoretically levelling off to the (best possible estimate of the) true variance. Despite the use of maximal injection concentrations (leading to SNRs over 10,000), the peak variance errors were of the order of some 10-20%, mostly depending on the peak tail characteristics. The accuracy could however be significantly increased (to an error level below 0.5-2%) by averaging over 10-15 subsequent measurements, or by first adding the peak profiles of 10-15 subsequent runs and then analyzing this summed peak. There also appears to be an optimal detector intermediate frequency, with the higher frequencies suffering from their poorer signal-to-noise ratio and with the smaller detector frequencies suffering from a limited number of data points. When the SNR drops below 1000, an accurate determination of the true variance of extra-column peaks of modern instruments no longer seems to be possible.
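The variance-profile idea can be sketched as follows: compute the moment-based peak variance for every candidate end point and look for the plateau (a simplified, noise-free illustration; the function name is ours, and this is not the paper's full read-out algorithm):

```python
import numpy as np

def variance_profile(t, signal, step=10):
    """Moment-based peak variance as a function of the integration end point.

    Returns (end_times, variances). On a clean peak the curve levels off at
    the true peak variance, giving a user-independent read-out of it. A
    uniform time grid is assumed, so grid spacing cancels in the ratios.
    """
    ends, variances = [], []
    for end in range(50, len(t) + 1, step):
        s, tt = signal[:end], t[:end]
        area = s.sum()
        mean = (tt * s).sum() / area
        variances.append(((tt - mean) ** 2 * s).sum() / area)
        ends.append(tt[-1])
    return np.array(ends), np.array(variances)

# Noise-free Gaussian peak with sigma = 1: the profile plateaus near 1.
t = np.linspace(-8.0, 8.0, 1601)
ends, profile = variance_profile(t, np.exp(-0.5 * t ** 2))
```

On real data, noise and tailing make the plateau harder to locate, which is where the averaging and summed-peak strategies described above come in.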
1978-12-01
AD-A041/70 Property of US Air Force Library. AFFDL-TR-78-179, Wright-Patterson AFB. EFFECT OF VARIANCES AND MANUFACTURING TOLERANCES ON... Degradation for Advanced Composites", Lockheed-California, F33615-77-C-3084, Quarterlies 1977 to Present. Phillips, D. C. and Scott, J. M., "The Shear...
Uniqueness theorems in bioluminescence tomography.
Wang, Ge; Li, Yi; Jiang, Ming
2004-08-01
Motivated by bioluminescent imaging needs for studies on gene therapy and other applications in the mouse models, a bioluminescence tomography (BLT) system is being developed at the University of Iowa. While the forward imaging model is described by the well-known diffusion equation, the inverse problem is to recover an internal bioluminescent source distribution subject to Cauchy data. Our primary goal in this paper is to establish the solution uniqueness for BLT under practical constraints despite the ill-posedness of the inverse problem in the general case. After a review on the inverse source literature, we demonstrate that in the general case the BLT solution is not unique by constructing the set of all the solutions to this inverse problem. Then, we show the uniqueness of the solution in the case of impulse sources. Finally, we present our main theorem that solid/hollow ball sources can be uniquely determined up to nonradiating sources. For better readability, the exact conditions for and rigorous proofs of the theorems are given in the Appendices. Further research directions are also discussed.
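The diffusion-equation forward model referred to above is commonly written in the following standard textbook form (our notation, not quoted from the paper):

```latex
\begin{aligned}
-\nabla \cdot \big( D(x)\,\nabla u(x) \big) + \mu_a(x)\,u(x) &= S(x), && x \in \Omega, \\
u(x) + 2A\,D(x)\,\frac{\partial u}{\partial \nu}(x) &= 0, && x \in \partial\Omega,
\end{aligned}
```

where u is the photon fluence, D the diffusion coefficient, \mu_a the absorption coefficient, A a boundary refractive-index mismatch parameter, and S the internal bioluminescent source; the BLT inverse problem is to recover S from the Cauchy data (u and its normal flux) measured on \partial\Omega.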
Estimating Variances of Horizontal Wind Fluctuations in Stable Conditions
NASA Astrophysics Data System (ADS)
Luhar, Ashok K.
2010-05-01
Information concerning the average wind speed and the variances of lateral and longitudinal wind velocity fluctuations is required by dispersion models to characterise turbulence in the atmospheric boundary layer. When the winds are weak, the scalar average wind speed and the vector average wind speed need to be clearly distinguished and both lateral and longitudinal wind velocity fluctuations assume equal importance in dispersion calculations. We examine commonly-used methods of estimating these variances from wind-speed and wind-direction statistics measured separately, for example, by a cup anemometer and a wind vane, and evaluate the implied relationship between the scalar and vector wind speeds, using measurements taken under low-wind stable conditions. We highlight several inconsistencies inherent in the existing formulations and show that the widely-used assumption that the lateral velocity variance is equal to the longitudinal velocity variance is not necessarily true. We derive improved relations for the two variances, and although data under stable stratification are considered for comparison, our analysis is applicable more generally.
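Rotating measured winds into the mean-wind frame separates the longitudinal and lateral velocity variances and makes the scalar/vector mean-speed distinction explicit (an illustrative sketch; the direction convention is simplified relative to meteorological practice):

```python
import numpy as np

def wind_statistics(speed, direction_deg):
    """Scalar/vector mean wind and velocity variances from speed and direction.

    Velocities are rotated into the mean-wind frame so that u is the
    longitudinal and v the lateral component.
    Returns (scalar_mean, vector_mean, sigma_u2, sigma_v2).
    """
    th = np.deg2rad(direction_deg)
    ue, ve = speed * np.cos(th), speed * np.sin(th)
    scalar_mean = speed.mean()
    vector_mean = np.hypot(ue.mean(), ve.mean())
    # rotate so the x-axis points along the mean wind
    a = np.arctan2(ve.mean(), ue.mean())
    u = ue * np.cos(a) + ve * np.sin(a)
    v = -ue * np.sin(a) + ve * np.cos(a)
    return scalar_mean, vector_mean, u.var(), v.var()
```

For a meandering low wind (constant speed, swinging direction), the vector mean falls below the scalar mean and the lateral variance exceeds the longitudinal one, illustrating why the two variances should not simply be assumed equal.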
Analysis of Variance Components for Genetic Markers with Unphased Genotypes.
Wang, Tao
2016-01-01
An ANOVA type general multi-allele (GMA) model was proposed in Wang (2014) on analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In one locus and two loci cases, we first derive the least square estimates (LSE) of model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition on the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions at least for independent alleles. As a result, the GMA model could be more beneficial than GLM for detecting allelic interactions.
Increased spatial variance accompanies reorganization of two continental shelf ecosystems.
Litzow, Michael A; Urban, J Daniel; Laurel, Benjamin J
2008-09-01
Phase transitions between alternate stable states in marine ecosystems lead to disruptive changes in ecosystem services, especially fisheries productivity. We used trawl survey data spanning phase transitions in the North Pacific (Gulf of Alaska) and the North Atlantic (Scotian Shelf) to test for increases in ecosystem variability that might provide early warning of such transitions. In both time series, elevated spatial variability in a measure of community composition (ratio of cod [Gadus sp.] abundance to prey abundance) accompanied transitions between ecosystem states, and variability was negatively correlated with distance from the ecosystem transition point. In the Gulf of Alaska, where the phase transition was apparently the result of a sudden perturbation (climate regime shift), variance increased one year before the transition in mean state occurred. On the Scotian Shelf, where ecosystem reorganization was the result of persistent overfishing, a significant increase in variance occurred three years before the transition in mean state was detected. However, we could not reject the alternate explanation that increased variance may also have simply been inherent to the final stable state in that ecosystem. Increased variance has been previously observed around transition points in models, but rarely in real ecosystems, and our results demonstrate the possible management value in tracking the variance of key parameters in exploited ecosystems.
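The monitoring idea in the abstract, tracking the spatial variance of a community metric across survey stations and flagging a sudden rise, can be sketched in a few lines. This is a crude heuristic under assumed data layouts (a dict of year -> station-level ratios), not the statistical test used in the study; all names are hypothetical.

```python
import numpy as np

def spatial_variance_series(ratio_by_year):
    """Spatial variance of a community metric (e.g. a cod:prey ratio)
    across survey stations, one value per year."""
    return {yr: np.var(vals, ddof=1) for yr, vals in ratio_by_year.items()}

def rising_variance_alarm(var_series, window=3, factor=2.0):
    """Flag years whose spatial variance exceeds `factor` times the mean
    of the preceding `window` years -- a simple early-warning heuristic."""
    years = sorted(var_series)
    alarms = []
    for i, yr in enumerate(years):
        if i >= window:
            baseline = np.mean([var_series[years[j]] for j in range(i - window, i)])
            if var_series[yr] > factor * baseline:
                alarms.append(yr)
    return alarms
```

As the abstract cautions, such a flag cannot by itself distinguish a variance rise that precedes a transition from one that is simply inherent to the new stable state.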
Saturation of number variance in embedded random-matrix ensembles.
Prakash, Ravi; Pandey, Akhilesh
2016-05-01
We study fluctuation properties of embedded random matrix ensembles of noninteracting particles. For an ensemble of two noninteracting particle systems, we find that, unlike the spectra of classical random matrices, the correlation functions are nonstationary. In the locally stationary region of the spectra, we study the number variance and the spacing distributions. The spacing distributions follow Poisson statistics, which is a key signature of uncorrelated spectra. The number variance varies linearly, as in the Poisson case, for short correlation lengths, but a kind of regularization occurs for large correlation lengths, and the number variance approaches saturation values. These results are known in the study of integrable systems but are demonstrated here for the first time in random matrix theory. We conjecture that the interacting-particle cases, which exhibit the characteristics of classical random matrices for short correlation lengths, will also show saturation effects for large correlation lengths.
Monte Carlo variance reduction approaches for non-Boltzmann tallies
Booth, T.E.
1992-12-01
Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed.
Impact of Damping Uncertainty on SEA Model Response Variance
NASA Technical Reports Server (NTRS)
Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand
2010-01-01
Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties, such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However, these techniques do not account for uncertainties in the system properties. In the present paper, uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.
The mean and variance of phylogenetic diversity under rarefaction.
Nipperess, David A; Matsen, Frederick A
2013-06-01
Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time, but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact mean and variance to those calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating the mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparison of samples of different depths is required.
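The abstract contrasts its PD result with the long-known exact formula for species richness under rarefaction. That classical formula is easy to sketch: each species survives the subsample unless all n draws miss it, a hypergeometric event. This is the richness analogue only, not the paper's PD formulae (which weight phylogenetic branches in a similar way); the function name is hypothetical.

```python
from math import comb

def expected_richness(counts, n):
    """Exact expected species richness when a sample with abundance
    vector `counts` is rarefied to n individuals (classical
    hypergeometric formula; sampling without replacement)."""
    N = sum(counts)
    # P(species i entirely absent from the subsample) = C(N - Ni, n) / C(N, n)
    return sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in counts)
```

Rarefying to the full sample size recovers the observed richness, and rarefying to a single individual gives exactly 1, which makes the formula easy to sanity-check against Monte Carlo subsampling.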
Bottleneck Effects on Genetic Variance for Courtship Repertoire
Meffert, L. M.
1995-01-01
Bottleneck effects on evolutionary potential in mating behavior were addressed through assays of additive genetic variances and resulting phenotypic responses to drift in the courtship repertoires of six two-pair founder-flush lines and two control populations of the housefly. A simulation addressed the complication that an estimate of the genetic variance for a courtship trait (e.g., male performance vigor or the female requirement for copulation) must involve assays against the background behavior of the mating partners. The additive "environmental" effect of the mating partner's phenotype simply dilutes the net parent-offspring covariance for a trait. However, if there is an interaction with this "environmental" component, negative parent-offspring covariances can result under conditions of high incompatibility between the population's distributions for male performance and female choice requirements, despite high levels of genetic variance. All six bottlenecked lines exhibited significant differentiation from the controls in at least one measure of the parent-offspring covariance for male performance or female choice (estimated by 50 parent-son and 50 parent-daughter covariances for 10 courtship traits per line) which translated to significant phenotypic drift. However, the average effect across traits or across lines did not yield a significant net increase in genetic variance due to bottlenecks. Concerted phenotypic differentiation due to the founder-flush event provided indirect evidence of directional dominance in a subset of traits. Furthermore, indirect evidence of genotype-environment interactions (potentially producing genotype-genotype effects) was found in the negative parent-offspring covariances predicted by the male-female interaction simulation and by the association of the magnitude of phenotypic drift with the absolute value of the parent-offspring covariance. Hence, nonadditive genetic effects on mating behavior may be important in
The dynamic Allan Variance IV: characterization of atomic clock anomalies.
Galleani, Lorenzo; Tavella, Patrizia
2015-05-01
The number of applications where precise clocks play a key role is steadily increasing, satellite navigation being the main example. Precise clock anomalies are hence critical events, and their characterization is a fundamental problem. When an anomaly occurs, the clock stability changes with time, and this variation can be characterized with the dynamic Allan variance (DAVAR). We obtain the DAVAR for a series of common clock anomalies, namely, a sinusoidal term, a phase jump, a frequency jump, and a sudden change in the clock noise variance. These anomalies are particularly common in space clocks. Our analytic results clarify how the clock stability changes during these anomalies.
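The DAVAR described in the abstract is, in essence, the ordinary Allan variance recomputed over a sliding time window so that stability changes become visible. The building block can be sketched in a few lines; this is the plain non-overlapping estimator under assumed inputs (fractional-frequency samples), not the paper's analytic DAVAR results.

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of fractional-frequency data y
    at averaging factor m (tau = m * tau0). Evaluating this over a
    sliding window of y yields a dynamic Allan variance (DAVAR)."""
    y = np.asarray(y, dtype=float)
    K = len(y) // m                          # number of full averaging blocks
    block_means = y[:K * m].reshape(K, m).mean(axis=1)
    d = np.diff(block_means)                 # adjacent-block frequency changes
    return 0.5 * np.mean(d ** 2)
```

A frequency jump or a sudden change in clock noise then appears as a localized change in the windowed estimate, which is the anomaly signature the paper characterizes analytically.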
Entropy, Fisher Information and Variance with Frost-Musulin Potential
NASA Astrophysics Data System (ADS)
Idiodi, J. O. A.; Onate, C. A.
2016-09-01
This study presents the Shannon and Renyi information entropies for both position and momentum space, and the Fisher information, for the position-dependent mass Schrödinger equation with the Frost-Musulin potential. The analysis of the quantum mechanical probability has been obtained via the Fisher information. The variance information for this potential is also computed. These quantities control both the chemical and physical properties of some molecular systems. We have observed the behaviour of the Shannon entropy, Renyi entropy, Fisher information, and variance with the quantum number n.
Studying Variance in the Galactic Ultra-compact Binary Population
NASA Astrophysics Data System (ADS)
Larson, Shane; Breivik, Katelyn
2017-01-01
In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations on week-long timescales, thus allowing a full exploration of the variance associated with a binary stellar evolution model.
The principle of stationary variance in quantum field theory
NASA Astrophysics Data System (ADS)
Siringo, Fabio
2014-02-01
The principle of stationary variance is advocated as a viable variational approach to quantum field theory (QFT). The method is based on the principle that the variance of the energy should be at its minimum when the state of a quantum system reaches its best approximation to an eigenstate. While not especially popular in quantum mechanics (QM), the method is shown to be valuable in QFT, and three examples are given in very different areas, ranging from the Heisenberg model of antiferromagnetism (AF) to quantum electrodynamics (QED) and gauge theories.
The Probabilities of Unique Events
2012-08-30
...probabilities into quantum mechanics, and some psychologists have argued that they have a role to play in accounting for errors in judgment [30]. ... The mechanisms underlying naive estimates of the probabilities of unique events are largely inaccessible to consciousness, but they... Can quantum probability provide a new direction for cognitive modeling? Behavioral and Brain Sciences (in press).
Unique children in unique places: innovative pediatric community clinical.
Harrison, Suzanne; Laforest, Marie-Eve
2011-12-01
Pediatric nursing is a specialization that requires a particular set of skills and abilities. Most nurses seldom get the chance to interact with families who have children living with exceptionalities unless they choose to work in tertiary settings dealing exclusively with children. This article explores how one school of nursing in Canada offers its students two unique learning opportunities where they get the chance to work with children who have special needs in an interdisciplinary community-based setting. Shared statements from parents and students highlight the benefits to all those involved.
Supersymmetry of AdS and flat IIB backgrounds
NASA Astrophysics Data System (ADS)
Beck, S.; Gutowski, J.; Papadopoulos, G.
2015-02-01
We present a systematic description of all warped AdS_n ×_w M^(10-n) and IIB backgrounds and identify the a priori number of supersymmetries N preserved by these solutions. In particular, we find that the AdS_n backgrounds preserve for n ≤ 4 and for 4 < n ≤ 6 supersymmetries and for suitably restricted. In addition, under some assumptions required for the applicability of the maximum principle, we demonstrate that the Killing spinors of AdS_n backgrounds can be identified with the zero modes of Dirac-like operators on M^(10-n), establishing a new class of Lichnerowicz-type theorems. Furthermore, we adapt some of these results to backgrounds with fluxes by taking the AdS radius to infinity. We find that these backgrounds preserve for 2 < n ≤ 4 and for 4 < n ≤ 7 supersymmetries. We also demonstrate that the Killing spinors of AdS_n ×_w M^(10-n) do not factorize into Killing spinors on AdS_n and Killing spinors on M^(10-n).
[Value-Added--Adding Economic Value in the Food Industry].
ERIC Educational Resources Information Center
Welch, Mary A., Ed.
1989-01-01
This booklet focuses on the economic concept of "value added" to goods and services. A student activity worksheet illustrates how the steps involved in processing food are examples of the concept of value added. The booklet further links food processing to the idea of value added to the Gross National Product (GNP). Discussion questions,…
Unique Applications for Artificial Neural Networks. Phase 1
1991-08-08
AD-A243 365. PHASE I FINAL REPORT: Unique Applications for Artificial Neural Networks. DARPA SBIR 90-115, Contract # DAAH01-91... Contents: Acknowledgments; Table of Contents; Abstract; 1.0 Introduction; 2.0 The NGO-VRP Solver... A solution is thus obtained through analogy. Because of this activity, artificial neural networks have emerged as a primary artificial intelligence...
10 CFR 52.93 - Exemptions and variances.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 2 2010-01-01 2010-01-01 false Exemptions and variances. 52.93 Section 52.93 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS... referencing a nuclear power reactor manufactured under a manufacturing license issued under subpart F of...
10 CFR 52.93 - Exemptions and variances.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 2 2011-01-01 2011-01-01 false Exemptions and variances. 52.93 Section 52.93 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS... referencing a nuclear power reactor manufactured under a manufacturing license issued under subpart F of...
10 CFR 52.93 - Exemptions and variances.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 2 2014-01-01 2014-01-01 false Exemptions and variances. 52.93 Section 52.93 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS... referencing a nuclear power reactor manufactured under a manufacturing license issued under subpart F of...
10 CFR 52.93 - Exemptions and variances.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 2 2013-01-01 2013-01-01 false Exemptions and variances. 52.93 Section 52.93 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS... referencing a nuclear power reactor manufactured under a manufacturing license issued under subpart F of...
10 CFR 52.93 - Exemptions and variances.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 2 2012-01-01 2012-01-01 false Exemptions and variances. 52.93 Section 52.93 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSES, CERTIFICATIONS, AND APPROVALS FOR NUCLEAR POWER PLANTS... referencing a nuclear power reactor manufactured under a manufacturing license issued under subpart F of...
Variance Components for NLS: Partitioning the Design Effect.
ERIC Educational Resources Information Center
Folsom, Ralph E., Jr.
This memorandum demonstrates a variance components methodology for partitioning the overall design effect (D) for a ratio mean into stratification (S), unequal weighting (W), and clustering (C) effects, so that D = WSC. In section 2, a sample selection scheme modeled after the National Longitudinal Study of the High School Class of 1972 (NLS)…
Allan Variance Calculation for Nonuniformly Spaced Input Data
2015-01-01
Approved for public release; distribution is unlimited. The Allan Variance (AV) characterizes the... temporal randomness in sensor output data streams at various time scales. The conventional formula for calculating the AV assumes that the data... presents a modified approach to AV calculation, which accommodates nonuniformly spaced time samples. The basic concept of the modified approach is
Variance in Math Achievement Attributable to Visual Cognitive Constructs
ERIC Educational Resources Information Center
Oehlert, Jeremy J.
2012-01-01
Previous research has reported positive correlations between math achievement and the cognitive constructs of spatial visualization, working memory, and general intelligence; however, no single study has assessed variance in math achievement attributable to all three constructs, examined in combination. The current study fills this gap in the…
Temporal Relation Extraction in Outcome Variances of Clinical Pathways.
Yamashita, Takanori; Wakata, Yoshifumi; Hamai, Satoshi; Nakashima, Yasuharu; Iwamoto, Yukihide; Franagan, Brendan; Nakashima, Naoki; Hirokawa, Sachio
2015-01-01
Recently, the clinical pathway has progressed with digitalization and the analysis of activity. There are many previous studies of clinical pathways, but few feed directly into medical practice. We constructed a mind-map system that applies a spanning tree. This system can visualize temporal relations in outcome variances and indicate outcomes that affect long-term hospitalization.
40 CFR 142.43 - Disposition of a variance request.
Code of Federal Regulations, 2010 CFR
2010-07-01
....43 Section 142.43 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the... issue a denial. Such notice shall include a statement of reasons for the proposed denial, and...
Numbers Of Degrees Of Freedom Of Allan-Variance Estimators
NASA Technical Reports Server (NTRS)
Greenhall, Charles A.
1992-01-01
Report discusses formulas for estimation of Allan variances. Presents algorithms for closed-form approximations of numbers of degrees of freedom characterizing results obtained when various estimators applied to five power-law components of classical mathematical model of clock noise.
Code of Federal Regulations, 2013 CFR
2013-07-01
... PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS Scope... Public Contracts Act and the Occupational Safety and Health Act of 1970. ... 41 Public Contracts and Property Management 1 2013-07-01 2013-07-01 false Variances....
Code of Federal Regulations, 2014 CFR
2014-07-01
... PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS Scope... Public Contracts Act and the Occupational Safety and Health Act of 1970. ... 41 Public Contracts and Property Management 1 2014-07-01 2014-07-01 false Variances....
Code of Federal Regulations, 2010 CFR
2010-07-01
... PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS Scope... Public Contracts Act and the Occupational Safety and Health Act of 1970. ... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Variances....
Code of Federal Regulations, 2012 CFR
2012-07-01
... PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS Scope... Public Contracts Act and the Occupational Safety and Health Act of 1970. ... 41 Public Contracts and Property Management 1 2012-07-01 2009-07-01 true Variances....
The Variance of Intraclass Correlations in Three and Four Level
ERIC Educational Resources Information Center
Hedges, Larry V.; Hedberg, Eric C.; Kuyper, Arend M.
2012-01-01
Intraclass correlations are used to summarize the variance decomposition in populations with multilevel hierarchical structure. There has recently been considerable interest in estimating intraclass correlations from surveys or designed experiments to provide design parameters for planning future large-scale randomized experiments. The large…
Genetic Variance in the SES-IQ Correlation.
ERIC Educational Resources Information Center
Eckland, Bruce K.
1979-01-01
Discusses questions dealing with genetic aspects of the correlation between IQ and socioeconomic status (SES). Questions include: How does assortative mating affect the genetic variance of IQ? Is the relationship between an individual's IQ and adult SES a causal one? And how can IQ research improve schools and schooling? (Author/DB)
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2013 CFR
2013-07-01
....11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2012 CFR
2012-07-01
....11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
....11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
....11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2014 CFR
2014-07-01
....11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...
21 CFR 821.2 - Exemptions and variances.
Code of Federal Regulations, 2011 CFR
2011-04-01
... and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE TRACKING REQUIREMENTS General Provisions § 821.2 Exemptions and variances. (a) A... following: (1) The name of the device and device class and representative labeling showing the intended...
21 CFR 821.2 - Exemptions and variances.
Code of Federal Regulations, 2014 CFR
2014-04-01
... and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE TRACKING REQUIREMENTS General Provisions § 821.2 Exemptions and variances. (a) A... following: (1) The name of the device and device class and representative labeling showing the intended...
21 CFR 821.2 - Exemptions and variances.
Code of Federal Regulations, 2010 CFR
2010-04-01
... and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE TRACKING REQUIREMENTS General Provisions § 821.2 Exemptions and variances. (a) A... following: (1) The name of the device and device class and representative labeling showing the intended...
Dominance, Information, and Hierarchical Scaling of Variance Space.
ERIC Educational Resources Information Center
Ceurvorst, Robert W.; Krus, David J.
1979-01-01
A method for computation of dominance relations and for construction of their corresponding hierarchical structures is presented. The link between dominance and variance allows integration of the mathematical theory of information with least squares statistical procedures without recourse to logarithmic transformations of the data. (Author/CTM)
Explaining Common Variance Shared by Early Numeracy and Literacy
ERIC Educational Resources Information Center
Davidse, N. J.; De Jong, M. T.; Bus, A. G.
2014-01-01
How can it be explained that early literacy and numeracy share variance? We specifically tested whether the correlation between four early literacy skills (rhyming, letter knowledge, emergent writing, and orthographic knowledge) and simple sums (non-symbolic and story condition) reduced after taking into account preschool attention control,…
The Threat of Common Method Variance Bias to Theory Building
ERIC Educational Resources Information Center
Reio, Thomas G., Jr.
2010-01-01
The need for more theory building scholarship remains one of the pressing issues in the field of HRD. Researchers can employ quantitative, qualitative, and/or mixed methods to support vital theory-building efforts, understanding however that each approach has its limitations. The purpose of this article is to explore common method variance bias as…
Analysis of Variance: What Is Your Statistical Software Actually Doing?
ERIC Educational Resources Information Center
Li, Jian; Lomax, Richard G.
2011-01-01
Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs, mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…
40 CFR 52.1390 - Missoula variance provision.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 4 2014-07-01 2014-07-01 false Missoula variance provision. 52.1390 Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS... from any requirement of an applicable implementation plan with respect to a stationary source....
Comparison of Turbulent Thermal Diffusivity and Scalar Variance Models
NASA Technical Reports Server (NTRS)
Yoder, Dennis A.
2016-01-01
In this study, several variable turbulent Prandtl number formulations are examined for boundary layers, pipe flow, and axisymmetric jets. The model formulations include simple algebraic relations between the thermal diffusivity and turbulent viscosity as well as more complex models that solve transport equations for the thermal variance and its dissipation rate. Results are compared with available data for wall heat transfer and profile measurements of mean temperature, the root-mean-square (RMS) fluctuating temperature, turbulent heat flux and turbulent Prandtl number. For wall-bounded problems, the algebraic models are found to best predict the rise in turbulent Prandtl number near the wall as well as the log-layer temperature profile, while the thermal variance models provide a good representation of the RMS temperature fluctuations. In jet flows, the algebraic models provide no benefit over a constant turbulent Prandtl number approach. Application of the thermal variance models finds that some significantly overpredict the temperature variance in the plume and most underpredict the thermal growth rate of the jet. The models yield very similar fluctuating temperature intensities in jets from straight pipes and smooth contraction nozzles, in contrast to data that indicate the latter should have noticeably higher values. For the particular low subsonic heated jet cases examined, changes in the turbulent Prandtl number had no effect on the centerline velocity decay.
Intuitive Analysis of Variance-- A Formative Assessment Approach
ERIC Educational Resources Information Center
Trumpower, David
2013-01-01
This article describes an assessment activity that can show students how much they intuitively understand about statistics, but also alert them to common misunderstandings. How the activity can be used formatively to help improve students' conceptual understanding of analysis of variance is discussed. (Contains 1 figure and 1 table.)
Unbiased Estimates of Variance Components with Bootstrap Procedures
ERIC Educational Resources Information Center
Brennan, Robert L.
2007-01-01
This article provides general procedures for obtaining unbiased estimates of variance components for any random-model balanced design under any bootstrap sampling plan, with the focus on designs of the type typically used in generalizability theory. The results reported here are particularly helpful when the bootstrap is used to estimate standard…
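The need for such corrections can be seen in the simplest case: the expected plug-in variance of a bootstrap resample is biased low by a known factor of (n-1)/n. A toy sketch of that bias and its correction (illustrative only, not the generalizability-theory designs the article treats):

```python
import random

random.seed(0)

def plugin_variance(xs):
    """Maximum-likelihood (divide-by-n) variance of a sample."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def bootstrap_variance(xs, B=2000):
    """Average plug-in variance over B bootstrap resamples of xs."""
    n = len(xs)
    total = 0.0
    for _ in range(B):
        resample = [random.choice(xs) for _ in range(n)]
        total += plugin_variance(resample)
    return total / B

data = [random.gauss(0.0, 1.0) for _ in range(50)]
n = len(data)
naive = bootstrap_variance(data)
# E[plug-in variance of a resample] = ((n - 1) / n) * plug-in variance of the
# data, so the naive bootstrap estimate is biased low; multiplying by
# n / (n - 1) removes the bias, in the spirit of the corrections above.
corrected = naive * n / (n - 1)
```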
40 CFR 124.64 - Appeals of variances.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 124.64 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS PROCEDURES...) When a State issues a permit on which EPA has made a variance decision, separate appeals of the State... issues in both proceedings, the Regional Administrator will decide, in consultation with State...
Exploratory Multivariate Analysis of Variance: Contrasts and Variables.
ERIC Educational Resources Information Center
Barcikowski, Robert S.; Elliott, Ronald S.
The contribution of individual variables to overall multivariate significance in a multivariate analysis of variance (MANOVA) is investigated using a combination of canonical discriminant analysis and Roy-Bose simultaneous confidence intervals. Difficulties with this procedure are discussed, and its advantages are illustrated using examples based…
20 CFR 901.40 - Proof; variance; amendment of pleadings.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Proof; variance; amendment of pleadings. 901.40 Section 901.40 Employees' Benefits JOINT BOARD FOR THE ENROLLMENT OF ACTUARIES REGULATIONS GOVERNING THE PERFORMANCE OF ACTUARIAL SERVICES UNDER THE EMPLOYEE RETIREMENT INCOME SECURITY ACT OF...
40 CFR 142.43 - Disposition of a variance request.
Code of Federal Regulations, 2011 CFR
2011-07-01
....43 Section 142.43 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the... issue a denial. Such notice shall include a statement of reasons for the proposed denial, and...
36 CFR 30.5 - Variances, exceptions, and use permits.
Code of Federal Regulations, 2010 CFR
2010-07-01
... OF THE INTERIOR WHISKEYTOWN-SHASTA-TRINITY NATIONAL RECREATION AREA: ZONING STANDARDS FOR WHISKEYTOWN UNIT § 30.5 Variances, exceptions, and use permits. (a) Zoning ordinances or amendments thereto, for the zoning districts comprising the Whiskeytown Unit of the Whiskeytown-Shasta-Trinity...
44 CFR 60.6 - Variances and exceptions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... pattern inconsistent with the objectives of sound flood plain management, the Federal Insurance... (i) a showing of good and sufficient cause, (ii) a determination that failure to grant the variance... public expense, create nuisances, cause fraud on or victimization of the public, or conflict...
44 CFR 60.6 - Variances and exceptions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... pattern inconsistent with the objectives of sound flood plain management, the Federal Insurance... (i) a showing of good and sufficient cause, (ii) a determination that failure to grant the variance... public expense, create nuisances, cause fraud on or victimization of the public, or conflict...
AdS3 Solutions of IIB Supergravity
Kim, Nakwoo
2005-12-02
We consider pure D3-brane configurations of IIB string theory which lead to supergravity solutions containing an AdS3 factor. They can provide new examples of AdS3/CFT2 duality on D3-branes whose worldvolume is partially compactified. When the internal 7-dimensional space is non-compact, they are related to fluctuations of higher-dimensional AdS/CFT duality examples and are thus dual to the BPS operators of D = 4 superconformal field theories. We find that supersymmetry requires the 7-dimensional space to be a warped Hopf fibration of (real) 6-dimensional Kähler manifolds.
29 CFR 1905.11 - Variances and other relief under section 6(d).
Code of Federal Regulations, 2010 CFR
2010-07-01
... ADMINISTRATION, DEPARTMENT OF LABOR RULES OF PRACTICE FOR VARIANCES, LIMITATIONS, VARIATIONS, TOLERANCES, AND..., Limitations, Variations, Tolerances, Exemptions and Other Relief § 1905.11 Variances and other relief...
Mucormycosis in India: unique features.
Chakrabarti, Arunaloke; Singh, Rachna
2014-12-01
Mucormycosis remains a devastating invasive fungal infection, with high mortality rates even after active management. The disease is being reported at an alarming frequency over the past decades from India. Indian mucormycosis has certain unique features. Rhino-orbito-cerebral presentation associated with uncontrolled diabetes is the predominant characteristic. Isolated renal mucormycosis has emerged as a new clinical entity. Apophysomyces elegans and Rhizopus homothallicus are emerging species in this region and uncommon agents such as Mucor irregularis and Thamnostylum lucknowense are also being reported. This review focuses on these distinct features of mucormycosis observed in India.
Lithium nephropathy: unique sonographic findings.
Di Salvo, Donald N; Park, Joseph; Laing, Faye C
2012-04-01
This case series describes a unique sonographic appearance consisting of numerous microcysts and punctate echogenic foci seen on renal sonograms of 10 adult patients receiving chronic lithium therapy. Clinically, chronic renal insufficiency was present in 6 and nephrogenic diabetes insipidus in 2. Sonography showed numerous microcysts and punctate echogenic foci. Computed tomography in 5 patients confirmed microcysts and microcalcifications, which were fewer in number than on sonography. Magnetic resonance imaging in 2 patients confirmed microcysts in each case. Renal biopsy in 1 patient showed chronic interstitial nephritis, microcysts, and tubular dilatation. The diagnosis of lithium nephropathy should be considered when sonography shows these findings.
A unique solar marking construct.
Sofaer, A; Zinser, V; Sinclair, R M
1979-10-19
An assembly of stone slabs on an isolated butte in New Mexico collimates sunlight onto spiral petroglyphs carved on a cliff face. The light illuminates the spirals in a changing pattern throughout the year and marks the solstices and equinoxes with particular images. The assembly can also be used to observe lunar phenomena. It is unique in archeoastronomy in utilizing the changing height of the midday sun throughout the year rather than its rising and setting points. The construct appears to be the result of deliberate work of the Anasazi Indians, the builders of the great pueblos in the area.
Gravity Wave Variances and Propagation Derived from AIRS Radiances
NASA Technical Reports Server (NTRS)
Gong, Jie; Wu, Dong L.; Eckermann, S. D.
2012-01-01
As the first gravity wave (GW) climatology study using nadir-viewing infrared sounders, 50 Atmospheric Infrared Sounder (AIRS) radiance channels are selected to estimate GW variances at pressure levels between 2 and 100 hPa. The GW variance for each scan in the cross-track direction is derived from radiance perturbations in the scan, independently of adjacent scans along the orbit. Since the scanning swaths are perpendicular to the satellite orbits, which are inclined meridionally at most latitudes, the zonal component of GW propagation can be inferred by differencing the variances derived from the westernmost and easternmost viewing angles. Consistent with previous GW studies using various satellite instruments, monthly mean AIRS variance shows large enhancements over meridionally oriented mountain ranges, as well as over some islands, at winter-hemisphere high latitudes. Enhanced wave activity is also found above tropical deep convective regions. GWs tend to propagate westward above mountain ranges and eastward above deep convection. The 90 AIRS fields of view (FOVs), ranging from +48 deg. to -48 deg. off nadir, can detect large-amplitude GWs with phase velocities propagating preferentially at steep angles (e.g., those from orographic and convective sources). The annual cycle dominates the GW variances and the preferred propagation directions at all latitudes. An indication of a weak two-year variation in the tropics is found, which is presumably related to the quasi-biennial oscillation (QBO). The AIRS geometry makes its out-tracks capable of detecting GWs with vertical wavelengths substantially shorter than the thickness of the instrument weighting functions. This novel capability of AIRS to observe shallow inertia GWs will expand the potential of satellite GW remote sensing and provide further constraints on GW drag parameterization schemes in general circulation models (GCMs).
Hydrograph variances over different timescales in hydropower production networks
NASA Astrophysics Data System (ADS)
Zmijewski, Nicholas; Wörman, Anders
2016-08-01
The operation of water reservoirs involves a spectrum of timescales based on the distribution of stream flow travel times between reservoirs, as well as the technical, environmental, and social constraints imposed on the operation. In this research, a hydrodynamically based description of the flow between hydropower stations was implemented to study the relative importance of wave diffusion on the spectrum of hydrograph variance in a regulated watershed. Using spectral decomposition of the effluence hydrograph of a watershed, an exact expression of the variance in the outflow response was derived as a function of the trends of hydraulic and geomorphologic dispersion and the management of production and reservoirs. We show that the power spectra of the time series involved follow nearly fractal patterns, which facilitates examination of the relative importance of wave diffusion and possible changes in production demand on the outflow spectrum. The exact spectral solution can also identify statistical bounds of future demand patterns due to limitations in storage capacity. The impact of the hydraulic description of the stream flow on the reservoir discharge was examined for a given power demand in the River Dalälven, Sweden, as a function of a stream flow Peclet number (Pe). Regulation of hydropower production on the River Dalälven generally increased the short-term variance in the effluence hydrograph, whereas wave diffusion decreased the short-term variance over periods of <1 week, depending on the Peclet number of the stream reach. This implies that flow variance becomes more erratic (closer to white noise) as a result of current production objectives.
Variance in the reproductive success of dominant male mountain gorillas.
Robbins, Andrew M; Gray, Maryke; Uwingeli, Prosper; Mburanumwe, Innocent; Kagoda, Edwin; Robbins, Martha M
2014-10-01
Using 30 years of demographic data from 15 groups, this study estimates how harem size, female fertility, and offspring survival may contribute to variance in the siring rates of dominant male mountain gorillas throughout the Virunga Volcano Region. As predicted for polygynous species, differences in harem size were the greatest source of variance in the siring rate, whereas differences in female fertility and offspring survival were relatively minor. Harem size was positively correlated with offspring survival, even after removing all known and suspected cases of infanticide, so the correlation does not seem to reflect differences in the ability of males to protect their offspring. Harem size was not significantly correlated with female fertility, which is consistent with the hypothesis that mountain gorillas have minimal feeding competition. Harem size, offspring survival, and siring rates were not significantly correlated with the proportion of dominant tenures that occurred in multimale groups versus one-male groups; even though infanticide is less likely when those tenures end in multimale groups than one-male groups. In contrast with the relatively small contribution of offspring survival to variance in the siring rates of this study, offspring survival is a major source of variance in the male reproductive success of western gorillas, which have greater predation risks and significantly higher rates of infanticide. If differences in offspring protection are less important among male mountain gorillas than western gorillas, then the relative importance of other factors may be greater for mountain gorillas. Thus, our study illustrates how variance in male reproductive success and its components can differ between closely related species.
Action growth for AdS black holes
NASA Astrophysics Data System (ADS)
Cai, Rong-Gen; Ruan, Shan-Ming; Wang, Shao-Jiang; Yang, Run-Qiu; Peng, Rong-Hui
2016-09-01
Recently a Complexity-Action (CA) duality conjecture has been proposed, which relates the quantum complexity of a holographic boundary state to the action of a Wheeler-DeWitt (WDW) patch in the anti-de Sitter (AdS) bulk. In this paper we further investigate the duality conjecture for stationary AdS black holes and derive some exact results for the growth rate of the action within the WDW patch in the late-time approximation, which is supposed to be dual to the growth rate of the quantum complexity of the holographic state. Based on the results for the general D-dimensional Reissner-Nordström (RN)-AdS black hole, the rotating/charged Bañados-Teitelboim-Zanelli (BTZ) black hole, the Kerr-AdS black hole, and the charged Gauss-Bonnet-AdS black hole, we present a universal formula for the action growth expressed in terms of thermodynamical quantities associated with the outer and inner horizons of the AdS black holes. Our results leave unchanged the conjecture that the stationary AdS black hole in Einstein gravity is the fastest computer in nature.
ERIC Educational Resources Information Center
Starns, Jeffrey J.; Rotello, Caren M.; Hautus, Michael J.
2014-01-01
We tested the dual process and unequal variance signal detection models by jointly modeling recognition and source confidence ratings. The 2 approaches make unique predictions for the slope of the recognition memory zROC function for items with correct versus incorrect source decisions. The standard bivariate Gaussian version of the unequal…
ERIC Educational Resources Information Center
DeVito, Pasquale John
To investigate the effects of Title I reading programs and the relationships of relevant sets of variables to student achievement, this study sought to determine the unique, and the common, contributions of background, mental ability, program, and parental involvement to the variance in reading comprehension and vocabulary scores for Title I…
Value Added in English Schools
ERIC Educational Resources Information Center
Ray, Andrew; McCormack, Tanya; Evans, Helen
2009-01-01
Value-added indicators are now a central part of school accountability in England, and value-added information is routinely used in school improvement at both the national and the local levels. This article describes the value-added models that are being used in the academic year 2007-8 by schools, parents, school inspectors, and other…
Constructing the AdS dual of a Fermi liquid: AdS black holes with Dirac hair
NASA Astrophysics Data System (ADS)
Čubrović, Mihailo; Zaanen, Jan; Schalm, Koenraad
2011-10-01
We provide evidence that the holographic dual to a strongly coupled charged Fermi liquid has a non-zero fermion density in the bulk. We show that the pole strength of the stable quasiparticle characterizing the Fermi surface is encoded in the AdS probability density of a single normalizable fermion wavefunction in AdS. Recalling Migdal's theorem, which relates the pole strength to the Fermi-Dirac characteristic discontinuity in the number density at ω_F, we conclude that the AdS dual of a Fermi liquid is described by occupied on-shell fermionic modes in AdS. Encoding the occupied levels directly in the total spatially averaged probability density of the fermion field, we show that an AdS Reissner-Nordström black hole in a theory with charged fermions has a critical temperature at which the system undergoes a first-order transition to a black hole with a non-vanishing profile for the bulk fermion field. Thermodynamics and spectral analysis support that the solution with a non-zero AdS fermion profile is the preferred ground state at low temperatures.
NASA Astrophysics Data System (ADS)
Clark, T. E.; ter Veldhuis, T.
2016-11-01
Coset methods are used to determine the action of a co-dimension one brane (domain wall) embedded in (d + 1)-dimensional AdS space in the Carroll limit in which the speed of light goes to zero. The action is invariant under the non-linearly realized symmetries of the AdS-Carroll spacetime. The Nambu-Goldstone field exhibits a static spatial distribution for the brane with a time varying momentum density related to the brane's spatial shape as well as the AdS-C geometry. The AdS-C vector field dual theory is obtained.
ADS Based on Linear Accelerators
NASA Astrophysics Data System (ADS)
Pan, Weimin; Dai, Jianping
An accelerator-driven system (ADS), which combines a particle accelerator with a subcritical core, is commonly regarded as a promising device for the transmutation of nuclear waste, as well as a potential scheme for thorium-based energy production. So far the predominant choice of accelerator for ADS is a superconducting linear accelerator (linac). This article gives a brief overview of ADS based on linacs, including the motivation, principle, challenges and research activities around the world. The status and future plan of the Chinese ADS (C-ADS) project will be highlighted and discussed in depth as an example.
AdS spacetimes from wrapped D3-branes
NASA Astrophysics Data System (ADS)
Gauntlett, Jerome P.; MacConamhna, Oisín A. P.
2007-12-01
We derive a geometrical characterization of a large class of AdS3 and AdS2 supersymmetric spacetimes in type IIB supergravity with non-vanishing five-form flux using G-structures. These are obtained as special cases of a class of supersymmetric spacetimes with an ℝ^{1,1} or ℝ (time) factor that are associated with D3-branes wrapping calibrated two- or three-cycles, respectively, in manifolds with SU(2), SU(3), SU(4) and G2 holonomy. We show how two explicit AdS solutions, previously constructed in gauged supergravity, satisfy our more general G-structure conditions. For each explicit solution, we also derive a special holonomy metric which, although singular, has an appropriate calibrated cycle. After analytic continuation, some of the classes of AdS spacetimes give rise to known classes of BPS bubble solutions with ℝ × SO(4) × SO(4), ℝ × SO(4) × U(1) and ℝ × SO(4) symmetry. These have 1/2, 1/4 and 1/8 supersymmetry, respectively. We present a new class of 1/8 BPS geometries with ℝ × SU(2) symmetry, obtained by analytic continuation of the class of AdS spacetimes associated with D3-branes wrapped on associative three-cycles.
Unique features of space reactors
Buden, D.
1990-01-01
Space reactors are designed to meet a unique set of requirements: they must be sufficiently compact to be launched in a rocket to their operational location, operate for many years without maintenance and servicing, operate in extreme environments, and reject heat by radiation to space. To meet these restrictions, operating temperatures are much greater than in terrestrial power plants, and the reactors tend to have a fast neutron spectrum. Currently, a new generation of space reactor power plants is being developed. The major effort is in the SP-100 program, where the power plant is being designed for seven years of full-power operation without maintenance, at a reactor outlet temperature of 1350 K. 8 refs., 3 figs., 1 tab.
Revisiting the thermodynamic relations in AdS /CMT models
NASA Astrophysics Data System (ADS)
Hyun, Seungjoon; Park, Sang-A.; Yi, Sang-Heon
2017-03-01
Motivated by the recent unified approach to the Smarr-like relation of anti-de Sitter (AdS) planar black holes in conjunction with the quasilocal formalism on conserved charges, we revisit the quantum statistical and thermodynamic relations of hairy AdS planar black holes. By extending the previous results, we identify the hairy contribution in the bulk and show that the holographic computation can be improved so that it is consistent with the bulk computation. We argue that the first law can be retained in its universal form and that the relation between the on-shell renormalized Euclidean action and its free energy interpretation in gravity may also be undeformed even with the hairy contribution in hairy AdS black holes.
Entanglement entropy for free scalar fields in AdS
NASA Astrophysics Data System (ADS)
Sugishita, Sotaro
2016-09-01
We compute entanglement entropy for free massive scalar fields in anti-de Sitter (AdS) space. The entangling surface is a minimal surface whose boundary is a sphere at the boundary of AdS. The entropy can be evaluated from the thermal free energy of the fields on a topological black hole by using the replica method. In odd-dimensional AdS, exact expressions of the Rényi entropy S n are obtained for arbitrary n. We also evaluate 1-loop corrections coming from the scalar fields to holographic entanglement entropy. Applying the results, we compute the leading difference of entanglement entropy between two holographic CFTs related by a renormalization group flow triggered by a double trace deformation. The difference is proportional to the shift of a central charge under the flow.
Solutions of free higher spins in AdS
NASA Astrophysics Data System (ADS)
Lü, H.; Shao, Kai-Nan
2011-11-01
We consider free massive and massless higher integer spins in AdS backgrounds in general D dimensions. We obtain the solutions corresponding to the highest-weight state of the spin-ℓ representations of the SO(2, D-1) isometry groups. The solution for the spin-ℓ field is expressed recursively in terms of that for the spin-(ℓ-1) field. Thus, starting from the explicit spin-0 solution, all the higher-spin solutions can be obtained. These solutions allow us to derive the generalized Breitenlohner-Freedman bound and analyze the asymptotic falloffs. In particular, solutions with negative mass square in general have falloffs slower than those of the Schwarzschild-AdS black holes at the AdS boundaries.
The Milieu Intérieur study - an integrative approach for study of human immunological variance.
Thomas, Stéphanie; Rouilly, Vincent; Patin, Etienne; Alanio, Cécile; Dubois, Annick; Delval, Cécile; Marquier, Louis-Guillaume; Fauchoux, Nicolas; Sayegrih, Seloua; Vray, Muriel; Duffy, Darragh; Quintana-Murci, Lluis; Albert, Matthew L
2015-04-01
The Milieu Intérieur Consortium has established a 1000-person healthy population-based study (stratified according to sex and age), creating an unparalleled opportunity for assessing the determinants of human immunologic variance. Herein, we define the criteria utilized for participant enrollment, and highlight the key data that were collected for correlative studies. In this report, we analyzed biological correlates of sex, age, smoking-habits, metabolic score and CMV infection. We characterized and identified unique risk factors among healthy donors, as compared to studies that have focused on the general population or disease cohorts. Finally, we highlight sex-bias in the thresholds used for metabolic score determination and recommend a deeper examination of current guidelines. In sum, our clinical design, standardized sample collection strategies, and epidemiological data analyses have established the foundation for defining variability within human immune responses.
Bayesian hierarchical analysis of within-units variances in repeated measures experiments.
Ten Have, T R; Chinchilli, V M
1994-09-30
We develop hierarchical Bayesian models for biomedical data that consist of multiple measurements on each individual under each of several conditions. The focus is on investigating differences in within-subject variation between conditions. We present both population-level and individual-level comparisons. We extend the partial likelihood models of Chinchilli et al. with a unique Bayesian hierarchical framework for variance components and associated degrees of freedom. We use the Gibbs sampler to estimate posterior marginal distributions for the parameters of the Bayesian hierarchical models. The application involves a comparison of two cholesterol analysers each applied repeatedly to a sample of subjects. Both the partial likelihood and Bayesian approaches yield similar results, although confidence limits tend to be wider under the Bayesian models.
Fidelity between Gaussian mixed states with quantum state quadrature variances
NASA Astrophysics Data System (ADS)
Hai-Long, Zhang; Chun, Zhou; Jian-Hong, Shi; Wan-Su, Bao
2016-04-01
In this paper, starting from the original definition of fidelity for pure states, we first give a well-defined expression for the fidelity between two Gaussian mixed states. It is related to the variances of the output and input states in quantum information processing. It is convenient for quantifying quantum teleportation (quantum cloning) experiments, since the variances of the input (output) state are measurable. Furthermore, we show that the fidelity of a pure input state is smaller than the fidelity of a mixed input state in the same quantum information processing. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002) and the Foundation of Science and Technology on Information Assurance Laboratory (Grant No. KJ-14-001).
Variable variance Preisach model for multilayers with perpendicular magnetic anisotropy
NASA Astrophysics Data System (ADS)
Franco, A. F.; Gonzalez-Fuentes, C.; Morales, R.; Ross, C. A.; Dumas, R.; Åkerman, J.; Garcia, C.
2016-08-01
We present a variable variance Preisach model that fully accounts for the different magnetization processes of a multilayer structure with perpendicular magnetic anisotropy by adjusting the evolution of the interaction variance as the magnetization changes. We successfully compare, in a quantitative manner, the results obtained with this model to experimental hysteresis loops of several [CoFeB/Pd]_n multilayers. The effect of the number of repetitions and the thicknesses of the CoFeB and Pd layers on the magnetization reversal of the multilayer structure is studied, and it is found that many of the observed phenomena can be attributed to an increase of the magnetostatic interactions and a subsequent decrease of the size of the magnetic domains. Increasing the CoFeB thickness leads to the disappearance of the perpendicular anisotropy, and a minimum thickness of the Pd layer is necessary to achieve an out-of-plane magnetization.
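The role the interaction variance plays can be illustrated with a minimal classical Preisach sketch: an ensemble of bistable "hysterons" whose switching fields are spread by a variance parameter (a toy version only; the paper's model additionally lets that variance evolve with the magnetization, which this sketch does not):

```python
import random

random.seed(3)

# Each hysteron switches up at field alpha and down at beta (beta <= alpha).
# The spread of the interaction field u is the "variance" in question.
def make_hysterons(n=2000, mean_c=1.0, spread=0.3):
    hysterons = []
    for _ in range(n):
        c = abs(random.gauss(mean_c, spread))   # coercivity: half loop width
        u = random.gauss(0.0, spread)           # interaction field: loop offset
        hysterons.append([u + c, u - c, -1.0])  # [alpha, beta, state]
    return hysterons

def apply_field(hysterons, H):
    """Apply field H (arbitrary units), update states, return magnetization."""
    for h in hysterons:
        if H >= h[0]:
            h[2] = 1.0
        elif H <= h[1]:
            h[2] = -1.0
    return sum(h[2] for h in hysterons) / len(hysterons)

hys = make_hysterons()
branch_up = [apply_field(hys, x / 10) for x in range(-30, 31)]    # ascending
branch_down = [apply_field(hys, x / 10) for x in range(30, -31, -1)]  # descending
```

Sweeping the field up and then down traces an open hysteresis loop whose width and squareness are controlled by the spread parameters, which is the quantity the variable-variance model tunes against experiment.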
Variance reduction methods applied to deep-penetration problems
Cramer, S.N.
1984-01-01
All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course.
Compounding approach for univariate time series with nonstationary variances
NASA Astrophysics Data System (ADS)
Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich
2015-12-01
A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
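The windowing step described here, in which local Gaussian statistics carry a time-dependent variance whose average is what long-horizon sample statistics see, can be sketched on synthetic data (parameter choices are illustrative, not from the paper):

```python
import random
import statistics

random.seed(1)

# Synthetic nonstationary signal: Gaussian within each window, with a scale
# parameter that changes from window to window.
n_windows, win = 200, 50
series = []
for _ in range(n_windows):
    sigma = random.uniform(0.5, 2.0)
    series.extend(random.gauss(0.0, sigma) for _ in range(win))

# Step 1 of the compounding approach: estimate the local variance per window.
local_vars = [statistics.pvariance(series[i * win:(i + 1) * win])
              for i in range(n_windows)]

# The long-horizon sample variance averages over the local variances (plus a
# small between-window-mean term), which is exactly what compounding exploits.
long_var = statistics.pvariance(series)
mean_local_var = sum(local_vars) / len(local_vars)
```

In the paper, the empirical distribution of these local variances is then determined and used as the parameter distribution with which the local Gaussian is compounded.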
Analysis of variance in spectroscopic imaging data from human tissues.
Kwak, Jin Tae; Reddy, Rohith; Sinha, Saurabh; Bhargava, Rohit
2012-01-17
The analysis of cell types and disease using Fourier transform infrared (FT-IR) spectroscopic imaging is promising. The approach lacks an appreciation of the limits of performance for the technology, however, which limits both researcher efforts in improving the approach and acceptance by practitioners. One factor limiting performance is the variance in data arising from biological diversity, measurement noise, or other sources. Here we identify the sources of variation by first employing a high-throughput sampling platform of tissue microarrays (TMAs) to record a sufficiently large and diverse set of data. Next, a comprehensive set of analysis of variance (ANOVA) models is employed to analyze the data. Estimating the portions of explained variation, we quantify the primary sources of variation, find the most discriminating spectral metrics, and recognize the aspects of the technology to improve. The study provides a framework for the development of protocols for clinical translation and provides guidelines for designing statistically valid studies in the spectroscopic analysis of tissue.
Climate variance influence on the non-stationary plankton dynamics.
Molinero, Juan Carlos; Reygondeau, Gabriel; Bonnet, Delphine
2013-08-01
We examined plankton responses to climate variance by using high temporal resolution data from 1988 to 2007 in the Western English Channel. Climate variability modified both the magnitude and length of the seasonal signal of sea surface temperature, as well as the timing and depth of the thermocline. These changes permeated the pelagic system yielding conspicuous modifications in the phenology of autotroph communities and zooplankton. The climate variance envelope, thus far little considered in climate-plankton studies, is closely coupled with the non-stationary dynamics of plankton, and sheds light on impending ecological shifts and plankton structural changes. Our study calls for the integration of the non-stationary relationship between climate and plankton in prognostic models on the productivity of marine ecosystems.
A surface layer variance heat budget for ENSO
NASA Astrophysics Data System (ADS)
Boucharel, Julien; Timmermann, Axel; Santoso, Agus; England, Matthew H.; Jin, Fei-Fei; Balmaseda, Magdalena A.
2015-05-01
Characteristics of the El Niño-Southern Oscillation (ENSO), such as frequency, propagation, spatial extent, and amplitude, strongly depend on the climatological background state of the tropical Pacific. Multidecadal changes in the ocean mean state are hence likely to modulate ENSO properties. To better link background state variations with low-frequency amplitude changes of ENSO, we develop a diagnostic framework that determines locally the contributions of different physical feedback terms on the ocean surface temperature variance. Our analysis shows that multidecadal changes of ENSO variance originate from the delicate balance between the background-state-dependent positive thermocline feedback and the atmospheric damping of sea surface temperature anomalies. The role of higher-order processes and atmospheric and oceanic nonlinearities is also discussed. The diagnostic tool developed here can be easily applied to other tropical ocean areas and climate phenomena.
Methods for variance reduction in Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Bixler, Joel N.; Hokr, Brett H.; Winblad, Aidan; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, due to the probabilistic nature of these simulations, large numbers of photons are often required in order to generate relevant results. Here, we present methods for reducing the variance of the dose distribution in a computational volume. The dose distribution is computed by tracing a large number of rays and tracking the absorption and scattering of the rays within the discrete voxels that comprise the volume. Variance reduction is demonstrated using quasi-random sampling, interaction forcing for weakly scattering media, and dose smoothing via bilateral filtering. These methods, along with the corresponding performance enhancements, are detailed here.
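The gap between plain pseudo-random sampling and a quasi-random (stratified) scheme can be illustrated with a toy one-dimensional attenuation integral. This sketch is not the authors' code; the kernel, sample counts, and optical depth are invented for illustration.

```python
import math
import random

def plain_mc(f, n, rng):
    # ordinary pseudo-random sampling of [0, 1)
    return [f(rng.random()) for _ in range(n)]

def stratified_mc(f, n, rng):
    # one jittered sample per stratum: a simple quasi-random scheme
    return [f((i + rng.random()) / n) for i in range(n)]

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / (len(vals) - 1)

# toy "dose" kernel: Beer-Lambert attenuation over unit depth
mu = 2.0
f = lambda x: math.exp(-mu * x)

rng = random.Random(0)
n, reps = 200, 200
est_plain = [sum(plain_mc(f, n, rng)) / n for _ in range(reps)]
est_strat = [sum(stratified_mc(f, n, rng)) / n for _ in range(reps)]

# the stratified estimator of the mean dose is far less noisy
print(variance(est_strat) < variance(est_plain) / 10)
```

Interaction forcing and smoothing filters attack the variance from other directions and are not sketched here; stratification alone already shows how the estimator of the mean can be made much quieter without adding photons.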
Response variance in functional maps: neural Darwinism revisited.
Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei
2013-01-01
The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.
Diffusion and chaos from near AdS2 horizons
NASA Astrophysics Data System (ADS)
Blake, Mike; Donos, Aristomenis
2017-02-01
We calculate the thermal diffusivity D = κ/c_ρ and butterfly velocity v_B in holographic models that flow to AdS2 × R^d fixed points in the infrared. We show that both these quantities are governed by the same irrelevant deformation of AdS2 and hence establish a simple relationship between them. When this deformation corresponds to a universal dilaton mode of dimension Δ = 2, this relationship is always given by D = v_B^2/(2πT).
Analysis of Variance in the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
Deloach, Richard
2010-01-01
This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.
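As a concrete refresher on the mechanics the tutorial covers, a one-way fixed-effects ANOVA can be computed by hand from the between-group and within-group sums of squares. The three groups of repeat measurements below are invented (think of repeats at three wind-tunnel settings), not data from the paper.

```python
# hypothetical repeat measurements at three experimental settings
groups = [
    [4.1, 3.9, 4.3, 4.0],
    [4.8, 5.1, 4.9, 5.2],
    [3.6, 3.5, 3.8, 3.7],
]

grand = [x for g in groups for x in g]
grand_mean = sum(grand) / len(grand)

# between-group (treatment) and within-group (error) sums of squares
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

df_between = len(groups) - 1          # k - 1 treatment degrees of freedom
df_within = len(grand) - len(groups)  # N - k error degrees of freedom

# F ratio: treatment mean square over error mean square
F = (ss_between / df_between) / (ss_within / df_within)
print(round(F, 1))  # → 72.2
```

A large F here simply says the setting-to-setting differences dwarf the repeat-to-repeat scatter; the random-effects variant the paper also covers partitions the variance differently but starts from the same sums of squares.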
The Third-Difference Approach to Modified Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1995-01-01
This study gives strategies for estimating the modified Allan variance (mvar) and formulas for computing the equivalent degrees of freedom (edf) of the estimators. A third-difference formulation of mvar leads to a tractable formula for edf in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. First-degree rational-function approximations for edf are derived.
Evaluation of climate modeling factors impacting the variance of streamflow
NASA Astrophysics Data System (ADS)
Al Aamery, N.; Fox, J. F.; Snyder, M.
2016-11-01
The present contribution quantifies the relative importance of climate modeling factors and chosen response variables upon controlling the variance of streamflow forecasted with global climate model (GCM) projections, which has not been attempted in previous literature to our knowledge. We designed an experiment that varied climate modeling factors, including GCM type, project phase, emission scenario, downscaling method, and bias correction. The streamflow response variable was also varied and included forecasted streamflow and difference in forecast and hindcast streamflow predictions. GCM results and the Soil Water Assessment Tool (SWAT) were used to predict streamflow for a wet, temperate watershed in central Kentucky USA. After calibrating the streamflow model, 112 climate realizations were simulated within the streamflow model and then analyzed on a monthly basis using analysis of variance. Analysis of variance results indicate that the difference in forecast and hindcast streamflow predictions is a function of GCM type, climate model project phase, and downscaling approach. The prediction of forecasted streamflow is a function of GCM type, project phase, downscaling method, emission scenario, and bias correction method. The results indicate the relative importance of the five climate modeling factors when designing streamflow prediction ensembles and quantify the reduction in uncertainty associated with coupling the climate results with the hydrologic model when subtracting the hindcast simulations. Thereafter, analysis of streamflow prediction ensembles with different numbers of realizations shows that use of all available realizations is unnecessary for the study system, so long as the ensemble design is well balanced. After accounting for the factors controlling streamflow variance, results show that predicted average monthly change in streamflow tends to follow precipitation changes and result in a net increase in the average annual precipitation and
Stochastic variance models in discrete time with feedforward neural networks.
Andoh, Charles
2009-07-01
The study overcomes the estimation difficulty in stochastic variance models for discrete financial time series with feedforward neural networks. The volatility function is estimated semiparametrically. The model is used to estimate market risk, taking into account not only the time series of interest but extra information on the market. As an application, some stock prices series are studied and compared with the nonlinear ARX-ARCHX model.
Modality-Driven Classification and Visualization of Ensemble Variance
Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.
2016-10-01
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that is paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
Selection and genetic (co)variance in bighorn sheep.
Coltman, David W; O'Donoghue, Paul; Hogg, John T; Festa-Bianchet, Marco
2005-06-01
Genetic theory predicts that directional selection should deplete additive genetic variance for traits closely related to fitness, and may favor the maintenance of alleles with antagonistically pleiotropic effects on fitness-related traits. Trait heritability is therefore expected to decline with the degree of association with fitness, and some genetic correlations between selected traits are expected to be negative. Here we demonstrate a negative relationship between trait heritability and association with lifetime reproductive success in a wild population of bighorn sheep (Ovis canadensis) at Ram Mountain, Alberta, Canada. Lower heritability for fitness-related traits, however, was not wholly a consequence of declining genetic variance, because those traits showed high levels of residual variance. Genetic correlations estimated between pairs of traits with significant heritability were positive. Principal component analyses suggest that positive relationships between morphometric traits constitute the main axis of genetic variation. Trade-offs in the form of negative genetic or phenotypic correlations among the traits we have measured do not appear to constrain the potential for evolution in this population.
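The abstract's point that low heritability need not imply depleted genetic variance follows directly from the definition h² = V_A/(V_A + V_R): inflated residual variance alone drags h² down. The variance components below are hypothetical, not estimates from the Ram Mountain data.

```python
def heritability(va, vr):
    # narrow-sense heritability: additive genetic variance over
    # total phenotypic variance (additive + residual)
    return va / (va + vr)

# hypothetical components: both traits carry the same additive
# variance, but the fitness-related trait has much more residual
# variance, which alone halves its heritability
morphometric = heritability(va=4.0, vr=6.0)
fitness_related = heritability(va=4.0, vr=16.0)
print(morphometric, fitness_related)  # → 0.4 0.2
```

This is exactly the pattern the study reports: the heritability gradient across traits reflects residual variance at least as much as any erosion of additive variance.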
Relationship between Allan variances and Kalman Filter parameters
NASA Technical Reports Server (NTRS)
Vandierendonck, A. J.; Mcgraw, J. B.; Brown, R. G.
1984-01-01
A relationship was constructed between the Allan variance parameters (h_2, h_1, h_0, h_-1, and h_-2) and a Kalman filter model that would be used to estimate and predict clock phase, frequency, and frequency drift. To begin, the meaning of these Allan variance parameters, and how they are obtained for a given frequency source, is reviewed. Although a subset of these parameters is arrived at by measuring phase as a function of time rather than as a spectral density, they all represent phase noise spectral density coefficients, though not necessarily those of a rational spectral density. The phase noise spectral density is then transformed into a time-domain covariance model, which can then be used to derive the Kalman filter model parameters. Simulation results of that covariance model are presented and compared to clock uncertainties predicted by the Allan variance parameters. A two-state Kalman filter model is then derived and the significance of each state is explained.
Reduced Variance for Material Sources in Implicit Monte Carlo
Urbatsch, Todd J.
2012-06-25
Implicit Monte Carlo (IMC), a time-implicit method due to Fleck and Cummings, is used for simulating supernovae and inertial confinement fusion (ICF) systems where x-rays tightly and nonlinearly interact with hot material. The IMC algorithm represents absorption and emission within a timestep as an effective scatter. Similarly, the IMC time-implicitness splits off a portion of a material source directly into the radiation field. We have found that some of our variance reduction and particle management schemes will allow large variances in the presence of small, but important, material sources, as in the case of ICF hot electron preheat sources. We propose a modification of our implementation of the IMC method in the Jayenne IMC Project. Instead of battling the sampling issues associated with a small source, we bypass the IMC implicitness altogether and simply deterministically update the material state with the material source if the temperature of the spatial cell is below a user-specified cutoff. We describe the modified method and present results on a test problem that show the elimination of variance for small sources.
Sample variance of non-Gaussian sky distributions
NASA Astrophysics Data System (ADS)
Luo, Xiaochun
1995-02-01
Non-Gaussian distributions of cosmic microwave background (CMB) anisotropies have been proposed to reconcile the discrepancies between different experiments at half-degree scales (Coulson et al. 1994). Each experiment probes a different part of the sky; furthermore, sky coverage is very small, hence the sample variance of each experiment can be large, especially when the sky signal is non-Gaussian. We model the degree-scale CMB sky as a χ² field with n degrees of freedom and show that the sample variance is enhanced over that of a Gaussian distribution by a factor of (n + 6)/n. The sample variance for different experiments is calculated, both for Gaussian and non-Gaussian distributions. We also show that if the distribution is highly non-Gaussian (n ≲ 4) at half-degree scales, then the non-Gaussian signature of the CMB could be detected in the FIRS map, though probably not in the Cosmic Background Explorer (COBE) map.
Cosmic variance of the galaxy cluster weak lensing signal
Gruen, D.; Seitz, S.; Becker, M. R.; ...
2015-04-13
Intrinsic variations of the projected density profiles of clusters of galaxies at fixed mass are a source of uncertainty for cluster weak lensing. We present a semi-analytical model to account for this effect, based on a combination of variations in halo concentration, ellipticity and orientation, and the presence of correlated haloes. We calibrate the parameters of our model at the 10 per cent level to match the empirical cosmic variance of cluster profiles at M200m ≈ 10^14-10^15 h^-1 M⊙, z = 0.25-0.5 in a cosmological simulation. We show that weak lensing measurements of clusters significantly underestimate mass uncertainties if intrinsic profile variations are ignored, and that our model can be used to provide correct mass likelihoods. Effects on the achievable accuracy of weak lensing cluster mass measurements are particularly strong for the most massive clusters and deep observations (with ≈20 per cent uncertainty from cosmic variance alone at M200m ≈ 10^15 h^-1 M⊙ and z = 0.25), but significant also under typical ground-based conditions. We show that neglecting intrinsic profile variations leads to biases in the mass-observable relation constrained with weak lensing, both for intrinsic scatter and overall scale (the latter at the 15 per cent level). Furthermore, these biases are in excess of the statistical errors of upcoming surveys and can be avoided if the cosmic variance of cluster profiles is accounted for.
Analysis of micrometeorological data using a two sample variance
NASA Astrophysics Data System (ADS)
Werle, Peter; Falge, Eva
2010-05-01
In ecosystem research infrared gas analyzers are increasingly used to measure fluxes of carbon dioxide, water vapour, methane, nitrous oxide and even stable carbon isotopes. As these complex measurement devices under field conditions cannot be considered absolutely stable, drift characterisation is essential to distinguish atmospheric data from sensor drift. In this paper the concept of the two sample variance is utilized, in analogy to previous stability investigations, to characterize the stationarity of both spectroscopic measurements of concentration time series and micrometeorological data in the time domain, which is a prerequisite for covariance calculations. As an example, the method is applied to assess the time constant for detrending of time series data and the optimum trace gas flux integration time. The method described here provides information similar to existing characterizations such as the ogive analysis, the normalized error variance of the second order moment, and the spectral characteristics of turbulence in the inertial subrange. The method is easy to implement and therefore well suited to serve as a useful tool for a routine data quality check for both new practitioners and experts in the field. Werle, P., Time domain characterization of micrometeorological data based on a two sample variance. Agric. Forest Meteorol. (2010), doi:10.1016/j.agrformet.2009.12.007
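The two-sample (Allan) variance this approach builds on is straightforward to compute. The sketch below, with a made-up white-noise series plus a linear sensor drift, shows how drift dominates at long averaging times, which is what fixes the detrending time constant and the optimal integration time; the series and drift rate are illustrative, not analyzer data.

```python
import random

def two_sample_variance(y, m):
    # Allan two-sample variance at averaging length m:
    # 0.5 * mean of squared differences of adjacent block averages
    means = [sum(y[i:i + m]) / m for i in range(0, len(y) - m + 1, m)]
    diffs = [(b - a) ** 2 for a, b in zip(means, means[1:])]
    return 0.5 * sum(diffs) / len(diffs)

rng = random.Random(1)
n = 20000
white = [rng.gauss(0.0, 1.0) for _ in range(n)]
# the same noise riding on a slow linear drift, as from an
# imperfectly stable analyzer (drift rate chosen for illustration)
drifty = [w + 5e-4 * i for i, w in enumerate(white)]

for m in (1, 10, 100, 1000):
    print(m, two_sample_variance(white, m), two_sample_variance(drifty, m))
```

For pure white noise the two-sample variance falls off as 1/m, while a linear drift contributes a term growing as m²; the averaging length at the minimum of the drifty curve marks where averaging stops helping and detrending should take over.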
Dynamic Programming Using Polar Variance for Image Segmentation.
Rosado-Toro, Jose A; Altbach, Maria I; Rodriguez, Jeffrey J
2016-10-06
When using polar dynamic programming (PDP) for image segmentation, the object size is one of the main features used. This is because, if size is left unconstrained, the final segmentation may include high-gradient regions that are not associated with the object. In this paper, we propose a new feature, polar variance, which allows the algorithm to segment objects of different sizes without the need for training data. The polar variance is the variance in a polar region between a user-selected origin and a pixel we want to analyze. We also incorporate a new technique that allows PDP to segment complex shapes by finding low-gradient regions and growing them. The experimental analysis consisted of comparing our technique with different active contour segmentation techniques in a series of tests. The tests assessed robustness to additive Gaussian noise, segmentation accuracy on different grayscale images, and robustness to algorithm-specific parameters. Experimental results show that our technique performs favorably when compared to other segmentation techniques.
42 CFR 488.64 - Remote facility variances for utilization review requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... applicable. (c) The request for variance shall document the requesting facility's inability to meet the... the previous six months; (4) As relevant to the request, the names of all physicians on the active... variance. (h) The Secretary, in granting a variance, will specify the period for which the variance...
CYP1B1: a unique gene with unique characteristics.
Faiq, Muneeb A; Dada, Rima; Sharma, Reetika; Saluja, Daman; Dada, Tanuj
2014-01-01
CYP1B1, a recently described dioxin-inducible oxidoreductase, is a member of the cytochrome P450 superfamily involved in the metabolism of estradiol, retinol, benzo[a]pyrene, tamoxifen, melatonin, sterols, etc. It plays important roles in numerous physiological processes and is expressed at the mRNA level in many tissues and anatomical compartments. CYP1B1 has been implicated in scores of disorders. Analyses of recent studies suggest that CYP1B1 can serve as a universal/ideal cancer marker and a candidate gene for predictive diagnosis. There is a plethora of literature available on certain aspects of CYP1B1 that have not been interpreted, discussed, and philosophized upon. The present analysis examines CYP1B1 as a peculiar gene with certain distinctive characteristics: the uniqueness of its chromosomal location, gene structure, and organization; involvement in developmentally important disorders; tissue-specific expression and even tissue-specific splicing; potential as a universal cancer marker owing to its involvement in key aspects of cellular metabolism; use in the diagnosis and predictive diagnosis of various diseases; and the importance and function of CYP1B1 mRNA beyond its regular translation. CYP1B1 is also very difficult to express in heterologous expression systems, which has hampered its functional study. Here we review and analyze these exceptional and startling characteristics of CYP1B1, with inputs from our own experience, in order to gain better insight into its molecular biology in health and disease. This may help to further understand the etiopathomechanistic aspects of CYP1B1-mediated diseases, paving the way for better research strategies and improved clinical management.
NASA Technical Reports Server (NTRS)
Stothers, R. B.
1984-01-01
The possible cause of the densest and most persistent dry fog on record, which was observed in Europe and the Middle East during AD 536 and 537, is discussed. The fog's long duration toward the south and the high sulfuric acid signal detected in Greenland in ice cores dated around AD 540 support the theory that the fog was due to the explosion of the Rabaul volcano, the occurrence of which has been dated at about AD 540 by the radiocarbon method.
Coset construction of AdS particle dynamics
NASA Astrophysics Data System (ADS)
Heinze, Martin; Jorjadze, George; Megrelidze, Luka
2017-01-01
We analyze the dynamics of the AdS_{N+1} particle realized on the coset SO(2,N)/SO(1,N). Hamiltonian reduction provides the physical phase space in terms of the coadjoint orbit obtained by boosting a timelike element of 𝔰𝔬(2,N). We show equivalence of this approach to geometric quantization and to the SO(N) covariant oscillator description, for which the boost generators entail a complicated operator ordering. As an alternative scheme, we introduce dual oscillator variables and derive their algebra at the classical and the quantum levels. This simplifies the calculation of the commutators for the boost generators and leads to unitary irreducible representations of 𝔰𝔬(2,N) for all admissible values of the mass parameter. We furthermore discuss an SO(N) covariant supersymmetric extension of the oscillator quantization, with its realization for superparticles in AdS2 and AdS3 given by recent works.
Entanglement temperature and perturbed AdS3 geometry
NASA Astrophysics Data System (ADS)
Levine, G. C.; Caravan, B.
2016-06-01
Generalizing the first law of thermodynamics, the increase in entropy density δS(x) of a conformal field theory (CFT) is proportional to the increase in energy density, δE(x), of a subsystem divided by a spatially dependent entanglement temperature, T_E(x), a fixed parameter determined by the geometry of the subsystem, crossing over to the thermodynamic temperature at high temperatures. In this paper we derive a generalization of the thermodynamic Clausius relation, showing that deformations of the CFT by marginal operators are associated with spatial temperature variations, δT_E(x), and spatial energy correlations play the role of specific heat. Using AdS/CFT duality we develop a relationship between a perturbation in the local entanglement temperature of the CFT and the perturbation of the bulk AdS metric. In two dimensions, we demonstrate a method through which direct diagonalizations of the boundary quantum theory may be used to construct geometric perturbations of AdS3.
AdS5 backgrounds with 24 supersymmetries
NASA Astrophysics Data System (ADS)
Beck, S.; Gutowski, J.; Papadopoulos, G.
2016-06-01
We prove a non-existence theorem for smooth AdS5 solutions with connected, compact without boundary internal space that preserve strictly 24 supersymmetries. In particular, we show that D = 11 supergravity does not admit such solutions, and that all such solutions of IIB supergravity are locally isometric to the AdS5 × S5 maximally supersymmetric background. Furthermore, we prove that (massive) IIA supergravity also does not admit such solutions, provided that the homogeneity conjecture for massive IIA supergravity is valid. In the context of AdS/CFT these results imply that if gravitational duals for strictly 𝒩 = 3 superconformal theories in 4 dimensions exist, they are either singular or their internal spaces are not compact.
Loberg, A; Dürr, J W; Fikse, W F; Jorjani, H; Crooks, L
2015-10-01
The amount of variance captured in genetic estimations may depend on whether a pedigree-based or genomic relationship matrix is used. The purpose of this study was to investigate the genetic variance as well as the variance of predicted genetic merits (PGM) using pedigree-based or genomic relationship matrices in Brown Swiss cattle. We examined a range of traits in six populations amounting to 173 population-trait combinations. A main aim was to determine how using different relationship matrices affect variance estimation. We calculated ratios between different types of estimates and analysed the impact of trait heritability and population size. The genetic variances estimated by REML using a genomic relationship matrix were always smaller than the variances that were similarly estimated using a pedigree-based relationship matrix. The variances from the genomic relationship matrix became closer to estimates from a pedigree relationship matrix as heritability and population size increased. In contrast, variances of predicted genetic merits obtained using a genomic relationship matrix were mostly larger than variances of genetic merit predicted using pedigree-based relationship matrix. The ratio of the genomic to pedigree-based PGM variances decreased as heritability and population size rose. The increased variance among predicted genetic merits is important for animal breeding because this is one of the factors influencing genetic progress.
The Misattribution of Summers in Teacher Value-Added
ERIC Educational Resources Information Center
Atteberry, Allison
2012-01-01
This paper investigates the extent to which spring-to-spring testing timelines bias teacher value-added as a result of conflating summer and school-year learning. Using a unique dataset that contains both fall and spring standardized test scores, the author examines the patterns in school-year versus summer learning. She estimates value-added…
Teacher Effects, Value-Added Models, and Accountability
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2014-01-01
Background: In the last decade, the effects of teachers on student performance (typically manifested as state-wide standardized tests) have been re-examined using statistical models that are known as value-added models. These statistical models aim to compute the unique contribution of the teachers in promoting student achievement gains from grade…
Symbols are not uniquely human.
Ribeiro, Sidarta; Loula, Angelo; de Araújo, Ivan; Gudwin, Ricardo; Queiroz, João
2007-01-01
Modern semiotics is a branch of logics that formally defines symbol-based communication. In recent years, the semiotic classification of signs has been invoked to support the notion that symbols are uniquely human. Here we show that alarm-calls such as those used by African vervet monkeys (Cercopithecus aethiops) logically satisfy the semiotic definition of symbol. We also show that the acquisition of vocal symbols in vervet monkeys can be successfully simulated by a computer program based on minimal semiotic and neurobiological constraints. The simulations indicate that learning depends on the tutor-predator ratio, and that apprentice-generated auditory mistakes in vocal symbol interpretation have little effect on the learning rates of apprentices (up to 80% of mistakes are tolerated). In contrast, just 10% of apprentice-generated visual mistakes in predator identification will prevent any vocal symbol from being correctly associated with a predator call in a stable manner. Tutor unreliability was also deleterious to vocal symbol learning: a mere 5% of "lying" tutors were able to completely disrupt symbol learning, invariably leading to the acquisition of incorrect associations by apprentices. Our investigation corroborates the existence of vocal symbols in a non-human species, and indicates that symbolic competence emerges spontaneously from classical associative learning mechanisms when the conditioned stimuli are self-generated, arbitrary and socially efficacious. We propose that more exclusive properties of human language, such as syntax, may derive from the evolution of higher-order domains for neural association, more removed from both the sensory input and the motor output, able to support the gradual complexification of grammatical categories into syntax.
White matter morphometric changes uniquely predict children's reading acquisition.
Myers, Chelsea A; Vandermosten, Maaike; Farris, Emily A; Hancock, Roeland; Gimenez, Paul; Black, Jessica M; Casto, Brandi; Drahos, Miroslav; Tumber, Mandeep; Hendren, Robert L; Hulme, Charles; Hoeft, Fumiko
2014-10-01
This study examined whether variations in brain development between kindergarten and Grade 3 predicted individual differences in reading ability at Grade 3. Structural MRI measurements indicated that increases in the volume of two left temporo-parietal white matter clusters are unique predictors of reading outcomes above and beyond family history, socioeconomic status, and cognitive and preliteracy measures at baseline. Using diffusion MRI, we identified the left arcuate fasciculus and superior corona radiata as key fibers within the two clusters. Bias-free regression analyses using regions of interest from prior literature revealed that volume changes in temporo-parietal white matter, together with preliteracy measures, predicted 56% of the variance in reading outcomes. Our findings demonstrate the important contribution of developmental differences in areas of left dorsal white matter, often implicated in phonological processing, as a sensitive early biomarker for later reading abilities, and by extension, reading difficulties.
Unique contributions of metacognition and cognition to depressive symptoms.
Yilmaz, Adviye Esin; Gençöz, Tülin; Wells, Adrian
2015-01-01
This study attempts to examine the unique contributions of "cognitions" or "metacognitions" to depressive symptoms while controlling for their intercorrelations and comorbid anxiety. Two-hundred-and-fifty-one university students participated in the study. Two complementary hierarchical multiple regression analyses were performed, in which symptoms of depression were regressed on the dysfunctional attitudes (DAS-24 subscales) and metacognition scales (Negative Beliefs about Rumination Scale [NBRS] and Positive Beliefs about Rumination Scale [PBRS]). Results showed that both NBRS and PBRS individually explained a significant amount of variance in depressive symptoms above and beyond dysfunctional schemata while controlling for anxiety. Although dysfunctional attitudes as a set significantly predicted depressive symptoms after anxiety and metacognitions were controlled for, they were weaker than metacognitive variables and none of the DAS-24 subscales contributed individually. Metacognitive beliefs about ruminations appeared to contribute more to depressive symptoms than dysfunctional beliefs in the "cognitive" domain.
Lorentzian AdS geometries, wormholes, and holography
Arias, Raul E.; Silva, Guillermo A.; Botta Cantcheff, Marcelo
2011-03-15
We investigate the structure of two-point functions for the quantum field theory dual to an asymptotically Lorentzian anti-de Sitter (AdS) wormhole. The bulk geometry is a solution of five-dimensional second-order Einstein-Gauss-Bonnet gravity and causally connects two asymptotically AdS spacetimes. We revisit the Gubser-Klebanov-Polyakov-Witten prescription for computing two-point correlation functions for dual quantum field theory operators O in Lorentzian signature, and we propose to express the bulk fields in terms of the independent boundary values φ_0^± at each of the two asymptotic AdS regions; along the way we exhibit how the ambiguity of normalizable modes in the bulk, related to initial and final states, shows up in the computations. The independent boundary values are interpreted as sources for dual operators O^±, and we argue that, apart from the possibility of entanglement, there exists a coupling between the degrees of freedom living at each boundary. The AdS_{1+1} geometry is also discussed in view of its similar boundary structure. Based on the analysis, we propose a very simple geometric criterion to distinguish coupling from entanglement effects among two sets of degrees of freedom associated with each of the disconnected parts of the boundary.
The Effect of Summer on Value-Added Assessments of Teacher and School Performance
ERIC Educational Resources Information Center
Palardy, Gregory J.; Peng, Luyao
2015-01-01
This study examines the effects of including the summer period on value-added assessments (VAA) of teacher and school performance at the early grades. The results indicate that 40-62% of the variance in VAA estimates originates from the summer period, depending on the outcome (i.e., reading or math achievement gains). Furthermore, when summer is…
Ultrasonic beam fluctuation and flaw signal variance in inhomogeneous media
NASA Astrophysics Data System (ADS)
Ahmed, S.; Roberts, R.; Margetan, F.
2000-05-01
This paper examines the effect of forward scattering on ultrasonic beam propagation and flaw signal amplitude in inhomogeneous material microstructures. A beam propagating through a weakly-scattering, randomly inhomogeneous medium will display random fluctuations in amplitude and phase, attributable to forward scattering. Correspondingly, the signal received from a given flaw at a given position in the beam volume will fluctuate as the beam and flaw are simultaneously scanned throughout the volume of an inhomogeneous host medium. These effects have been prominently observed in the inspection of titanium. For example, maps of beam amplitude profiles after transmission through titanium reveal severe distortion of beam amplitude and phase. Similarly, signals from "identical" flat bottom holes (FBH) at equal depths but different lateral positions in titanium display a random variation in amplitude. Interestingly, it has been noted that this FBH signal variance varies inversely with the beam diameter; that is, signal variance normalized to the mean signal amplitude is a minimum when the flaw is in the focal zone of a focused beam. As this observation has great significance for the inspection of titanium, a model prediction of this phenomenon is being sought. In the work reported here, beam propagation is formulated as a volumetric integral equation employing the Green function for the homogeneous spatial mean of the medium. The integral equation is solved using iterative methods. Preliminary work considering scalar two-dimensional propagation in inhomogeneous media has predicted a flaw signal variance that displays an inverse relation to beam diameter, thus reproducing the qualitative behavior seen in experimental data in titanium. Current work is extending the preliminary two-dimensional scalar result to three-dimensional elasticity, representing propagation in an actual titanium microstructure. Progress on this effort will be reported.
Minimum Variance Approaches to Ultrasound Pixel-Based Beamforming.
Nguyen, Nghia Q; Prager, Richard W
2017-02-01
We analyze the principles underlying minimum variance distortionless response (MVDR) beamforming in order to integrate it into a pixel-based algorithm. There is a challenge posed by the low echo signal-to-noise ratio (eSNR) when calculating beamformer contributions at pixels far away from the beam centreline. Together with the well-known scarcity of samples for covariance matrix estimation, this reduces the beamformer performance and degrades the image quality. To address this challenge, we implement the MVDR algorithm in two different ways. First, we develop the conventional minimum variance pixel-based (MVPB) beamformer that performs the MVDR after the pixel-based superposition step. This involves a combination of methods in the literature, extended over multiple transmits to increase the eSNR. Then we propose the coherent MVPB beamformer, where the MVDR is applied to data within individual transmits. Based on pressure field analysis, we develop new algorithms to improve the data alignment and matrix estimation, and hence overcome the low-eSNR issue. The methods are demonstrated on data acquired with an ultrasound open platform. The results show the coherent MVPB beamformer substantially outperforms the conventional MVPB in a series of experiments, including phantom and in vivo studies. Compared to the unified pixel-based beamformer, the newest delay-and-sum algorithm in [1], the coherent MVPB performs well on regions that conform to the diffuse scattering assumptions on which the minimum variance principles are based. It produces poorer results for parts of the image that are dominated by specular reflections.
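The MVDR weight computation at the core of both beamformer variants can be sketched numerically. The channel data below are hypothetical, the all-ones steering vector stands in for delay-aligned data, and the diagonal loading is a generic stand-in for the covariance-estimation fixes the paper develops:

```python
import numpy as np

# MVDR weights: w = R^{-1} a / (a^H R^{-1} a), minimizing output power
# subject to unit (distortionless) gain in the steering direction a.
rng = np.random.default_rng(0)
n_ch = 8
snapshots = rng.normal(size=(n_ch, 64))        # hypothetical channel data
R = snapshots @ snapshots.T / 64               # sample covariance matrix
R += 1e-2 * np.trace(R) / n_ch * np.eye(n_ch)  # diagonal loading (few samples)
a = np.ones(n_ch)                              # steering vector after delay alignment

Rinv_a = np.linalg.solve(R, a)
w = Rinv_a / (a @ Rinv_a)
```

The distortionless constraint w·a = 1 holds by construction, which is what distinguishes MVDR from a plain minimum-power solution.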
FMRI group analysis combining effect estimates and their variances
Chen, Gang; Saad, Ziad S.; Nath, Audrey R.; Beauchamp, Michael S.; Cox, Robert W.
2012-01-01
Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an approach that does not make such strong assumptions, and present a computationally efficient frequentist approach to FMRI group analysis, which we term mixed-effects multilevel analysis (MEMA), that incorporates both the variability across subjects and the precision estimate of each effect of interest from individual subject analyses. On average, the more accurate tests result in higher statistical power, especially when conventional variance assumptions do not hold, or in the presence of outliers. In addition, various heterogeneity measures are available with MEMA that may assist the investigator in further improving the modeling. Our method allows group effect t-tests and comparisons among conditions and among groups. In addition, it has the capability to incorporate subject-specific covariates such as age, IQ, or behavioral data. Simulations were performed to illustrate power comparisons and the capability of controlling type I errors among various significance testing methods, and the results indicated that the testing statistic we adopted struck a good balance between power gain and type I error control. Our approach is instantiated in an open-source, freely distributed program that may be used on any dataset stored in the Neuroimaging Informatics Technology Initiative (NIfTI) format. To date, the main impediment for more accurate testing that incorporates both within- and cross-subject variability has been the high computational cost. Our efficient implementation makes this approach practical.
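In the limit where cross-subject (random-effects) variance is zero, the precision weighting that MEMA generalizes reduces to a plain inverse-variance average. A sketch of that baseline, with hypothetical per-subject effect estimates and within-subject variances (not values from the paper):

```python
import numpy as np

# Per-subject effect estimates (beta) and their within-subject variances,
# as produced by individual-level analyses; numbers are hypothetical.
beta = np.array([0.8, 1.2, 0.9, 1.5, 1.1])
var_within = np.array([0.10, 0.30, 0.15, 0.50, 0.20])

# Inverse-variance weighting: precisely measured subjects contribute more
# to the group estimate than they would in a plain average.
w = 1.0 / var_within
group_effect = np.sum(w * beta) / np.sum(w)
group_var = 1.0 / np.sum(w)          # variance of the group estimate
z = group_effect / np.sqrt(group_var)
```

MEMA additionally estimates a nonzero cross-subject variance term that is added to each var_within before weighting; that estimation step is the computationally expensive part the abstract refers to.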
An Empirical Temperature Variance Source Model in Heated Jets
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Bridges, James
2012-01-01
An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.2.
Regression between earthquake magnitudes having errors with known variances
NASA Astrophysics Data System (ADS)
Pujol, Jose
2016-07-01
Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = a x + b. This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for the x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a discussed in the literature but not proved, or proved for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65 % of them. For the remaining 35 %, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
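For the homoscedastic case this abstract discusses, with a known ratio δ of error variances, the best-fit slope has the classic closed form of the Deming estimator. A sketch with hypothetical magnitude pairs (not data from the paper):

```python
import math

# Observed magnitude pairs (X, Y); hypothetical values for illustration.
X = [4.1, 4.5, 5.0, 5.4, 5.9, 6.3]
Y = [4.3, 4.6, 5.2, 5.5, 6.1, 6.4]
delta = 1.0  # assumed known ratio of the Y-error to X-error variances

n = len(X)
mx = sum(X) / n
my = sum(Y) / n
sxx = sum((x - mx) ** 2 for x in X) / n
syy = sum((y - my) ** 2 for y in Y) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(X, Y)) / n

# Closed-form slope when both variables carry measurement error;
# ordinary least squares would instead attribute all error to Y.
a = (syy - delta * sxx
     + math.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
b = my - a * mx
```

The fitted line passes through the centroid (mx, my), and for delta = 1 the estimator reduces to orthogonal regression; the paper's contribution is a least-squares derivation that also yields residual sums of squares and estimates of the theoretical x and y.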
Cosmic variance of the galaxy cluster weak lensing signal
Gruen, D.; Seitz, S.; Becker, M. R.; Friedrich, O.; Mana, A.
2015-04-13
Intrinsic variations of the projected density profiles of clusters of galaxies at fixed mass are a source of uncertainty for cluster weak lensing. We present a semi-analytical model to account for this effect, based on a combination of variations in halo concentration, ellipticity and orientation, and the presence of correlated haloes. We calibrate the parameters of our model at the 10 per cent level to match the empirical cosmic variance of cluster profiles at M_{200m} ≈ 10^{14}…10^{15}h^{–1}M_{⊙}, z = 0.25…0.5 in a cosmological simulation. We show that weak lensing measurements of clusters significantly underestimate mass uncertainties if intrinsic profile variations are ignored, and that our model can be used to provide correct mass likelihoods. Effects on the achievable accuracy of weak lensing cluster mass measurements are particularly strong for the most massive clusters and deep observations (with ≈20 per cent uncertainty from cosmic variance alone at M_{200m} ≈ 10^{15}h^{–1}M_{⊙} and z = 0.25), but significant also under typical ground-based conditions. We show that neglecting intrinsic profile variations leads to biases in the mass-observable relation constrained with weak lensing, both for intrinsic scatter and overall scale (the latter at the 15 per cent level). Furthermore, these biases are in excess of the statistical errors of upcoming surveys and can be avoided if the cosmic variance of cluster profiles is accounted for.
Fringe biasing: A variance reduction technique for optically thick meshes
Smedley-Stevenson, R. P.
2013-07-01
Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)
Monte Carlo calculation of specific absorbed fractions: variance reduction techniques
NASA Astrophysics Data System (ADS)
Díaz-Londoño, G.; García-Pareja, S.; Salvat, F.; Lallena, A. M.
2015-04-01
The purpose of the present work is to calculate specific absorbed fractions using variance reduction techniques and assess the effectiveness of these techniques in improving the efficiency (i.e. reducing the statistical uncertainties) of simulation results in cases where the distance between the source and the target organs is large and/or the target organ is small. The variance reduction techniques of interaction forcing and an ant colony algorithm, which drives the application of splitting and Russian roulette, were applied in Monte Carlo calculations performed with the code PENELOPE for photons with energies from 30 keV to 2 MeV. In the simulations we used a mathematical phantom derived from the well-known MIRD-type adult phantom. The thyroid gland was assumed to be the source organ and urinary bladder, testicles, uterus and ovaries were considered as target organs. Simulations were performed, for each target organ and for photons with different energies, using these variance reduction techniques, all run on the same processor and during a CPU time of 1.5 × 10^5 s. For energies above 100 keV both interaction forcing and the ant colony method allowed reaching relative uncertainties of the average absorbed dose in the target organs below 4% in all studied cases. When these two techniques were used together, the uncertainty was further reduced, by a factor of 0.5 or less. For photons with energies below 100 keV, an adapted initialization of the ant colony algorithm was required. By using interaction forcing and the ant colony algorithm, realistic values of the specific absorbed fractions can be obtained with relative uncertainties small enough to permit discriminating among simulations performed with different Monte Carlo codes and phantoms. The methodology described in the present work can be employed to calculate specific absorbed fractions for arbitrary arrangements, i.e. energy spectrum of primary radiation, phantom model and source and target organs.
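Of the techniques named in this abstract, Russian roulette is the easiest to sketch. The defining property any such weight game must preserve is an unbiased expected weight; the survival probability and weight below are illustrative values, not PENELOPE's defaults:

```python
import random

random.seed(1)

def russian_roulette(weight, survival_prob=0.5):
    """Terminate low-weight particles probabilistically, boosting the
    weight of survivors so the expected weight (and hence any tally
    mean) is unchanged."""
    if random.random() < survival_prob:
        return weight / survival_prob  # survivor carries extra weight
    return 0.0                         # particle terminated

# Unbiasedness check: the average post-roulette weight recovers the
# original weight, while most particles no longer need to be tracked.
w0 = 0.2
trials = 200_000
mean_w = sum(russian_roulette(w0) for _ in range(trials)) / trials
```

Splitting is the mirror image (one heavy particle becomes several light ones), and the paper's ant colony algorithm decides where in the phantom each game is worth playing.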
Multi-observable Uncertainty Relations in Product Form of Variances
Qin, Hui-Hui; Fei, Shao-Ming; Li-Jost, Xianqing
2016-01-01
We investigate the product form uncertainty relations of variances for n (n ≥ 3) quantum observables. In particular, tight uncertainty relations satisfied by three observables have been derived, which are shown to be better than the ones derived from the strengthened Heisenberg and the generalized Schrödinger uncertainty relations, and than some existing uncertainty relations for three spin-half operators. An uncertainty relation for an arbitrary number of observables is also derived. As an example, the uncertainty relation satisfied by the eight Gell-Mann matrices is presented. PMID:27498851
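The baseline these product-form relations strengthen is the two-observable Robertson bound, Var(A)Var(B) ≥ |⟨[A,B]⟩|²/4. A quick numeric check on a qubit, using a state that happens to saturate the bound for σx and σy:

```python
import numpy as np

# For Pauli matrices, [σx, σy] = 2iσz, so Robertson's bound reads
# Var(σx) Var(σy) >= <σz>^2.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

theta = np.pi / 8
psi = np.array([np.cos(theta), np.sin(theta)], dtype=complex)

def expval(op):
    return (psi.conj() @ op @ psi).real

def variance(op):
    return expval(op @ op) - expval(op) ** 2

lhs = variance(sx) * variance(sy)   # product of variances
rhs = expval(sz) ** 2               # Robertson lower bound
```

For this state lhs = rhs = 1/2, i.e. the two-observable bound is tight here; the paper's point is that for three or more observables the naive pairwise products are generally not tight, and sharper bounds exist.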
Improved Robustness through Population Variance in Ant Colony Optimization
NASA Astrophysics Data System (ADS)
Matthews, David C.; Sutton, Andrew M.; Hains, Doug; Whitley, L. Darrell
Ant Colony Optimization algorithms are population-based Stochastic Local Search algorithms that mimic the behavior of ants, simulating pheromone trails to search for solutions to combinatorial optimization problems. This paper introduces Population Variance, a novel approach to ACO algorithms that allows parameters to vary across the population over time, leading to solution construction differences that are not strictly stochastic. The increased exploration appears to help the search escape from local optima, significantly improving the robustness of the algorithm with respect to suboptimal parameter settings.
Simulation Study Using a New Type of Sample Variance
NASA Technical Reports Server (NTRS)
Howe, D. A.; Lainson, K. J.
1996-01-01
We evaluate with simulated data a new type of sample variance for the characterization of frequency stability. The new statistic (referred to as TOTALVAR and its square root TOTALDEV) is a better predictor of long-term frequency variations than the present sample Allan deviation. The statistical model uses the assumption that a time series of phase or frequency differences is wrapped (periodic) with overall frequency difference removed. We find that the variability at long averaging times is reduced considerably for the five models of power-law noise commonly encountered with frequency standards and oscillators.
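TOTALVAR modifies the classical Allan variance by wrapping the series periodically before differencing; the classical statistic it extends can be sketched in a few lines (non-overlapping estimator, pure Python):

```python
def allan_variance(y, tau=1):
    """Non-overlapping Allan variance of fractional-frequency data y
    at averaging factor tau (in samples): half the mean squared
    difference of successive tau-length block averages."""
    m = len(y) // tau
    blocks = [sum(y[i * tau:(i + 1) * tau]) / tau for i in range(m)]
    diffs = [(blocks[i + 1] - blocks[i]) ** 2 for i in range(m - 1)]
    return sum(diffs) / (2 * (m - 1))
```

At large tau only a handful of block differences survive, which is exactly the long-averaging-time variability problem the wrapped (TOTAL) statistic is designed to reduce.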
Sanz, E.; Voss, C.I.
2006-01-01
Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only concentration measurements.
NASA Astrophysics Data System (ADS)
Gauntlett, Jerome P.; Gutowski, Jan B.; Suryanarayana, Nemani V.
2004-11-01
We analyse a one-parameter family of supersymmetric solutions of type IIB supergravity that includes AdS5 × S5. For small values of the parameter the solutions are causally well behaved, but beyond a critical value closed timelike curves (CTCs) appear. The solutions are holographically dual to N = 4 supersymmetric Yang-Mills theory on a non-conformally flat background with non-vanishing R-currents. We compute the holographic energy momentum tensor for the spacetime and show that it remains finite even when the CTCs appear. The solutions, as well as the uplift of some recently discovered AdS5 black-hole solutions, are shown to preserve precisely two supersymmetries.
Supersymmetric AdS_6 solutions of type IIB supergravity
NASA Astrophysics Data System (ADS)
Kim, Hyojoong; Kim, Nakwoo; Suh, Minwoo
2015-10-01
We study the general requirement for supersymmetric AdS_6 solutions in type IIB supergravity. We employ the Killing spinor technique and study the differential and algebraic relations among various Killing spinor bilinears to find the canonical form of the solutions. Our result agrees precisely with the work of Apruzzi et al. (JHEP 1411:099, 2014), which used the pure spinor technique. Hoping to identify the geometry of the problem, we also computed the four-dimensional effective theory through the dimensional reduction of type IIB supergravity on AdS_6. This effective action is essentially a non-linear sigma model with five scalar fields parametrizing SL(3,R)/SO(2,1), modified by a scalar potential and coupled to Einstein gravity in Euclidean signature. We argue that the scalar potential can be explained by a subgroup CSO(1,1,1) ⊂ SL(3,R) in a way analogous to gauged supergravity.
Universal isolation in the AdS landscape
NASA Astrophysics Data System (ADS)
Danielsson, U. H.; Dibitetto, G.; Vargas, S. C.
2016-12-01
We study the universal conditions for quantum nonperturbative stability against bubble nucleation for perturbatively stable AdS vacua based on positive energy theorems. We also compare our analysis with the preexisting ones in the literature carried out within the thin-wall approximation. The aforementioned criterion is then tested in two explicit examples describing massive type IIA string theory compactified on S3 and S3 × S3, respectively. The AdS landscape of both classes of compactifications is known to consist of a set of isolated points. The main result is that all critical points respecting the Breitenlohner-Freedman (BF) bound also turn out to be stable at a nonperturbative level. Finally, we speculate on the possible universal features that may be extracted from the above specific examples.
Tachyon inflation in an AdS braneworld with backreaction
NASA Astrophysics Data System (ADS)
Bilić, Neven; Dimitrijevic, Dragoljub D.; Djordjevic, Goran S.; Milosevic, Milan
2017-02-01
We analyze the inflationary scenario based on the tachyon field coupled with the radion of the second Randall-Sundrum model (RSII). The tachyon Lagrangian is derived from the dynamics of a 3-brane moving in the five-dimensional bulk. The AdS5 geometry of the bulk is extended to include the radion. Using the Hamiltonian formalism we find four nonlinear field equations supplemented by the modified Friedmann equations of the RSII braneworld cosmology. After a suitable rescaling we reduce the parameters of our model to only one free parameter related to the brane tension and the AdS5 curvature. We solve the equations numerically assuming a reasonably wide range of initial conditions determined by physical considerations. Varying the free parameter and initial conditions we confront our results with the Planck 2015 data.
Ambitwistors, oscillators and massless fields on AdS5
NASA Astrophysics Data System (ADS)
Uvarov, D. V.
2016-11-01
Positive energy unitary irreducible representations of SU(2,2) can be constructed with the aid of bosonic oscillators in the (anti)fundamental representation of SU(2)_L × SU(2)_R that are closely related to Penrose twistors. Starting with the correspondence between the doubleton representations, homogeneous functions on projective twistor space and on-shell generalized Weyl curvature SL(2,C) spinors and their low-spin counterparts, we study in a similar way the correspondence between the massless representations, homogeneous functions on ambitwistor space and, via the Penrose transform, the gauge fields on the Minkowski boundary of AdS5. The possibilities of reconstructing massless fields on AdS5 and some applications are also discussed.
Generalised structures for N=1 AdS backgrounds
NASA Astrophysics Data System (ADS)
Coimbra, André; Strickland-Constable, Charles
2016-11-01
We expand upon a claim made in a recent paper [arXiv:1411.5721] that generic minimally supersymmetric AdS backgrounds of warped flux compactifications of Type II and M theory can be understood as satisfying a straightforward weak integrability condition in the language of E_{d(d)} × R^+ generalised geometry. Namely, they are spaces admitting a generalised G-structure set by the Killing spinor and with constant singlet generalised intrinsic torsion.
Hydraulic geometry of river cross sections; theory of minimum variance
Williams, Garnett P.
1978-01-01
This study deals with the rates at which mean velocity, mean depth, and water-surface width increase with water discharge at a cross section on an alluvial stream. Such relations often follow power laws, the exponents in which are called hydraulic exponents. The Langbein (1964) minimum-variance theory is examined in regard to its validity and its ability to predict observed hydraulic exponents. The variables used with the theory were velocity, depth, width, bed shear stress, friction factor, slope (energy gradient), and stream power. Slope is often constant, in which case only velocity, depth, width, shear and friction factor need be considered. The theory was tested against a wide range of field data from various geographic areas of the United States. The original theory was intended to produce only the average hydraulic exponents for a group of cross sections in a similar type of geologic or hydraulic environment. The theory does predict these average exponents with a reasonable degree of accuracy. An attempt to forecast the exponents at any selected cross section was moderately successful. Empirical equations are more accurate than the minimum variance, Gauckler-Manning, or Chezy methods. Predictions of the exponent of width are most reliable, the exponent of depth fair, and the exponent of mean velocity poor. (Woodard-USGS)
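Continuity (Q = w·d·v) forces the at-a-station exponents to satisfy b + f + m = 1 when w ∝ Q^b, d ∝ Q^f, v ∝ Q^m. A toy version of the minimum-variance selection, using the sum of squared exponents as a stand-in for the total variance Langbein minimizes over his fuller variable set (slope, shear, friction factor, stream power are omitted here), picks the most even split:

```python
# Width, depth and velocity follow power laws of discharge Q:
# w ~ Q^b, d ~ Q^f, v ~ Q^m, and continuity Q = w*d*v forces b + f + m = 1.
grid = [i / 100 for i in range(101)]

# Minimize a proxy "total variance" (sum of squared exponents) over the
# constraint surface; the optimum is the even split b = f = m = 1/3.
best = min(
    ((b, f, 1.0 - b - f) for b in grid for f in grid if b + f <= 1.0),
    key=lambda t: t[0] ** 2 + t[1] ** 2 + t[2] ** 2,
)
```

With the additional dependent variables of the actual theory the optimum shifts away from equal thirds, which is why the predicted average exponents differ for width, depth and velocity.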
Worldwide variance in the potential utilization of Gamma Knife radiosurgery.
Hamilton, Travis; Dade Lunsford, L
2016-12-01
OBJECTIVE The role of Gamma Knife radiosurgery (GKRS) has expanded worldwide during the past 3 decades. The authors sought to evaluate whether experienced users vary in their estimate of its potential use. METHODS Sixty-six current Gamma Knife users from 24 countries responded to an electronic survey. They estimated the potential role of GKRS for benign and malignant tumors, vascular malformations, and functional disorders. These estimates were compared with published disease epidemiological statistics and the 2014 use reports provided by the Leksell Gamma Knife Society (16,750 cases). RESULTS Respondents reported no significant variation in the estimated use in many conditions for which GKRS is performed: meningiomas, vestibular schwannomas, and arteriovenous malformations. Significant variance in the estimated use of GKRS was noted for pituitary tumors, craniopharyngiomas, and cavernous malformations. For many current indications, the authors found significant variance among GKRS users based in the Americas, Europe, and Asia. Experts estimated that GKRS was used in only 8.5% of the 196,000 eligible cases in 2014. CONCLUSIONS Although there was a general worldwide consensus regarding many major indications for GKRS, significant variability was noted for several more controversial roles. This expert opinion survey also suggested that GKRS is significantly underutilized for many current diagnoses, especially in the Americas. Future studies should be conducted to investigate health care barriers to GKRS for many patients.
Argentine Population Genetic Structure: Large Variance in Amerindian Contribution
Seldin, Michael F.; Tian, Chao; Shigeta, Russell; Scherbarth, Hugo R.; Silva, Gabriel; Belmont, John W.; Kittles, Rick; Gamron, Susana; Allevi, Alberto; Palatnik, Simon A.; Alvarellos, Alejandro; Paira, Sergio; Caprarulo, Cesar; Guillerón, Carolina; Catoggio, Luis J.; Prigione, Cristina; Berbotto, Guillermo A.; García, Mercedes A.; Perandones, Carlos E.; Pons-Estel, Bernardo A.; Alarcon-Riquelme, Marta E.
2011-01-01
Argentine population genetic structure was examined using a set of 78 ancestry informative markers (AIMs) to assess the contributions of European, Amerindian, and African ancestry in 94 individual members of this population. Using the Bayesian clustering algorithm STRUCTURE, the mean European contribution was 78%, the Amerindian contribution was 19.4%, and the African contribution was 2.5%. Similar results were found using a weighted least mean square method: European, 80.2%; Amerindian, 18.1%; and African, 1.7%. Consistent with previous studies the current results showed very few individuals (four of 94) with greater than 10% African admixture. Notably, when individual admixture was examined, the Amerindian and European admixture showed a very large variance and individual Amerindian contribution ranged from 1.5 to 84.5% in the 94 individual Argentine subjects. These results indicate that admixture must be considered when clinical epidemiology or case control genetic analyses are studied in this population. Moreover, the current study provides a set of informative SNPs that can be used to ascertain or control for this potentially hidden stratification. In addition, the large variance in admixture proportions in individual Argentine subjects shown by this study suggests that this population is appropriate for future admixture mapping studies. PMID:17177183
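The headline result, that a moderate population mean can conceal a very wide individual spread, is easy to illustrate. The proportions below are hypothetical, chosen only to echo the reported 1.5 to 84.5% individual range, not taken from the study:

```python
# Hypothetical individual Amerindian admixture proportions (fractions).
props = [0.02, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.45, 0.60, 0.83]

n = len(props)
mean = sum(props) / n
variance = sum((p - mean) ** 2 for p in props) / (n - 1)  # sample variance
spread = max(props) - min(props)  # range of individual contributions
```

A case-control analysis matched only on the population mean would treat all ten individuals as exchangeable; the large variance and range are what make hidden stratification a real confounder here.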
Concentration variance decay during magma mixing: a volcanic chronometer
NASA Astrophysics Data System (ADS)
Perugini, Diego; de Campos, Cristina P.; Petrelli, Maurizio; Dingwell, Donald B.
2015-09-01
The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing - a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical “mixing to eruption” time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest.
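The chronometer logic reduces to inverting an exponential decay of the concentration variance. The rate and variance values below are placeholders, not the paper's experimental calibration:

```python
import math

# CVD model: concentration variance decays as s2(t) = s2_0 * exp(-R * t).
# Given a calibrated decay rate R, the residual variance preserved in the
# erupted products dates the onset of mixing.
def mixing_time(var0, var_t, rate):
    """Time for the concentration variance to fall from var0 to var_t,
    for an exponential decay with rate constant `rate` (1/s)."""
    return math.log(var0 / var_t) / rate

# Hypothetical numbers: variance reduced to 5% of its initial value at a
# decay rate of 0.002 per second.
t = mixing_time(var0=1.0, var_t=0.05, rate=0.002)  # seconds
```

With these placeholder inputs t is roughly 25 minutes, i.e. the same "tens of minutes" order of magnitude the authors report for the Campi Flegrei eruptions; the actual rate constant must come from the time-series mixing experiments.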
VARIANCE ESTIMATION IN DOMAIN DECOMPOSED MONTE CARLO EIGENVALUE CALCULATIONS
Mervin, Brenden T; Maldonado, G. Ivan; Mosher, Scott W; Evans, Thomas M; Wagner, John C
2012-01-01
The number of tallies performed in a given Monte Carlo calculation is limited in most modern Monte Carlo codes by the amount of memory that can be allocated on a single processor. By using domain decomposition, the calculation is now limited by the total amount of memory available on all processors, allowing for significantly more tallies to be performed. However, decomposing the problem geometry introduces significant issues with the way tally statistics are conventionally calculated. In order to deal with the issue of calculating tally variances in domain decomposed environments for the Shift hybrid Monte Carlo code, this paper presents an alternative approach for reactor scenarios in which an assumption is made that once a particle leaves a domain, it does not reenter the domain. Particles that reenter the domain are instead treated as separate independent histories. This assumption introduces a bias that inevitably leads to under-prediction of the calculated variances for tallies within a few mean free paths of the domain boundaries. However, through the use of different decomposition strategies, primarily overlapping domains, the negative effects of such an assumption can be significantly reduced to within reasonable levels.
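The under-prediction described above can be reproduced in a toy model: if each history deposits two positively correlated partial scores (say, before and after re-entering a domain), treating the parts as independent histories discards the covariance and shrinks the estimated variance of the mean tally. This setup is an illustrative assumption, not the actual tally structure of the Shift code:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of true histories

# Toy model: two positively correlated partial scores per history
# (correlation value is assumed for illustration).
rho = 0.8
cov = [[1.0, rho], [rho, 1.0]]
a, b = rng.multivariate_normal([1.0, 1.0], cov, size=N).T
s = a + b  # true per-history score

# Correct variance of the mean tally: sample variance of whole-history scores.
var_true = s.var(ddof=1) / N

# Split estimator: pretend the 2N partial scores are independent histories.
# The mean tally per original history is twice the mean over the 2N scores,
# so its variance estimate carries a factor 4 / (2N).
allscores = np.concatenate([a, b])
var_split = 4.0 * allscores.var(ddof=1) / (2 * N)

underprediction = var_split / var_true  # ~ 1 / (1 + rho) in this toy model
```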
Concentration variance decay during magma mixing: a volcanic chronometer
Perugini, Diego; De Campos, Cristina P.; Petrelli, Maurizio; Dingwell, Donald B.
2015-01-01
The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing – a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical “mixing to eruption” time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest. PMID:26387555
Variance of the Quantum Dwell Time for a Nonrelativistic Particle
NASA Technical Reports Server (NTRS)
Hahne, Gerhard
2012-01-01
Munoz, Seidel, and Muga [Phys. Rev. A 79, 012108 (2009)], following an earlier proposal by Pollak and Miller [Phys. Rev. Lett. 53, 115 (1984)] in the context of a theory of a collinear chemical reaction, showed that suitable moments of a two-flux correlation function could be manipulated to yield expressions for the mean quantum dwell time and the mean square quantum dwell time for a structureless particle scattering from a time-independent potential energy field between two parallel lines in a two-dimensional spacetime. The present work proposes a generalization to a charged, nonrelativistic particle scattering from a transient, spatially confined electromagnetic vector potential in four-dimensional spacetime. The geometry of the spacetime domain is that of the slab between a pair of parallel planes, in particular those defined by constant values of the third (z) spatial coordinate. The mean Nth power, N = 1, 2, 3, ..., of the quantum dwell time in the slab is given by an expression involving an N-flux-correlation function. All these means are shown to be nonnegative. The N = 1 formula reduces to an S-matrix result published previously [G. E. Hahne, J. Phys. A 36, 7149 (2003)]; explicit formulas for N = 2 and for the variance of the dwell time in terms of the S-matrix are worked out. A formula representing an incommensurability principle between variances of the output-minus-input flux of a pair of dynamical variables (such as the particle's time flux and others) is derived.
Hidden temporal order unveiled in stock market volatility variance
NASA Astrophysics Data System (ADS)
Shapira, Y.; Kenett, D. Y.; Raviv, Ohad; Ben-Jacob, E.
2011-06-01
When analyzed by standard statistical methods, the time series of the daily returns of financial indices appear to behave as Markov random series with no apparent temporal order or memory. This empirical result seems counterintuitive, since investors are influenced by both short- and long-term past market behaviors. Consequently, much effort has been devoted to unveiling hidden temporal order in the market dynamics. Here we show that temporal order is hidden in the series of the variance of the stocks' volatility. First we show that the correlation between the variances of the daily returns and the means of segments of these time series is very large, and thus cannot be the output of a random series unless it has some temporal order in it. Next we show that the temporal order does not appear in the series of the daily returns themselves, but rather in the variation of the corresponding volatility series. More specifically, we found that the behavior of the shuffled time series is equivalent to that of a random time series, while the original time series shows large deviations from the expected random behavior, which is the result of temporal structure. We found the same generic behavior in 10 different stock markets from 7 different countries. We also present an analysis of specially constructed sequences in order to better understand the origin of the observed temporal order in the market sequences. Each sequence was constructed from segments with an equal number of elements taken from algebraic distributions of three different slopes.
Stochastic Mixing Model with Power Law Decay of Variance
NASA Technical Reports Server (NTRS)
Fedotov, S.; Ihme, M.; Pitsch, H.
2003-01-01
Here we present a simple stochastic mixing model based on the law of large numbers (LLN). The reason why the LLN is involved in our formulation of the mixing problem is that the random conserved scalar c = c(t, x(t)) appears to behave as a sample mean. It converges to the mean value μ, while the variance σ_c²(t) decays approximately as t^(-1). Since the variance of the scalar decays faster than that of a sample mean (the decay exponent is typically greater than unity), we introduce some non-linear modifications into the corresponding pdf equation. The main idea is to develop a robust model which is independent of restrictive assumptions about the shape of the pdf. The remainder of this paper is organized as follows. In Section 2 we derive the integral equation from a stochastic difference equation describing the evolution of the pdf of a passive scalar in time. The stochastic difference equation introduces an exchange rate γ_n, which we model in a first step as a deterministic function. In a second step, we generalize γ_n as a stochastic variable, taking fluctuations in the inhomogeneous environment into account. In Section 3 we solve the non-linear integral equation numerically and analyze the influence of the different parameters on the decay rate. The paper finishes with a conclusion.
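The sample-mean variance decay invoked above as the point of departure can be checked numerically: the empirical variance of the mean of n iid draws falls as 1/n. The distribution and realization counts below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_realizations = 1000

# Empirical variance of the sample mean of n iid uniform draws; by the LLN
# and CLT this decays as Var(X)/n, i.e. the t^-1 scaling of a sample mean.
def variance_of_mean(n):
    means = rng.uniform(0.0, 1.0, size=(n_realizations, n)).mean(axis=1)
    return means.var(ddof=1)

v_small = variance_of_mean(100)     # ~ (1/12) / 100
v_large = variance_of_mean(10_000)  # ~ (1/12) / 10000
ratio = v_small / v_large           # should be near 100
```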
Discordance of DNA Methylation Variance Between two Accessible Human Tissues
Jiang, Ruiwei; Jones, Meaghan J.; Chen, Edith; Neumann, Sarah M.; Fraser, Hunter B.; Miller, Gregory E.; Kobor, Michael S.
2015-01-01
Population epigenetic studies have been seeking to identify differences in DNA methylation between specific exposures, demographic factors, or diseases in accessible tissues, but relatively little is known about how inter-individual variability differs between these tissues. This study presents an analysis of DNA methylation differences between matched peripheral blood mononuclear cells (PBMCs) and buccal epithelial cells (BECs), the two most accessible tissues for population studies, in 998 promoter-located CpG sites. Specifically, we compared probe-wise DNA methylation variance and how this variance related to demographic factors across the two tissues. PBMCs had overall higher DNA methylation than BECs, and the two tissues tended to differ most at genomic regions of low CpG density. Furthermore, although both tissues showed appreciable probe-wise variability, the specific regions and magnitude of variability differed strongly between tissues. Lastly, through exploratory association analysis, we found indications of differential association of BECs and PBMCs with demographic variables. The work presented here offers insight into the variability of DNA methylation between individuals and across tissues, and helps guide decisions on the suitability of buccal epithelial or peripheral blood mononuclear cells for the biological questions explored by epigenetic studies in human populations. PMID:25660083
PET image reconstruction: mean, variance, and optimal minimax criterion
NASA Astrophysics Data System (ADS)
Liu, Huafeng; Gao, Fei; Guo, Min; Xue, Liying; Nie, Jing; Shi, Pengcheng
2015-04-01
Given the noisy nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal min-max criterion. The proposed framework formulates the PET image reconstruction problem as a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors under maximal system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties in a complete way. The framework is then optimized by ∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms that rely on statistical modeling of the measurement data or noise, the proposed joint estimation stands from the point of view of signal energies and can handle anything from imperfect statistical assumptions to no a priori statistical assumptions at all. The performance and accuracy of the reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small animal PET scanner and real patient scans are also conducted for assessment of clinical potential.
On information loss in AdS3/CFT2
Fitzpatrick, A. Liam; Kaplan, Jared; Li, Daliang; ...
2016-05-18
We discuss information loss from black hole physics in AdS3, focusing on two sharp signatures infecting CFT2 correlators at large central charge c: ‘forbidden singularities’ arising from Euclidean-time periodicity due to the effective Hawking temperature, and late-time exponential decay in the Lorentzian region. We study an infinite class of examples where forbidden singularities can be resolved by non-perturbative effects at finite c, and we show that the resolution has certain universal features that also apply in the general case. Analytically continuing to the Lorentzian regime, we find that the non-perturbative effects that resolve forbidden singularities qualitatively change the behavior of correlators at times t ~ S_BH, the black hole entropy. This may resolve the exponential decay of correlators at late times in black hole backgrounds. By Borel resumming the 1/c expansion of exact examples, we explicitly identify ‘information-restoring’ effects from heavy states that should correspond to classical solutions in AdS3. Lastly, our results suggest a line of inquiry towards a more precise formulation of the gravitational path integral in AdS3.
Shock Wave Collisions and Thermalization in AdS_5
NASA Astrophysics Data System (ADS)
Kovchegov, Y. V.
We study heavy ion collisions at strong 't Hooft coupling using the AdS/CFT correspondence. According to the AdS/CFT dictionary, heavy ion collisions correspond to gravitational shock wave collisions in AdS_5. We construct the metric in the forward light cone after the collision perturbatively through an expansion of the Einstein equations in graviton exchanges. We obtain an analytic expression for the metric including all-order graviton exchanges with one shock wave, while keeping the exchanges with the other shock wave at the lowest order. We read off the corresponding energy-momentum tensor of the produced medium. Unfortunately this energy-momentum tensor does not correspond to ideal hydrodynamics, indicating that higher order graviton exchanges are needed to construct the full solution of the problem. We also show that shock waves must completely stop almost immediately after the collision in AdS_5, which, on the field theory side, corresponds to complete nuclear stopping due to strong coupling effects, likely leading to Landau hydrodynamics. Finally, we perform a trapped surface analysis of the shock wave collisions, demonstrating that a bulk black hole, corresponding to ideal hydrodynamics on the boundary, has to be created in such collisions, thus constructing a proof of thermalization in heavy ion collisions at strong coupling.
The generalized added mass revised
NASA Astrophysics Data System (ADS)
De Wilde, Juray
2007-05-01
The reformulation of the generalized or apparent added mass presented by De Wilde [Phys. Fluids 17, 113304 (2005)] neglects the presence of a drag-type force in the gas and solid phase momentum equations. Reformulating the generalized added mass accounting for the presence of a drag-type force, an apparent drag force appears next to the apparent distribution of the filtered gas phase pressure gradient over the phases already found by De Wilde in the above-cited reference. The reformulation of the generalized added mass and the evaluation of a linear wave propagation speed test then suggest a generalized added mass type closure approach to completely describe filtered gas-solid momentum transfer, that is, including both the filtered drag force and the correlation between the solid volume fraction and the gas phase pressure gradient.
ERIC Educational Resources Information Center
Liu, Duo; Chen, Xi; Chung, Kevin K. H.
2015-01-01
This study examined the relation between the performance in a visual search task and reading ability in 92 third-grade Hong Kong Chinese children. The visual search task, which is considered a measure of visual-spatial attention, accounted for unique variance in Chinese character reading after controlling for age, nonverbal intelligence,…
NASA Technical Reports Server (NTRS)
1980-01-01
The Ames-Dryden (AD)-1 was a research aircraft designed to investigate the concept of an oblique (or pivoting) wing. The movie clip runs about 17 seconds and has two air-to-air views of the AD-1. The first shot is from slightly above as the wing pivots to 60 degrees. The other angle is almost directly below the aircraft when the wing is fully pivoted.
Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction.
Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan
2017-02-27
Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of ionosphere, which intrinsically assume that the ionosphere field is stochastically stationary but does not take the random observational errors into account. In this paper, by treating the spatial statistical information on ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of ionosphere and TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Unit (TECU, 1 TECU = 1 × 10^16 electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation around 3 TECU than others. The residual results show that the interpolation precision of the new proposed
Total synthesis of zyzzyanones A-D
Nadkarni, Dwayaja H.; Murugesan, Srinivasan
2013-01-01
Zyzzyanones A-D are a group of biologically active marine alkaloids isolated from the Australian marine sponge Zyzzya fuliginosa. They contain a unique bispyrroloquinone ring system as their core structure. The first total synthesis of all four zyzzyanones is described here. The synthesis of these alkaloids starts from a previously known 6-benzylamino indole-4,7-quinone derivative and involves 6-7 steps. The key step in the synthesis is the construction of a pyrrole ring in one step using a Mn(OAc)3-mediated oxidative free radical cyclization reaction of a 6-benzylamino indole-4,7-quinone derivative with 4-benzyloxyphenyl acetaldehyde diethyl acetal in CH3CN. PMID:23956468
29 CFR 4204.11 - Variance of the bond/escrow and sale-contract requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... CORPORATION WITHDRAWAL LIABILITY FOR MULTIEMPLOYER PLANS VARIANCES FOR SALE OF ASSETS Variance of the... chapter to determine the date that an issuance under this subpart was provided. (Approved by the Office...
Profile Uniqueness in Student Ratings of Instruction.
ERIC Educational Resources Information Center
Weber, Larry J.; Frary, Robert B.
An approach to partitioning the variance in student ratings not previously reported in the literature is described. The new approach provides an alternative basis for interpreting faculty evaluations that overcome objections to current practices. Student responses on evaluation forms were cluster-analyzed to establish homogeneous subgroups of…
Estimating discharge measurement uncertainty using the interpolated variance estimator
Cohn, T.; Kiang, J.; Mason, R.
2012-01-01
Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
Variance of indoor radon concentration: Major influencing factors.
Yarmoshenko, I; Vasilyev, A; Malinovsky, G; Bossew, P; Žunić, Z S; Onischenko, A; Zhukovsky, M
2016-01-15
Variance of radon concentration in the dwelling atmosphere is analysed with regard to geogenic and anthropogenic influencing factors. The analysis includes a review of 81 national and regional indoor radon surveys with varying sampling pattern, sample size and duration of measurements, and a detailed consideration of two regional surveys (Sverdlovsk oblast, Russia, and Niška Banja, Serbia). The analysis of the geometric standard deviation revealed that the main factors influencing the dispersion of indoor radon concentration over a territory are as follows: area of the territory, sample size, characteristics of the measurement technique, the geogenic radon potential, building construction characteristics and living habits. As shown for Sverdlovsk oblast and the town of Niška Banja, the dispersion as quantified by the GSD is reduced by restricting to certain levels of the controlling factors. Application of the developed approach to the characterization of the world population's radon exposure is discussed.
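The geometric standard deviation used above as the dispersion measure can be computed in a few lines, since indoor radon concentrations are commonly near-lognormal. The concentration values below are made up for illustration:

```python
import numpy as np

# Hypothetical indoor radon concentrations (Bq/m^3), invented for illustration.
concentrations = np.array([25.0, 40.0, 60.0, 90.0, 150.0, 310.0])

# For near-lognormal data, dispersion is summarized by the geometric standard
# deviation GSD = exp(std(log C)); it is dimensionless and >= 1.
log_c = np.log(concentrations)
gm = np.exp(log_c.mean())         # geometric mean
gsd = np.exp(log_c.std(ddof=0))   # geometric standard deviation

# Sanity check with a hand-computable case: for {10, 100} the GSD is sqrt(10).
two_point_gsd = np.exp(np.log([10.0, 100.0]).std(ddof=0))
```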
Sources of variance of downwelling irradiance in water.
Gege, Peter; Pinnel, Nicole
2011-05-20
The downwelling irradiance in water is highly variable due to the focusing and defocusing of sunlight and skylight by the wave-modulated water surface. While the time scales and intensity variations caused by wave focusing are well studied, little is known about the induced spectral variability. Also, the impact of variations of sensor depth and inclination during the measurement on spectral irradiance has not been studied much. We have developed a model that relates the variance of spectral irradiance to the relevant parameters of the environmental and experimental conditions. A dataset from three German lakes was used to validate the model and to study the importance of each effect as a function of depth for the range of 0 to 5 m.
Analysis of variance of an underdetermined geodetic displacement problem
Darby, D.
1982-06-01
It has been suggested recently that point displacements in a free geodetic network traversing a strike-slip fault may be estimated from repeated surveys by minimizing only those displacement components normal to the strike. It is desirable to justify this procedure. We construct, from estimable quantities, a deformation parameter which is an F-statistic of the type occurring in the analysis of variance of linear models not of full rank. A test of its significance provides the criterion to justify the displacement solution. It is also interesting to study its behaviour as one varies the supposed strike of the fault. Justification of a displacement solution using data from a strike-slip fault is found, but not for data from a rift valley. The technique can be generalized to more complex patterns of deformation such as those expected near the end-zone of a fault in a dislocation model.
On computations of variance, covariance and correlation for interval data
NASA Astrophysics Data System (ADS)
Kishida, Masako
2017-02-01
In many practical situations, the data on which statistical analysis is to be performed is only known with interval uncertainty. Different combinations of values from the interval data usually lead to different values of variance, covariance, and correlation. Hence, it is desirable to compute the endpoints of possible values of these statistics. This problem is, however, NP-hard in general. This paper shows that the problem of computing the endpoints of possible values of these statistics can be rewritten as the problem of computing skewed structured singular values ν, for which there exist feasible (polynomial-time) algorithms that compute reasonably tight bounds in most practical cases. This allows one to find tight intervals of the aforementioned statistics for interval data.
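For small samples, the upper endpoint of the variance over interval data can be found by brute force: the population variance is a convex quadratic in each coordinate, so its maximum over the box is attained at a vertex. This exponential-time enumeration is only a baseline consistent with the NP-hardness noted above, not the polynomial-time ν-based bounds the paper proposes; the intervals below are invented:

```python
import itertools
import numpy as np

# Interval data: each measurement is known only to lie in [lo, hi].
intervals = [(0.0, 1.0), (0.0, 1.0), (2.0, 3.0)]

def max_variance_bruteforce(intervals):
    """Exact upper endpoint of the population variance over the box.

    Variance is convex in each coordinate, so the maximum is attained at one
    of the 2^n endpoint combinations (exponential cost; small n only).
    """
    best = -np.inf
    for corner in itertools.product(*intervals):
        best = max(best, np.var(corner))
    return best

v_max = max_variance_bruteforce(intervals)  # attained at (0, 0, 3) here
```

The minimum needs more care, since it can be attained in the interior of the box rather than at a vertex.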
Variance estimation for the Federal Waterfowl Harvest Surveys
Geissler, P.H.
1988-01-01
The Federal Waterfowl Harvest Surveys provide estimates of waterfowl harvest by species for flyways and states, harvests of most other migratory game bird species (by waterfowl hunters), crippling losses for ducks, geese, and coots, days hunted, and bag per hunter. The Waterfowl Hunter Questionnaire Survey separately estimates the harvest of ducks and geese using cluster samples of hunters who buy duck stamps at sample post offices. The Waterfowl Parts Collection estimates species, age, and sex ratios from parts solicited from successful hunters who responded to the Waterfowl Hunter Questionnaire Survey in previous years. These ratios are used to partition the duck and goose harvest into species, age, and sex specific harvest estimates. Annual estimates are correlated because successful hunters who respond to the Questionnaire Survey in one year may be asked to contribute to the Parts Collection for the next three years. Bootstrap variance estimates are used because covariances among years are difficult to estimate.
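A minimal sketch of the bootstrap variance estimation mentioned above, resampling hypothetical hunter-level counts with replacement; the survey's actual cluster structure and multi-year correlations are not modeled here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Schematic stand-in for per-hunter harvest counts (invented, Poisson-like);
# the real estimator involves cluster samples and the parts collection.
harvest = rng.poisson(lam=4.0, size=200)

def bootstrap_se(data, stat=np.mean, n_boot=2000, rng=rng):
    """Standard error of a statistic by resampling units with replacement."""
    n = len(data)
    reps = np.array([stat(rng.choice(data, size=n, replace=True))
                     for _ in range(n_boot)])
    return reps.std(ddof=1)

se_boot = bootstrap_se(harvest)
se_analytic = harvest.std(ddof=1) / np.sqrt(len(harvest))  # iid benchmark
```

For the mean of iid data the bootstrap and analytic standard errors agree closely; the bootstrap's advantage is that it also applies to ratio-type estimators with no simple analytic variance.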
Correct use of repeated measures analysis of variance.
Park, Eunsik; Cho, Meehye; Ki, Chang-Seok
2009-02-01
In biomedical research, researchers frequently use statistical procedures such as the t-test, standard analysis of variance (ANOVA), or the repeated measures ANOVA to compare means between the groups of interest. There are frequently some misuses in applying these procedures since the conditions of the experiments or statistical assumptions necessary to apply these procedures are not fully taken into consideration. In this paper, we demonstrate the correct use of repeated measures ANOVA to prevent or minimize ethical or scientific problems due to its misuse. We also describe the appropriate use of multiple comparison tests for follow-up analysis in repeated measures ANOVA. Finally, we demonstrate the use of repeated measures ANOVA by using real data and the statistical software package SPSS (SPSS Inc., USA).
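A hand-rolled one-way repeated measures ANOVA makes the variance partition explicit: between-subject variability is removed from the error term before the condition effect is tested. The data matrix below is invented; with two conditions the resulting F equals the squared paired t statistic, a standard consistency check:

```python
import numpy as np

# Rows: subjects; columns: repeated conditions. Values are illustrative.
X = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 3.0]])
n_subj, n_cond = X.shape
grand = X.mean()

# Partition the total sum of squares into condition, subject, and error parts.
ss_cond = n_subj * ((X.mean(axis=0) - grand) ** 2).sum()
ss_subj = n_cond * ((X.mean(axis=1) - grand) ** 2).sum()
ss_total = ((X - grand) ** 2).sum()
ss_error = ss_total - ss_cond - ss_subj

df_cond = n_cond - 1
df_error = (n_subj - 1) * (n_cond - 1)
F = (ss_cond / df_cond) / (ss_error / df_error)
# For this 3x2 example, F = 3.0, matching the squared paired t statistic.
```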
Objective Bayesian Comparison of Constrained Analysis of Variance Models.
Consonni, Guido; Paroli, Roberta
2016-10-04
In the social sciences we are often interested in comparing models specified by parametric equality or inequality constraints. For instance, when examining three group means [Formula: see text] through an analysis of variance (ANOVA), a model may specify that [Formula: see text], while another one may state that [Formula: see text], and finally a third model may instead suggest that all means are unrestricted. This is a challenging problem, because it involves a combination of nonnested models, as well as nested models having the same dimension. We adopt an objective Bayesian approach, requiring no prior specification from the user, and derive the posterior probability of each model under consideration. Our method is based on the intrinsic prior methodology, suitably modified to accommodate equality and inequality constraints. Focussing on normal ANOVA models, a comparative assessment is carried out through simulation studies. We also present an application to real data collected in a psychological experiment.
INTERPRETING MAGNETIC VARIANCE ANISOTROPY MEASUREMENTS IN THE SOLAR WIND
TenBarge, J. M.; Klein, K. G.; Howes, G. G.; Podesta, J. J.
2012-07-10
The magnetic variance anisotropy (A_m) of the solar wind has been used widely as a method to identify the nature of solar wind turbulent fluctuations; however, a thorough discussion of the meaning and interpretation of the A_m has not appeared in the literature. This paper explores the implications and limitations of using the A_m as a method for constraining the solar wind fluctuation mode composition and presents a more informative method for interpreting spacecraft data. The paper also compares predictions of the A_m from linear theory to nonlinear turbulence simulations and solar wind measurements. In both cases, linear theory compares well and suggests that the solar wind for the interval studied is dominantly Alfvénic in the inertial and dissipation ranges to scales of kρ_i ≈ 5.
Estimation of measurement variance in the context of environment statistics
NASA Astrophysics Data System (ADS)
Maiti, Pulakesh
2015-02-01
The object of environment statistics is to provide information on the environment, on its most important changes over time and across locations, and to identify the main factors that influence them. Ultimately, environment statistics would be required to produce higher-quality statistical information. For this, timely, reliable and comparable data are needed. Lack of proper and uniform definitions and of unambiguous classifications poses serious problems in procuring qualitative data. These cause measurement errors. We consider the problem of estimating measurement variance so that some measures may be adopted to improve upon the quality of data on environmental goods and services and on value statements in economic terms. The measurement technique considered here is that of employing personal interviewers, and the sampling considered here is two-stage sampling.
A Posteriori Correction of Forecast and Observation Error Variances
NASA Technical Reports Server (NTRS)
Rukhovets, Leonid
2005-01-01
The proposed method of total observation and forecast error variance correction is based on the assumption of a normal distribution of "observed-minus-forecast" residuals (O-F), where O is an observed value and F is usually a short-term model forecast. This assumption can be accepted for several types of observations (except humidity) which are not grossly in error. The degree of nearness to a normal distribution can be estimated by the skewness (lack of symmetry) a_3 = μ_3/σ^3 and the kurtosis a_4 = μ_4/σ^4 - 3, where μ_i is the moment of order i and σ is the standard deviation. It is well known that for a normal distribution a_3 = a_4 = 0.
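The moment-based skewness a3 = μ3/σ³ and excess kurtosis a4 = μ4/σ⁴ − 3 used above to gauge the normality of O-F residuals can be computed directly; the three-point sample below is an illustrative case with exactly known values, not actual residual data:

```python
import numpy as np

def skew_kurt(x):
    """Moment-based skewness a3 = mu3/sigma^3 and excess kurtosis
    a4 = mu4/sigma^4 - 3 (both zero for a normal distribution)."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    sigma = x.std(ddof=0)
    m3 = ((x - mu) ** 3).mean()
    m4 = ((x - mu) ** 4).mean()
    return m3 / sigma**3, m4 / sigma**4 - 3.0

# A perfectly symmetric three-point sample: a3 = 0 and a4 = -1.5 exactly.
a3, a4 = skew_kurt([-1.0, 0.0, 1.0])
```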
A method for the microlensed flux variance of QSOs
NASA Astrophysics Data System (ADS)
Goodman, Jeremy; Sun, Ai-Lei
2014-06-01
A fast and practical method is described for calculating the microlensed flux variance of an arbitrary source by uncorrelated stars. The required inputs are the mean convergence and shear due to the smoothed potential of the lensing galaxy, the stellar mass function, and the absolute square of the Fourier transform of the surface brightness in the source plane. The mathematical approach follows previous authors but has been generalized, streamlined, and implemented in publicly available code. Examples of its application are given for Dexter and Agol's inhomogeneous-disc models as well as the usual Gaussian sources. Since the quantity calculated is a second moment of the magnification, it is only logarithmically sensitive to the sizes of very compact sources. However, for the inferred sizes of actual quasi-stellar objects (QSOs), it has some discriminatory power and may lend itself to simple statistical tests. At the very least, it should be useful for testing the convergence of microlensing simulations.
The use of analysis of variance procedures in biological studies
Williams, B.K.
1987-01-01
The analysis of variance (ANOVA) is widely used in biological studies, yet there remains considerable confusion among researchers about the interpretation of hypotheses being tested. Ambiguities arise when statistical designs are unbalanced, and in particular when not all combinations of design factors are represented in the data. This paper clarifies the relationship among hypothesis testing, statistical modelling and computing procedures in ANOVA for unbalanced data. A simple two-factor fixed effects design is used to illustrate three common parametrizations for ANOVA models, and some associations among these parametrizations are developed. Biologically meaningful hypotheses for main effects and interactions are given in terms of each parametrization, and procedures for testing the hypotheses are described. The standard statistical computing procedures in ANOVA are given along with their corresponding hypotheses. Throughout the development unbalanced designs are assumed and attention is given to problems that arise with missing cells.
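The ambiguity that unbalanced designs introduce can be seen already in how a marginal mean is defined. The sketch below (illustrative data only, not from the paper) shows that the weighted and unweighted marginal means of the same factor level disagree when cell sizes are unequal, which is why different ANOVA computing procedures can test different hypotheses:

```python
# Sketch: in an unbalanced two-factor design, the "weighted" marginal
# mean (averaging raw observations) and the "unweighted" marginal mean
# (averaging cell means) of the same factor level can disagree.
# The data are made up for the example.

def cell_mean(values):
    return sum(values) / len(values)

# Factor A level a1 crossed with factor B levels b1 and b2;
# cell (a1, b1) has many more observations than cell (a1, b2).
a1_b1 = [10.0, 10.0, 10.0, 10.0]  # cell mean 10, n = 4
a1_b2 = [20.0]                    # cell mean 20, n = 1

# Unweighted: each cell contributes equally to the marginal mean.
unweighted = (cell_mean(a1_b1) + cell_mean(a1_b2)) / 2  # 15.0
# Weighted: each observation contributes equally.
weighted = cell_mean(a1_b1 + a1_b2)                     # 12.0

print(unweighted, weighted)  # 15.0 12.0 -- they disagree
```

A balanced design makes the two definitions coincide; with unequal cell counts, a researcher must decide which marginal hypothesis is biologically meaningful before choosing a computing procedure.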
From means and variances to persons and patterns
Grice, James W.
2015-01-01
A novel approach for conceptualizing and analyzing data from psychological studies is presented and discussed. This approach is centered on model building in an effort to explicate the structures and processes believed to generate a set of observations. These models therefore go beyond the variable-based path models in use today, which limit the types of inferences psychologists can draw from their research. In terms of analysis, the newer approach replaces traditional aggregate statistics such as means, variances, and covariances with methods of pattern detection and analysis. While these methods are person-centered and do not require parametric assumptions, they are both demanding and rigorous. They also provide psychologists with the information needed to draw the primary inference they often wish to make from their research; namely, the inference to best explanation. PMID:26257672
Hodological Resonance, Hodological Variance, Psychosis, and Schizophrenia: A Hypothetical Model
Birkett, Paul Brian Lawrie
2011-01-01
Schizophrenia is a disorder with a large number of clinical, neurobiological, and cognitive manifestations, none of which is invariably present. However it appears to be a single nosological entity. This article considers the likely characteristics of a pathology capable of such diverse consequences. It is argued that both deficit and psychotic symptoms can be manifestations of a single pathology. A general model of psychosis is proposed in which the informational sensitivity or responsivity of a network (“hodological resonance”) becomes so high that it activates spontaneously, to produce a hallucination if it is in sensory cortex, or another psychotic symptom if it is elsewhere. It is argued that this can come about because of high levels of modulation, such as those assumed present in affective psychosis, or because of high levels of baseline resonance, such as those expected in deafferentation syndromes associated with hallucinations, for example, Charles Bonnet syndrome. It is further proposed that schizophrenia results from a process (probably neurodevelopmental) causing widespread increases of variance in baseline resonance; consequently some networks possess high baseline resonance and become susceptible to spontaneous activation. Deficit symptoms might result from the presence of networks with increased activation thresholds. This hodological variance model is explored in terms of schizo-affective disorder, transient psychotic symptoms, diathesis-stress models, mechanisms of antipsychotic pharmacotherapy, and the persistence of genes predisposing to schizophrenia. Predictions and implications of the model are discussed. In particular it suggests a need for more research into psychotic states and for more single case-based studies in schizophrenia. PMID:21811475
Variance of the quantum dwell time for a nonrelativistic particle
Hahne, G. E.
2013-01-15
Munoz, Seidel, and Muga [Phys. Rev. A 79, 012108 (2009)], following an earlier proposal by Pollak and Miller [Phys. Rev. Lett. 53, 115 (1984)] in the context of a theory of a collinear chemical reaction, showed that suitable moments of a two-flux correlation function could be manipulated to yield expressions for the mean quantum dwell time and mean square quantum dwell time for a structureless particle scattering from a time-independent potential energy field between two parallel lines in a two-dimensional spacetime. The present work proposes a generalization to a charged, nonrelativistic particle scattering from a transient, spatially confined electromagnetic vector potential in four-dimensional spacetime. The geometry of the spacetime domain is that of the slab between a pair of parallel planes, in particular, those defined by constant values of the third (z) spatial coordinate. The mean Nth power, N = 1, 2, 3, …, of the quantum dwell time in the slab is given by an expression involving an N-flux-correlation function. All these means are shown to be nonnegative. The N = 1 formula reduces to an S-matrix result published previously [G. E. Hahne, J. Phys. A 36, 7149 (2003)]; an explicit formula for N = 2, and hence for the variance of the dwell time in terms of the S-matrix, is worked out. A formula representing an incommensurability principle between variances of the output-minus-input flux of a pair of dynamical variables (such as the particle's time flux and others) is derived.
A variance-decomposition approach to investigating multiscale habitat associations
Lawler, J.J.; Edwards, T.C.
2006-01-01
The recognition of the importance of spatial scale in ecology has led many researchers to take multiscale approaches to studying habitat associations. However, few of the studies that investigate habitat associations at multiple spatial scales have considered the potential effects of cross-scale correlations in measured habitat variables. When cross-scale correlations in such studies are strong, conclusions drawn about the relative strength of habitat associations at different spatial scales may be inaccurate. Here we adapt and demonstrate an analytical technique based on variance decomposition for quantifying the influence of cross-scale correlations on multiscale habitat associations. We used the technique to quantify the variation in nest-site locations of Red-naped Sapsuckers (Sphyrapicus nuchalis) and Northern Flickers (Colaptes auratus) associated with habitat descriptors at three spatial scales. We demonstrate how the method can be used to identify components of variation that are associated only with factors at a single spatial scale as well as shared components of variation that represent cross-scale correlations. Despite the fact that no explanatory variables in our models were highly correlated (r < 0.60), we found that shared components of variation reflecting cross-scale correlations accounted for roughly half of the deviance explained by the models. These results highlight the importance both of conducting habitat analyses at multiple spatial scales and of quantifying the effects of cross-scale correlations in such analyses. Given the limits of conventional analytical techniques, we recommend alternative methods, such as the variance-decomposition technique demonstrated here, for analyzing habitat associations at multiple spatial scales. © The Cooper Ornithological Society 2006.
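A minimal sketch of the variance-decomposition idea, reduced to two predictors standing in for habitat descriptors at two spatial scales (the paper uses three scales and a deviance-based decomposition; the data and the closed-form two-predictor R² below are illustrative assumptions, not the authors' implementation):

```python
# Sketch: partition the R^2 of y ~ A + B into a component unique to
# predictor A, a component unique to B, and a shared component that
# reflects the correlation between A and B (the cross-scale overlap).
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def decompose(y, a, b):
    """Return (unique_a, unique_b, shared) components of R^2."""
    rya, ryb, rab = pearson(y, a), pearson(y, b), pearson(a, b)
    r2_a, r2_b = rya ** 2, ryb ** 2
    # R^2 of the two-predictor model from the correlation matrix
    r2_ab = (rya ** 2 + ryb ** 2 - 2 * rya * ryb * rab) / (1 - rab ** 2)
    unique_a = r2_ab - r2_b              # gain from adding A after B
    unique_b = r2_ab - r2_a              # gain from adding B after A
    shared = r2_ab - unique_a - unique_b # overlap from the A-B correlation
    return unique_a, unique_b, shared

if __name__ == "__main__":
    y = [1.0, 2.0, 3.0, 4.0, 6.0]        # response (e.g. nest-site use)
    a = [1.0, 2.0, 3.0, 4.0, 5.0]        # fine-scale descriptor
    b = [2.0, 1.0, 4.0, 3.0, 5.0]        # coarse-scale descriptor
    print(decompose(y, a, b))
```

The three components sum to the full-model R²; a large shared component, even when no pairwise r exceeds 0.60, is exactly the situation the abstract warns about.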
Water vapor variance measurements using a Raman lidar
NASA Technical Reports Server (NTRS)
Evans, K.; Melfi, S. H.; Ferrare, R.; Whiteman, D.
1992-01-01
Because of the importance of atmospheric water vapor variance, we have analyzed data from the NASA/Goddard Raman lidar to obtain temporal scales of water vapor mixing ratio as a function of altitude over observation periods extending to 12 hours. The ground-based lidar measures water vapor mixing ratio from near the earth's surface to an altitude of 9-10 km. Moisture profiles are acquired once every minute with 75 m vertical resolution. Data at each 75 m altitude level can be displayed as a function of time from the beginning to the end of an observation period. These time sequences have been spectrally analyzed using a fast Fourier transform technique. An example of such a temporal spectrum, obtained between 00:22 and 10:29 UT on December 6, 1991, is shown in the figure. The curve shown in the figure represents the spectral average of data from 11 height levels centered on an altitude of 1 km (1 ± 0.375 km). The spectrum shows a decrease in energy density with frequency which generally follows a -5/3 power law over the spectral interval 3×10^-5 to 4×10^-3 Hz. The flattening of the spectrum for frequencies greater than 6×10^-3 Hz is most likely a measure of instrumental noise. Spectra like that shown in the figure have been calculated for other altitudes and show changes in spectral features with height. Spectral analyses versus height have been performed for several observation periods and demonstrate changes in water vapor mixing ratio spectral character from one observation period to the next. The combination of these temporal spectra with independent measurements of winds aloft provides an opportunity to infer spatial scales of moisture variance.
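The spectral-analysis step can be sketched in a few lines. The example below uses a plain discrete Fourier transform on a synthetic sine wave rather than the authors' FFT on lidar data; the one-minute sampling interval matches the abstract, everything else is an assumption for illustration:

```python
# Sketch: one-sided power spectrum of an evenly sampled time series,
# as computed before examining its slope (e.g. a -5/3 power law).
# The test signal is synthetic, not a lidar mixing-ratio series.
import cmath
import math

def power_spectrum(x, dt):
    """Return (frequencies in Hz, power) for an evenly sampled series x."""
    n = len(x)
    freqs, power = [], []
    for k in range(1, n // 2):  # skip the zero-frequency (mean) term
        coeff = sum(x[j] * cmath.exp(-2j * math.pi * k * j / n)
                    for j in range(n))
        freqs.append(k / (n * dt))       # cycles per second
        power.append(abs(coeff) ** 2 / n)
    return freqs, power

if __name__ == "__main__":
    dt = 60.0   # one profile per minute, as in the abstract
    n = 128
    # synthetic signal oscillating at harmonic k = 8
    x = [math.sin(2 * math.pi * 8 * j / n) for j in range(n)]
    freqs, power = power_spectrum(x, dt)
    peak = freqs[power.index(max(power))]
    print(peak)  # peak at 8 / (128 * 60) Hz
```

A real analysis would use an FFT (this direct transform is O(n²)) and average spectra over adjacent height levels, as described for the 1 ± 0.375 km band.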
Euclidean and Noetherian entropies in AdS space
Dutta, Suvankar; Gopakumar, Rajesh
2006-08-15
We examine the Euclidean action approach, as well as that of Wald, to the entropy of black holes in asymptotically AdS spaces. From the point of view of holography these two approaches are somewhat complementary in spirit and it is not obvious why they should give the same answer in the presence of arbitrary higher derivative gravity corrections. For the case of the AdS{sub 5} Schwarzschild black hole, we explicitly study the leading correction to the Bekenstein-Hawking entropy in the presence of a variety of higher derivative corrections studied in the literature, including the Type IIB R{sup 4} term. We find a nontrivial agreement between the two approaches in every case. Finally, we give a general way of understanding the equivalence of these two approaches.
NASA Astrophysics Data System (ADS)
Accomazzi, Alberto; Kurtz, M. J.; Henneken, E. A.; Grant, C. S.; Thompson, D.; Di Milia, G.; Luker, J.; Murray, S. S.
2013-01-01
The NASA Astrophysics Data System (ADS) has been working hard on updating its services and interfaces to better support our community's research needs. ADS Labs is a new interface built on the old tried-and-true ADS Abstract Databases, so all of ADS's content is available through it. In this presentation we highlight the new features that have been developed in ADS Labs over the last year: new recommendations, metrics, a citation tool and enhanced fulltext search. ADS Labs has long been providing article-level recommendations based on keyword similarity, co-readership and co-citation analysis of its corpus. We have now introduced personal recommendations, which provide a list of articles to be considered based on an individual user's readership history. A new metrics interface provides a summary of the basic impact indicators for a list of records. These include the total and normalized number of papers, citations, reads, and downloads. Also included are some of the popular indices such as the h, g and i10 indices. The citation helper tool allows one to submit a set of records and obtain a list of the top 10 papers which cite and/or are cited by papers in the original list (but which are not in it). The process closely resembles the network approach of establishing "friends of friends" via an analysis of the citation network. The full-text search service now covers more than 2.5 million documents, including all the major astronomy journals, as well as physics journals published by Springer, Elsevier, the American Physical Society, the American Geophysical Union, and all of the arXiv eprints. The full-text search interface allows users and librarians to dig deep and find words or phrases in the body of the indexed articles. ADS Labs is available at http://adslabs.org
Heavy quark potential from deformed AdS5 models
NASA Astrophysics Data System (ADS)
Zhang, Zi-qiang; Hou, De-fu; Chen, Gang
2017-04-01
In this paper, we investigate the heavy quark potential in some holographic QCD models. The calculation relies on a modified renormalization scheme mentioned in a previous work of Albacete et al. After studying the heavy quark potential in the Pirner-Galow model and the Andreev-Zakharov model, we extend the discussion to a general deformed AdS5 case. It is shown that the obtained potential is negative definite for all quark-antiquark separations, unlike the result obtained with the usual renormalization scheme.
The AdS central charge in string theory
NASA Astrophysics Data System (ADS)
Troost, Jan
2011-11-01
We evaluate the vacuum expectation value of the central charge operator in string theory in an AdS3 vacuum. Our calculation provides a rare non-zero one-point function on a spherical worldsheet. The evaluation involves the regularization both of a worldsheet ultraviolet divergence (associated to the infinite volume of the conformal Killing group), and a space-time infrared divergence (corresponding to the infinite volume of space-time). The two divergences conspire to give a finite result, which is the classical general relativity value for the central charge, corrected in bosonic string theory by an infinite series of tree level higher derivative terms.
Internal structure of charged AdS black holes
NASA Astrophysics Data System (ADS)
Bhattacharjee, Srijit; Sarkar, Sudipta; Virmani, Amitabh
2016-06-01
When an electrically charged black hole is perturbed, its inner horizon becomes a singularity, often referred to as the Poisson-Israel mass inflation singularity. Ori constructed a model of this phenomenon for asymptotically flat black holes, in which the metric can be determined explicitly in the mass inflation region. In this paper we implement the Ori model for charged AdS black holes. We find that the mass function inflates faster than the flat space case as the inner horizon is approached. Nevertheless, the mass inflation singularity is still a weak singularity: Although spacetime curvature becomes infinite, tidal distortions remain finite on physical objects attempting to cross it.
AD performance and its extension towards ELENA
NASA Astrophysics Data System (ADS)
Oelert, Walter; Eriksson, Tommy; Belochitskii, Pavel; Tranquille, Gerard
2012-12-01
CERN's Antiproton Decelerator (AD) is devoted to special experiments with low energy antiprotons. A main topic is antihydrogen production, with the present aim of producing these antimatter atoms with such low energy that they can be trapped in a magnetic gradient field. Very convincing first results have been published recently by ALPHA. Still, it appears cumbersome, time consuming and ineffective to collect the needed large numbers and high densities of antiproton clouds with the present AD. Both the effectiveness of this unique facility and its availability for additional experiments would increase drastically if the antiproton beam, presently at 5 MeV kinetic energy, were reduced by an additional decelerator to something like 100 keV. Such a facility, "ELENA" (Extra Low ENergy Antiproton Ring), first discussed in 1982 for LEAR, has been freshly proposed with a substantially new design and revised layout and is presently under consideration. ELENA will increase the number of useful antiprotons by up to two orders of magnitude and will allow up to four experiments to be served in parallel.
40 CFR 142.61 - Variances from the maximum contaminant level for fluoride.
Code of Federal Regulations, 2010 CFR
2010-07-01
... responsibility (primacy state) that issues variances shall require a community water system to install and/or use... (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION... application by a system for a variance, the Administrator or primacy state that issues variances...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-06
... Variances for Hazardous Selenium Bearing Waste AGENCY: Environmental Protection Agency (EPA). ACTION: Direct...-Bearing Waste II. Basis for This Determination III. Development of This Variance A. U.S. Ecology Nevada... from 0.16 mg/L to 5.7 mg/L TCLP. C. Site-Specific Treatment Variance for Selenium-Bearing Waste On...
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2011-01-01
Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied in heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
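For reference, the Welch test statistic itself is straightforward to compute. The sketch below implements the Welch one-way ANOVA F statistic, which does not assume equal group variances (the formula follows Welch's 1951 test; the groups are invented illustration data, and sample size planning as in the article would wrap a power calculation around this statistic):

```python
# Sketch: Welch's one-way ANOVA F statistic for groups with
# heterogeneous variances. Illustration data only.

def welch_anova_F(groups):
    """Return Welch's F for a list of samples (lists of floats)."""
    k = len(groups)
    ns = [len(g) for g in groups]
    means = [sum(g) / len(g) for g in groups]
    variances = [sum((x - m) ** 2 for x in g) / (len(g) - 1)
                 for g, m in zip(groups, means)]
    w = [n / v for n, v in zip(ns, variances)]  # precision weights
    sw = sum(w)
    grand = sum(wi * mi for wi, mi in zip(w, means)) / sw
    num = sum(wi * (mi - grand) ** 2 for wi, mi in zip(w, means)) / (k - 1)
    lam = sum((1 - wi / sw) ** 2 / (ni - 1) for wi, ni in zip(w, ns))
    den = 1 + 2 * (k - 2) * lam / (k ** 2 - 1)
    return num / den

if __name__ == "__main__":
    groups = [[1.0, 2.0, 3.0], [5.0, 7.0, 9.0], [1.5, 2.0, 2.5]]
    print(welch_anova_F(groups))  # large F: group means clearly differ
```

Because the weights are n_i / s_i², groups with larger variances contribute less, which is precisely why the conventional equal-variance sample size formula does not carry over.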
NASA Astrophysics Data System (ADS)
Accomazzi, Alberto; Kurtz, M. J.; Henneken, E. A.; Grant, C. S.; Thompson, D.; Luker, J.; Chyla, R.; Murray, S. S.
2014-01-01
In the spring of 1993, the Smithsonian/NASA Astrophysics Data System (ADS) first launched its bibliographic search system. It was known then as the ADS Abstract Service, a component of the larger Astrophysics Data System effort which had developed an interoperable data system now seen as a precursor of the Virtual Observatory. As a result of the massive technological and sociological changes in the field of scholarly communication, the ADS is now completing the most ambitious technological upgrade in its twenty-year history. Code-named ADS 2.0, the new system features: an IT platform built on web and digital library standards; a new, extensible, industrial strength search engine; a public API with various access control capabilities; a set of applications supporting search, export, visualization, analysis; a collaborative, open source development model; and enhanced indexing of content which includes the full-text of astronomy and physics publications. The changes in the ADS platform affect all aspects of the system and its operations, including: the process through which data and metadata are harvested, curated and indexed; the interface and paradigm used for searching the database; and the follow-up analysis capabilities available to the users. This poster describes the choices behind the technical overhaul of the system, the technology stack used, and the opportunities which the upgrade is providing us with, namely gains in productivity and enhancements in our system capabilities.
Unique Challenges Testing SDRs for Space
NASA Technical Reports Server (NTRS)
Chelmins, David; Downey, Joseph A.; Johnson, Sandra K.; Nappier, Jennifer M.
2013-01-01
This paper describes the approach used by the Space Communication and Navigation (SCaN) Testbed team to qualify three Software Defined Radios (SDR) for operation in space and the characterization of the platform to enable upgrades on-orbit. The three SDRs represent a significant portion of the new technologies being studied on board the SCaN Testbed, which is operating on an external truss on the International Space Station (ISS). The SCaN Testbed provides experimenters an opportunity to develop and demonstrate experimental waveforms and applications for communication, networking, and navigation concepts and advance the understanding of developing and operating SDRs in space. Qualifying a Software Defined Radio for the space environment requires additional consideration versus a hardware radio. Tests that incorporate characterization of the platform to provide information necessary for future waveforms, which might exercise extended capabilities of the hardware, are needed. The development life cycle for the radio follows the software development life cycle, where changes can be incorporated at various stages of development and test. It also enables flexibility to be added with minor additional effort. Although this provides tremendous advantages, managing the complexity inherent in a software implementation requires testing beyond the traditional hardware radio test plan. Due to schedule and resource limitations and parallel development activities, the subsystem testing of the SDRs at the vendor sites was primarily limited to typical fixed transceiver testing. NASA's Glenn Research Center (GRC) was responsible for integrating and testing the SDRs in the SCaN Testbed system and for conducting the investigation of the SDRs to advance the technology toward acceptance by missions. This paper will describe the unique tests that were conducted at both the subsystem and system level, including environmental testing, and present results. For example, test
Cantele, Francesca; Lanzavecchia, Salvatore; Bellon, Pier Luigi
2004-11-01
VIVA is a software library that obtains low-resolution models of icosahedral viruses from projections observed with the electron microscope. VIVA works in a fully automatic way without any initial model. This feature eliminates the possibility of bias that could originate from aligning the projections to an external preliminary model. VIVA determines the viewing direction of the virus images by computing sets of single-particle reconstructions (SPR), followed by a variance analysis and classification of the 3D models. All structures are reduced in size to speed up computation. This limits the resolution of a VIVA reconstruction. The models obtained can subsequently be refined with the use of standard libraries. To date, VIVA has successfully solved the structure of all viruses tested, some of which were considered refractory particles. The VIVA library is written in the 'C' language and is designed to run on widespread Linux computers.
Frequency Controllable Metamaterial Absorber by an Added Dielectric Layer
NASA Astrophysics Data System (ADS)
Li, Xiong; Feng, Qin; Luo, Xiangang; Hong, Minghui
2011-03-01
In this paper, we introduce a covering dielectric layer into the traditional metamaterial absorber (MA) constructed from periodic resonant split rings. The absorption frequency can be controlled simply by the permittivity and the thickness of the added layer, without affecting the shape of the absorptivity spectrum. Furthermore, the dielectric loss of the added layer does not noticeably influence the absorption characteristics when the loss is low. Based on these unique properties, a dynamically tunable MA can be realized by modulating a covering liquid dielectric layer.
Primordial fluctuations from complex AdS saddle points
Hertog, Thomas; Woerd, Ellen van der
2016-02-01
One proposal for dS/CFT is that the Hartle-Hawking (HH) wave function in the large volume limit is equal to the partition function of a Euclidean CFT deformed by various operators. All saddle points defining the semiclassical HH wave function in cosmology have a representation in which their interior geometry is part of a Euclidean AdS domain wall with complex matter fields. We compute the wave functions of scalar and tensor perturbations around homogeneous isotropic complex saddle points, turning on single scalar field matter only. We compare their predictions for the spectra of CMB perturbations with those of a different dS/CFT proposal based on the analytic continuation of inflationary universes to real asymptotically AdS domain walls. We find the predictions of both bulk calculations agree to first order in the slow roll parameters, but there is a difference at higher order which, we argue, is a signature of the HH state of the fluctuations.
Conserved charges in timelike warped AdS3 spaces
NASA Astrophysics Data System (ADS)
Donnay, L.; Fernández-Melgarejo, J. J.; Giribet, G.; Goya, A.; Lavia, E.
2015-06-01
We consider the timelike version of warped anti-de Sitter space (WAdS), which corresponds to the three-dimensional section of the Gödel solution of four-dimensional cosmological Einstein equations. This geometry presents closed timelike curves (CTCs), which are inherited from its four-dimensional embedding. In three dimensions, this type of solution can be supported without matter provided the graviton acquires mass. Here, among the different ways to consistently give mass to the graviton in three dimensions, we consider the parity-even model known as new massive gravity (NMG). In the bulk of timelike WAdS3 space, we introduce defects that, from the three-dimensional point of view, represent spinning massive particlelike objects. For this type of source, we investigate the definition of quasilocal gravitational energy as seen from infinity, far beyond the region where the CTCs appear. We also consider the covariant formalism applied to NMG to compute the mass and the angular momentum of spinning particlelike defects and compare the result with the one obtained by means of the quasilocal stress tensor. We apply these methods to special limits in which the WAdS3 solutions coincide with locally AdS3 and locally AdS2×R spaces. Finally, we make some comments about the asymptotic symmetry algebra of asymptotically WAdS3 spaces in NMG.
AdS nonlinear instability: moving beyond spherical symmetry
NASA Astrophysics Data System (ADS)
Dias, Óscar J. C.; Santos, Jorge E.
2016-12-01
Anti-de Sitter (AdS) is conjectured to be nonlinear unstable to a weakly turbulent mechanism that develops a cascade towards high frequencies, leading to black hole formation (Dafermos and Holzegel 2006 Seminar at DAMTP (University of Cambridge) available at https://dpmms.cam.ac.uk/~md384/ADSinstability.pdf, Bizon and Rostworowski 2011 Phys. Rev. Lett. 107 031102). We give evidence that the gravitational sector of perturbations behaves differently from the scalar one studied by Bizon and Rostworowski. In contrast with Bizon and Rostworowski, we find that not all gravitational normal modes of AdS can be nonlinearly extended into periodic horizonless smooth solutions of the Einstein equation. In particular, we show that even seeds with a single normal mode can develop secular resonances, unlike the spherically symmetric scalar field collapse studied by Bizon and Rostworowski. Moreover, if the seed has two normal modes, more than one resonance can be generated at third order, unlike the spherical collapse of Bizon and Rostworowski. We also show that weak turbulent perturbative theory predicts the existence of direct and inverse cascades, with the former dominating the latter for equal energy two-mode seeds.
Technology Transfer Automated Retrieval System (TEKTRAN)
UV spectral fingerprints, in combination with analysis of variance-principal components analysis (ANOVA-PCA), were used to identify sources of variance in 7 broccoli samples composed of two cultivars and seven different growing conditions (four levels of Se irrigation, organic farming, and convention...
Strings on AdS wormholes and nonsingular black holes
NASA Astrophysics Data System (ADS)
Lü, H.; Vázquez-Poritz, Justin F.; Zhang, Zhibai
2015-01-01
Certain AdS black holes in the STU model can be conformally scaled to wormhole and black hole backgrounds which have two asymptotically AdS regions and are completely free of curvature singularities. While there is a delta-function source for the dilaton, classical string probes are not sensitive to this singularity. According to the AdS/CFT correspondence, the dual field theory lives on the union of the disjoint boundaries. For the wormhole background, causal contact exists between the two boundaries and the structure of certain correlation functions is indicative of an interacting phase for which there is a coupling between the degrees of freedom living at each boundary. The nonsingular black hole describes an entangled state in two non-interacting identical conformal field theories. By studying the behavior of open strings on these backgrounds, we extract a number of features of the ‘quarks’ and ‘anti-quarks’ that live in the field theories. In the interacting phase, we find that there is a maximum speed with which the quarks can move without losing energy, beyond which energy is transferred from a quark in one field theory to a quark in the other. We also compute the rate at which moving quarks within entangled states lose energy to the two surrounding plasmas. While a quark-antiquark pair within a single field theory exhibits Coulomb interaction for small separation, a quark in one field theory exhibits spring-like confinement with an anti-quark in the other field theory. For the entangled states, we study how the quark-antiquark screening length depends on temperature and chemical potential.
The association of Kienbock's disease and ulnar variance in the Iranian population.
Afshar, A; Aminzadeh-Gohari, A; Yekta, Z
2013-06-01
We retrospectively determined the distribution of ulnar variance in 60 patients with Kienböck's disease. We also measured the ulnar variances in 400 standard wrist radiographs in the normal adult population. The mean ulnar variance of the Kienböck's group was -1.1 mm (SD 1.7) and the mean ulnar variance of the general population was +0.7 (SD 1.5), which was significantly different. In the Kienböck's disease group there were 38 (63%) with ulnar negative, 16 (27%) neutral and six (10%) with ulnar positive variance. The preponderance of ulnar negative variance was statistically significant. There was an association between ulnar negative variance and the development of Kienböck's disease in this study.
Constructing Dense Graphs with Unique Hamiltonian Cycles
ERIC Educational Resources Information Center
Lynch, Mark A. M.
2012-01-01
It is not difficult to construct dense graphs containing Hamiltonian cycles, but it is difficult to generate dense graphs that are guaranteed to contain a unique Hamiltonian cycle. This article presents an algorithm for generating arbitrarily large simple graphs containing "unique" Hamiltonian cycles. These graphs can be turned into dense graphs…
Separating Growth from Value Added
ERIC Educational Resources Information Center
Yeagley, Raymond
2007-01-01
This article discusses Rochester's two academic models that offer different tools for different purposes--measuring individual learning and measuring what affects learning. The main focus of currently available growth measures is formative assessment--providing data to inform instructional planning. Value-added assessment is not a student…
Adding Value to Indiana's Commodities.
ERIC Educational Resources Information Center
Welch, Mary A., Ed.
1995-01-01
Food processing plants are adding value to bulk and intermediate products to sell overseas. The Asian Pacific Rim economies constituted the largest market for consumer food products in 1993. This shift toward consumer food imports in this area is due to more women working outside the home, the internationalization of populations, and dramatic…
Courtship American Style: Newspaper Ads
ERIC Educational Resources Information Center
Cameron, Catherine; And Others
1977-01-01
This study investigated an increasing social phenomenon--newspaper advertising for dating or marital partners--in terms of the bargaining process involved. Content analysis of personal ads in a popular "respectable" singles newspaper revealed a pattern of offers and requests reminiscent of a heterosexual stock market. (Author)
Beyond the GUM: variance-based sensitivity analysis in metrology
NASA Astrophysics Data System (ADS)
Lira, I.
2016-07-01
Variance-based sensitivity analysis is a well-established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiar with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities, and if these quantities are assumed to be statistically independent, sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand.
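As a concrete illustration of the variance-based indices discussed above, the following sketch estimates first-order Sobol' indices by Monte Carlo for a simple non-linear model. The model, sample size, and Saltelli-style "pick-freeze" estimator are illustrative choices, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

def model(x1, x2, x3):
    # Illustrative non-linear measurement model (not from the article)
    return x1 + 2.0 * x2**2 + 0.5 * x1 * x3

# Two independent sample matrices for the pick-freeze estimator
A = rng.normal(size=(N, 3))
B = rng.normal(size=(N, 3))
yA = model(*A.T)
yB = model(*B.T)
var_y = yA.var()

# First-order Sobol' index of input i: fraction of output variance
# explained by x_i alone (Saltelli-style estimator)
S = []
for i in range(3):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # replace only column i with fresh samples
    yABi = model(*ABi.T)
    S.append(np.mean(yB * (yABi - yA)) / var_y)
```

For this model the indices can be checked analytically: the x2² term carries most of the variance, while x3 enters only through an interaction and has a first-order index near zero.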
Cosmic variance and the measurement of the local Hubble parameter.
Marra, Valerio; Amendola, Luca; Sawicki, Ignacy; Valkenburg, Wessel
2013-06-14
There is an approximately 9% discrepancy, corresponding to 2.4 σ, between two independent constraints on the expansion rate of the Universe: one indirectly arising from the cosmic microwave background and baryon acoustic oscillations and one more directly obtained from local measurements of the relation between redshifts and distances to sources. We argue that by taking into account the local gravitational potential at the position of the observer this tension--strengthened by the recent Planck results--is partially relieved and the concordance of the Standard Model of cosmology increased. We estimate that measurements of the local Hubble constant are subject to a cosmic variance of about 2.4% (limiting the local sample to redshifts z > 0.010) or 1.3% (limiting it to z > 0.023), a more significant correction than that taken into account already. Nonetheless, we show that one would need a very rare fluctuation to fully explain the offset in the Hubble rates. If this tension is further strengthened, a cosmology beyond the Standard Model may prove necessary.
Linear constraint minimum variance beamformer functional magnetic resonance inverse imaging.
Lin, Fa-Hsuan; Witzel, Thomas; Zeffiro, Thomas A; Belliveau, John W
2008-11-01
Accurate estimation of the timing of neural activity is required to fully model the information flow among functionally specialized regions whose joint activity underlies perception, cognition and action. Attempts to detect the fine temporal structure of task-related activity would benefit from functional imaging methods allowing higher sampling rates. Spatial filtering techniques have been used in magnetoencephalography source imaging applications. In this work, we use the linear constraint minimum variance (LCMV) beamformer localization method to reconstruct single-shot volumetric functional magnetic resonance imaging (fMRI) data using signals acquired simultaneously from all channels of a high density radio-frequency (RF) coil array. The LCMV beamformer method generalizes the existing volumetric magnetic resonance inverse imaging (InI) technique, achieving higher detection sensitivity while maintaining whole-brain spatial coverage and 100 ms temporal resolution. In this paper, we begin by introducing the LCMV reconstruction formulation and then quantitatively assess its performance using both simulated and empirical data. To demonstrate the sensitivity and inter-subject reliability of volumetric LCMV InI, we employ an event-related design to probe the spatial and temporal properties of task-related hemodynamic signal modulations in primary visual cortex. Compared to minimum-norm estimate (MNE) reconstructions, LCMV offers better localization accuracy and superior detection sensitivity. Robust results from both single subject and group analyses demonstrate the excellent sensitivity and specificity of volumetric InI in detecting the spatial and temporal structure of task-related brain activity.
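The LCMV weight formula underlying this family of methods is standard: w = R⁻¹a / (aᴴR⁻¹a), which passes a target forward vector a with unit gain while minimizing output variance from everything else. A minimal numpy sketch on simulated sensor data; the dimensions, noise level, and orthogonalized forward vectors are illustrative assumptions, not the paper's fMRI coil-array setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch, n_t = 8, 5000

# Hypothetical forward (gain) vectors for two sources; orthogonalized
# here for a clean illustration -- not a real coil-array model
a1 = rng.normal(size=n_ch); a1 /= np.linalg.norm(a1)
a2 = rng.normal(size=n_ch)
a2 -= (a2 @ a1) * a1; a2 /= np.linalg.norm(a2)

# Simulated sensor data: two source time courses plus sensor noise
s1 = rng.normal(size=n_t)
s2 = rng.normal(size=n_t)
X = np.outer(a1, s1) + np.outer(a2, s2) + 0.1 * rng.normal(size=(n_ch, n_t))

R = X @ X.T / n_t                 # sample spatial covariance
Rinv_a1 = np.linalg.solve(R, a1)
w = Rinv_a1 / (a1 @ Rinv_a1)      # LCMV: unit gain toward a1, minimum
                                  # output variance from everything else
s1_hat = w @ X                    # reconstructed time course of source 1
```

By construction w @ a1 equals 1 (the unit-gain constraint), while the contribution of the second source is strongly suppressed, so s1_hat tracks s1 closely.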
Analysis of variance (ANOVA) models in lower extremity wounds.
Reed, James F
2003-06-01
Consider a study in which 2 new treatments are being compared with a control group. One way to compare outcomes would be to compare each treatment with the control and the 2 treatments against each other using 3 Student t tests. If we were to compare 4 treatment groups, we would need 6 t tests. The difficulty with using multiple t tests is that as the number of groups increases, so does the likelihood of finding a difference between some pair of groups simply by chance when no real difference exists: by definition, a Type I error. If we perform 3 separate t tests, each at alpha = .05, the experiment-wise error rate increases to .14, and it grows rapidly as the number of t tests increases. The solution to this problem is to use analysis of variance (ANOVA) methods. Three basic ANOVA designs are reviewed, with hypothetical examples drawn from the literature to illustrate single-factor ANOVA, repeated measures ANOVA, and randomized block ANOVA. "No frills" SPSS or SAS code for each of these designs and examples is available from the author on request.
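The error rate quoted above follows from 1 − (1 − α)ᵏ for k independent tests. The sketch below verifies the .14 figure and computes a single-factor ANOVA F statistic by hand; the group means, spread, and sample sizes are invented for illustration.

```python
import numpy as np

# Experiment-wise Type I error rate for k independent t tests at alpha = .05
alpha, k = 0.05, 3
familywise = 1 - (1 - alpha) ** k      # 0.142..., the .14 quoted above

# Single-factor ANOVA replaces the three pairwise t tests with one F test
rng = np.random.default_rng(7)
control = rng.normal(10.0, 2.0, size=20)
treat_a = rng.normal(12.0, 2.0, size=20)   # invented effect sizes
treat_b = rng.normal(10.5, 2.0, size=20)

groups = [control, treat_a, treat_b]
all_x = np.concatenate(groups)
grand = all_x.mean()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
F = (ss_between / (len(groups) - 1)) / (ss_within / (len(all_x) - len(groups)))
```

The F statistic would then be compared against the F distribution with (2, 57) degrees of freedom; statistics packages (e.g. scipy.stats.f_oneway) wrap exactly this computation.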
A model selection approach to analysis of variance and covariance.
Alber, Susan A; Weiss, Robert E
2009-06-15
An alternative to analysis of variance is a model selection approach where every partition of the treatment means into clusters with equal value is treated as a separate model. The null hypothesis that all treatments are equal corresponds to the partition with all means in a single cluster. The alternative hypothesis corresponds to the set of all other partitions of treatment means. A model selection approach can also be used for a treatment-by-covariate interaction, where the null hypothesis and each alternative correspond to a partition of treatments into clusters with equal covariate effects. We extend the partition-as-model approach to simultaneous inference for both the treatment main effect and the treatment interaction with a continuous covariate, with separate partitions for the intercepts and treatment-specific slopes. The model space is the Cartesian product of the intercept partition and the slope partition, and we develop five joint priors for this model space. In four of these priors the intercept and slope partitions are dependent. We advise on setting priors over models, and we use the model to analyze an orthodontic data set that compares the frictional resistance created by orthodontic fixtures.
Analysis of variance in neuroreceptor ligand imaging studies.
Ko, Ji Hyun; Reilhac, Anthonin; Ray, Nicola; Rusjan, Pablo; Bloomfield, Peter; Pellecchia, Giovanna; Houle, Sylvain; Strafella, Antonio P
2011-01-01
Radioligand positron emission tomography (PET) with dual scan paradigms can provide valuable insight into changes in synaptic neurotransmitter concentration due to experimental manipulation. The residual t-test has been utilized to improve the sensitivity of the t-test in PET studies. However, no further development of statistical tests using residuals has so far been proposed for cases with more than two conditions. Here, we propose the residual f-test, a one-way analysis of variance (ANOVA), and examine its feasibility using simulated [11C]raclopride PET data. We also revisit data from our previously published [11C]raclopride PET study, in which 10 individuals underwent three PET scans under different conditions. We found that the residual f-test is more sensitive than the conventional f-test while still controlling for Type I error. The test will therefore allow us to reliably test hypotheses in the smaller sample sizes often used in exploratory PET studies.
Chromatic visualization of reflectivity variance within hybridized directional OCT images
NASA Astrophysics Data System (ADS)
Makhijani, Vikram S.; Roorda, Austin; Bayabo, Jan Kristine; Tong, Kevin K.; Rivera-Carpio, Carlos A.; Lujan, Brandon J.
2013-03-01
This study presents a new method of visualizing hybridized images of retinal spectral domain optical coherence tomography (SDOCT) data comprised of varied directional reflectivity. Due to the varying reflectivity of certain retinal structures relative to angle of incident light, SDOCT images obtained with differing entry positions result in nonequivalent images of corresponding cellular and extracellular structures, especially within layers containing photoreceptor components. Harnessing this property, cross-sectional pathologic and non-pathologic macular images were obtained from multiple pupil entry positions using commercially-available OCT systems, and custom segmentation, alignment, and hybridization algorithms were developed to chromatically visualize the composite variance of reflectivity effects. In these images, strong relative reflectivity from any given direction visualizes as relative intensity of its corresponding color channel. Evident in non-pathologic images was marked enhancement of Henle's fiber layer (HFL) visualization and varying reflectivity patterns of the inner limiting membrane (ILM) and photoreceptor inner/outer segment junctions (IS/OS). Pathologic images displayed similar and additional patterns. Such visualization may allow a more intuitive understanding of structural and physiologic processes in retinal pathologies.
Prediction of membrane protein types using maximum variance projection
NASA Astrophysics Data System (ADS)
Wang, Tong; Yang, Jie
2011-05-01
Predicting membrane protein types has a positive influence on further biological function analysis. Quickly and efficiently annotating the type of an uncharacterized membrane protein is a challenge. In this work, a system based on maximum variance projection (MVP) is proposed to improve the prediction performance of membrane protein types. The feature extraction step is based on a hybridization representation approach by fusing Position-Specific Score Matrix composition. The protein sequences are quantized in a high-dimensional space using this representation strategy. Analysing these high-dimensional feature vectors raises problems such as long computing times and high classifier complexity. To solve this issue, MVP, a novel dimensionality reduction algorithm, is introduced to extract the essential features from the high-dimensional feature space. Then, a K-nearest neighbour classifier is employed to identify the types of membrane proteins based on their reduced low-dimensional features. As a result, the jackknife and independent dataset test success rates of this model reach 86.1 and 88.4%, respectively, suggesting that the proposed approach is very promising for predicting membrane protein types.
A fast minimum variance beamforming method using principal component analysis.
Kim, Kyuhong; Park, Suhyun; Kim, Jungho; Park, Sung-Bae; Bae, MooHo
2014-06-01
Minimum variance (MV) beamforming has been studied for improving the performance of a diagnostic ultrasound imaging system. However, it is not easy for the MV beamforming to be implemented in a real-time ultrasound imaging system because of the enormous amount of computation time associated with the covariance matrix inversion. In this paper, to address this problem, we propose a new fast MV beamforming method that almost optimally approximates the MV beamforming while reducing the computational complexity greatly through dimensionality reduction using principal component analysis (PCA). The principal components are estimated offline from pre-calculated conventional MV weights. Thus, the proposed method does not directly calculate the MV weights but approximates them by a linear combination of a few selected dominant principal components. The combinational weights are calculated in almost the same way as in MV beamforming, but in the transformed domain of beamformer input signal by the PCA, where the dimension of the transformed covariance matrix is identical to the number of some selected principal component vectors. Both computer simulation and experiment were carried out to verify the effectiveness of the proposed method with echo signals from simulation as well as phantom and in vivo experiments. It is confirmed that our method can reduce the dimension of the covariance matrix down to as low as 2 × 2 while maintaining the good image quality of MV beamforming.
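The core idea above, approximating precomputed MV weights by a few dominant principal components so that only a tiny covariance matrix is handled online, can be sketched as follows. The weight set here is synthetic and constructed by assumption to lie near a 2-d subspace, which is the regime the approximation exploits.

```python
import numpy as np

rng = np.random.default_rng(5)
n_ch, n_pts = 64, 500

# Stand-ins for conventional MV weights pre-calculated offline for many
# imaging points, constructed (by assumption) to lie near a 2-d subspace
basis = np.linalg.qr(rng.normal(size=(n_ch, 2)))[0]
coeff = rng.normal(size=(2, n_pts))
W = basis @ coeff + 0.005 * rng.normal(size=(n_ch, n_pts))

# Offline step: dominant principal components of the weight set
Wc = W - W.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(Wc, full_matrices=False)
V = U[:, :2]              # keep 2 components: the covariance matrix
                          # handled online shrinks from 64x64 to 2x2

# Online step: approximate every weight vector in the component space
W_hat = V @ (V.T @ Wc) + W.mean(axis=1, keepdims=True)
rel_err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
```

When the weights really do concentrate in a low-dimensional subspace, the relative approximation error stays small while the per-pixel matrix inversion cost drops from O(n_ch³) to essentially constant.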
Fast Minimum Variance Beamforming Based on Legendre Polynomials.
Bae, MooHo; Park, Sung Bae; Kwon, Sung Jae
2016-09-01
Currently, minimum variance (MV) beamforming is actively investigated as a method that can improve the performance of an ultrasound beamformer in terms of lateral and contrast resolution. However, this method has the disadvantage of excessive computational complexity, since the inverse spatial covariance matrix must be calculated. Noteworthy attempts to solve this problem include beam-space adaptive beamforming methods and the fast MV method based on principal component analysis, which are similar in that the original signal in the element space is transformed to another domain using an orthonormal basis matrix and the dimension of the covariance matrix is reduced by approximating the matrix only with its important components, hence making the inversion of the matrix very simple. Recently, we proposed a new method with further reduced computational demand that uses Legendre polynomials as the basis matrix for such a transformation. In this paper, we verify the efficacy of the proposed method through Field II simulations as well as in vitro and in vivo experiments. The results show that the approximation error of this method is less than or similar to those of the above-mentioned methods, and that the lateral response of point targets and the contrast-to-speckle noise in anechoic cysts are also better than or similar to those of the above-mentioned methods when the dimensionality of the covariance matrices is reduced to the same dimension.
Anatomically constrained minimum variance beamforming applied to EEG.
Murzin, Vyacheslav; Fuchs, Armin; Kelso, J A Scott
2011-10-01
Neural activity as measured non-invasively using electroencephalography (EEG) or magnetoencephalography (MEG) originates in the cortical gray matter. In the cortex, pyramidal cells are organized in columns and activated coherently, leading to current flow perpendicular to the cortical surface. In recent years, beamforming algorithms have been developed, which use this property as an anatomical constraint for the locations and directions of potential sources in MEG data analysis. Here, we extend this work to EEG recordings, which require a more sophisticated forward model due to the blurring of the electric current at tissue boundaries where the conductivity changes. Using CT scans, we create a realistic three-layer head model consisting of tessellated surfaces that represent the cerebrospinal fluid-skull, skull-scalp, and scalp-air boundaries. The cortical gray matter surface, the anatomical constraint for the source dipoles, is extracted from MRI scans. EEG beamforming is implemented on simulated sets of EEG data for three different head models: single spherical, multi-shell spherical, and multi-shell realistic. Using the same conditions for simulated EEG and MEG data, it is shown (and quantified by receiver operating characteristic analysis) that EEG beamforming detects radially oriented sources, to which MEG lacks sensitivity. By merging several techniques, such as linearly constrained minimum variance beamforming, realistic geometry forward solutions, and cortical constraints, we demonstrate it is possible to localize and estimate the dynamics of dipolar and spatially extended (distributed) sources of neural activity.
Osteotomy for Sigmoid Notch Obliquity and Ulnar Positive Variance
Dickson, Lisa M.; Tham, Stephen K. Y.
2014-01-01
Background Several causes of ulnar wrist pain have been described. One uncommon cause is ulnar carpal abutment associated with a notable distally facing sigmoid notch (reverse obliquity). Such an abnormality cannot be treated with ulnar shortening alone because it will result in incongruity of the distal radioulnar joint (DRUJ). Case Description A 23-year-old woman presented with ulnar wrist pain aggravated by forearm rotation. Ten years earlier she had sustained a distal radius fracture that was conservatively treated. Examination revealed mild tenderness at the DRUJ and decreased wrist flexion and grip strength on the affected side. Radiographic examination demonstrated 1 cm ulnar positive variance, ulnar styloid nonunion, and a 37° reverse obliquity of the sigmoid notch. The patient was treated with ulnar shortening and rotation sigmoid notch osteotomy to realign the sigmoid notch with the ulnar head. Literature Review Sigmoid notch incongruity is one of several causes of wrist pain after distal radius fracture. Traditional salvage options for DRUJ arthritis may result in loss of grip strength, painful ulnar shaft instability, or reossification and are not acceptable options in the young patient. Sigmoid notch osteotomy or osteoplasty have been described to correct the shape of the sigmoid notch in the axial plane. Clinical Relevance We report a coronal plane osteotomy of the sigmoid notch to treat reverse obliquity of the sigmoid notch associated with ulnar carpal abutment. The rotation osteotomy described is particularly useful for patients in whom a salvage procedure is not warranted. PMID:24533247
Cost/variance optimization for human exposure assessment studies.
Whitmore, Roy W; Pellizzari, Edo D; Zelon, Harvey S; Michael, Larry C; Quackenboss, James J
2005-11-01
The National Human Exposure Assessment Survey (NHEXAS) field study in EPA Region V (one of three NHEXAS field studies) provides extensive exposure data on a representative sample of 249 residents of the Great Lakes states. Concentration data were obtained for both metals and volatile organic compounds (VOCs) from multiple environmental media and from human biomarkers. A variance model for the logarithms of concentration measurements is used to define intraclass correlations between observations within primary sampling units (PSUs) (nominally counties) and within secondary sampling units (SSUs) (nominally Census blocks). A model for the total cost of the study is developed in terms of fixed costs and variable costs per PSU, SSU, and participant. Intraclass correlations are estimated for media and analytes with sufficient sample sizes. We demonstrate how the intraclass correlations and variable cost components can be used to determine the sample allocation that minimizes cost while achieving pre-specified precision constraints for future studies that monitor environmental concentrations and human exposures for metals and VOCs.
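The allocation problem described above has a classical closed form in the two-stage case: the subsample size per cluster that minimizes variance at fixed cost (or cost at fixed variance) depends only on the cost ratio and the intraclass correlation. A sketch with invented cost figures, not NHEXAS estimates:

```python
import math

def optimal_cluster_size(c_cluster, c_element, icc):
    """Cost-optimal number of elements per cluster in a two-stage sample.

    c_cluster : marginal cost of adding one more cluster (e.g. one SSU)
    c_element : marginal cost of one more element (e.g. one participant)
    icc       : intraclass correlation of measurements within a cluster
    """
    return math.sqrt((c_cluster / c_element) * (1 - icc) / icc)

# Illustrative figures (not from NHEXAS): recruiting a Census block costs
# 40x one participant, and within-block correlation is 0.05
n_star = optimal_cluster_size(c_cluster=400.0, c_element=10.0, icc=0.05)
```

High intraclass correlation pushes the optimum toward many clusters with few participants each; cheap participants relative to clusters push it the other way.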
Neutrality and the Response of Rare Species to Environmental Variance
Benedetti-Cecchi, Lisandro; Bertocci, Iacopo; Vaselli, Stefano; Maggi, Elena; Bulleri, Fabio
2008-01-01
Neutral models and differential responses of species to environmental heterogeneity offer complementary explanations of species abundance distribution and dynamics. Under what circumstances one model prevails over the other is still a matter of debate. We show that the decay of similarity over time in rocky seashore assemblages of algae and invertebrates sampled over a period of 16 years was consistent with the predictions of a stochastic model of ecological drift at time scales larger than 2 years, but not at time scales between 3 and 24 months when similarity was quantified with an index that reflected changes in abundance of rare species. A field experiment was performed to examine whether assemblages responded neutrally or non-neutrally to changes in temporal variance of disturbance. The experimental results did not reject neutrality, but identified a positive effect of intermediate levels of environmental heterogeneity on the abundance of rare species. This effect translated into a marked decrease in the characteristic time scale of species turnover, highlighting the role of rare species in driving assemblage dynamics in fluctuating environments. PMID:18648545
Minding Impacting Events in a Model of Stochastic Variance
Duarte Queirós, Sílvio M.; Curado, Evaldo M. F.; Nobre, Fernando D.
2011-01-01
We introduce a generalization of the well-known ARCH process, widely used for generating uncorrelated stochastic time series with long-term non-Gaussian distributions and long-lasting correlations in the (instantaneous) standard deviation exhibiting a clustering profile. Specifically, inspired by the fact that in a variety of systems impacting events are hardly forgotten, we split the process into two different regimes: a first one for regular periods, where the average volatility of the fluctuations within a certain period of time is below a certain threshold, and another one in which the local standard deviation exceeds that threshold. In the former situation we use standard rules for heteroscedastic processes, whereas in the latter case the system starts recalling past values that surpassed the threshold. Our results show that for appropriate parameter values the model is able to provide fat-tailed probability density functions and strong persistence of the instantaneous variance, characterized by large values of the Hurst exponent, which are ubiquitous features in complex systems. PMID:21483864
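A toy simulation of the two-regime idea can be written in a few lines. The recall rule below (drawing uniformly from past threshold-exceeding squared values) is a deliberate simplification of the process defined in the paper, and every parameter value is illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)
T = 20_000
a0, a1 = 0.1, 0.6          # ordinary ARCH(1) parameters (illustrative)
thresh = 1.5               # volatility threshold separating the regimes

x = np.zeros(T)
sig2 = np.ones(T)
memory = [1.0]             # past squared values that exceeded the threshold

for t in range(1, T):
    if abs(x[t - 1]) > thresh:
        # "impacting event": store it and recall a past large value
        memory.append(x[t - 1] ** 2)
        sig2[t] = a0 + a1 * rng.choice(memory)
    else:
        sig2[t] = a0 + a1 * x[t - 1] ** 2   # standard heteroscedastic update
    x[t] = np.sqrt(sig2[t]) * rng.normal()

# Excess kurtosis well above 0 signals the fat tails mentioned above
kurt = ((x - x.mean()) ** 4).mean() / x.var() ** 2 - 3.0
```

Even the plain ARCH(1) component produces heavy tails at these parameter values; the recall regime additionally prolongs high-volatility episodes, which is what drives the persistence the abstract describes.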
Lung vasculature imaging using speckle variance optical coherence tomography
NASA Astrophysics Data System (ADS)
Cua, Michelle; Lee, Anthony M. D.; Lane, Pierre M.; McWilliams, Annette; Shaipanich, Tawimas; MacAulay, Calum E.; Yang, Victor X. D.; Lam, Stephen
2012-02-01
Architectural changes in and remodeling of the bronchial and pulmonary vasculature are important pathways in diseases such as asthma, chronic obstructive pulmonary disease (COPD), and lung cancer. However, there is a lack of methods that can find and examine small bronchial vasculature in vivo. Structural lung airway imaging using optical coherence tomography (OCT) has previously been shown to be of great utility in examining bronchial lesions during lung cancer screening under the guidance of autofluorescence bronchoscopy. Using a fiber-optic endoscopic OCT probe, we acquire OCT images from in vivo human subjects. The side-looking, circumferentially scanning probe is inserted down the instrument channel of a standard bronchoscope and manually guided to the imaging location. Multiple images are collected with the probe spinning proximally at 100 Hz. Due to friction, the distal end of the probe does not spin perfectly synchronously with the proximal end, resulting in non-uniform rotational distortion (NURD) of the images. First, we apply a correction algorithm to remove NURD. We then use a speckle variance algorithm to identify vasculature. The initial data show a vasculature density in small human airways similar to what would be expected.
Bush, B.; Jenkin, T.; Lipowicz, D.; Arent, D. J.; Cooke, R.
2012-01-01
Does large scale penetration of renewable generation such as wind and solar power pose economic and operational burdens on the electricity system? A number of studies have pointed to the potential benefits of renewable generation as a hedge against the volatility and potential escalation of fossil fuel prices. Research also suggests that the lack of correlation of renewable energy costs with fossil fuel prices means that adding large amounts of wind or solar generation may also reduce the volatility of system-wide electricity costs. Such variance reduction of system costs may be of significant value to consumers due to risk aversion. The analysis in this report recognizes that the potential value of risk mitigation associated with wind generation and natural gas generation may depend on whether one considers the consumer's perspective or the investor's perspective and whether the market is regulated or deregulated. We analyze the risk and return trade-offs for wind and natural gas generation for deregulated markets based on hourly prices and load over a 10-year period using historical data in the PJM Interconnection (PJM) from 1999 to 2008. Similar analysis is then simulated and evaluated for regulated markets under certain assumptions.
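The variance-reduction claim above is ordinary portfolio arithmetic: when one generation source's costs are uncorrelated with fuel prices, blending it in lowers the standard deviation of system-wide costs. A sketch with invented volatility figures (not values from the report):

```python
import numpy as np

# Illustrative (invented) annual cost volatilities as standard deviations
sd_gas, sd_wind = 0.30, 0.10   # wind costs are stable; gas tracks fuel prices
rho = 0.0                      # wind costs uncorrelated with fossil fuels

def mix_sd(w_wind):
    """Std dev of system-wide cost for a given wind share w_wind."""
    w_gas = 1.0 - w_wind
    var = ((w_gas * sd_gas) ** 2 + (w_wind * sd_wind) ** 2
           + 2 * rho * w_gas * w_wind * sd_gas * sd_wind)
    return np.sqrt(var)

all_gas = mix_sd(0.0)          # volatility of an all-gas system
blended = mix_sd(0.3)          # volatility with a 30% wind share
```

Under these assumed numbers the blended mix is noticeably less volatile than the all-gas system, which is the hedging effect, valuable to risk-averse consumers, that the report quantifies.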
An investigation of AdS2 backreaction and holography
NASA Astrophysics Data System (ADS)
Engelsöy, Julius; Mertens, Thomas G.; Verlinde, Herman
2016-07-01
We investigate a dilaton gravity model in AdS2 proposed by Almheiri and Polchinski [1] and develop a 1d effective description in terms of a dynamical boundary time with a Schwarzian derivative action. We show that the effective model is equivalent to a 1d version of Liouville theory, and investigate its dynamics and symmetries via a standard canonical framework. We include the coupling to arbitrary conformal matter and analyze the effective action in the presence of possible sources. We compute commutators of local operators at large time separation, and match the result with the time shift due to a gravitational shockwave interaction. We study a black hole evaporation process and comment on the role of entropy in this model.
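For reference, the Schwarzian derivative appearing in the boundary action is the textbook expression; the schematic action below is generic to these models and not the paper's precise normalization.

```latex
\{f(t),t\} \;=\; \frac{f'''(t)}{f'(t)} \;-\; \frac{3}{2}\left(\frac{f''(t)}{f'(t)}\right)^{2},
\qquad
S_{\text{bdy}} \;\propto\; -\int \mathrm{d}t\,\{f(t),t\}
```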
Supersymmetry Properties of AdS Supergravity Backgrounds
NASA Astrophysics Data System (ADS)
Beck, Samuel; Gutowski, Jan; Papadopoulos, George
2017-01-01
Anti-de Sitter supergravity backgrounds are of particular interest in light of the AdS/CFT correspondence, which relates them to dual conformal field theories on the boundary of the anti-de Sitter space. We have investigated the forms of the supersymmetries these backgrounds preserve by solving the Killing spinor equations on the anti-de Sitter components of these spaces. We have found that a supersymmetric AdSn background necessarily preserves 2^⌊n/2⌋ k supersymmetries for n ≤ 4 and 2^(⌊n/2⌋+1) k supersymmetries for 4 < n ≤ 7, where k is a positive integer. Additionally, we have found that the Killing spinors of each background are exactly the zeroes of a Dirac-like operator constructed from the Killing spinor equations.
The Massive Wave Equation in Asymptotically AdS Spacetimes
NASA Astrophysics Data System (ADS)
Warnick, C. M.
2013-07-01
We consider the massive wave equation on asymptotically AdS spaces. We show that the timelike conformal boundary ℐ behaves like a finite timelike boundary, on which one may impose the equivalent of Dirichlet, Neumann or Robin conditions for a range of (negative) mass parameters which includes the conformally coupled case. We demonstrate well-posedness for the associated initial-boundary value problems at the H¹ level of regularity. We also prove that higher regularity may be obtained, together with an asymptotic expansion for the field near ℐ. The proofs rely on energy methods, tailored to the modified energy introduced by Breitenlohner and Freedman. We do not assume the spacetime is stationary, nor that the wave equation separates.
On jordanian deformations of AdS5 and supergravity
NASA Astrophysics Data System (ADS)
Hoare, Ben; van Tongeren, Stijn J.
2016-10-01
We consider various homogeneous Yang-Baxter deformations of the AdS5 × S5 superstring that can be obtained from the η-deformed superstring and related models by singular boosts. The jordanian deformations we obtain in this way behave similarly to the η-deformed model with regard to supergravity: by T-dualizing the classical sigma model it is possible to find corresponding solutions of supergravity, which, however, have dilatons that prevent T-dualizing back. Hence the backgrounds of these jordanian deformations are not solutions of supergravity. Still, they do satisfy a set of recently found modified supergravity equations, which implies that the corresponding sigma models are scale invariant. The abelian models that we obtain by singular boosts do directly correspond to solutions of supergravity. In addition to our main results we consider contraction limits of our main example, which do correspond to supergravity solutions.
Aspects of warped AdS3/CFT2 correspondence
NASA Astrophysics Data System (ADS)
Chen, Bin; Zhang, Jia-Ju; Zhang, Jian-Dong; Zhong, De-Liang
2013-04-01
In this paper we apply the thermodynamics method to investigate the holographic pictures for the BTZ black hole and the spacelike and null warped black holes in three-dimensional topologically massive gravity (TMG) and new massive gravity (NMG). Even though there are higher-derivative terms in these theories, the thermodynamics method is still effective. It gives results consistent with those obtained using asymptotic symmetry group (ASG) analysis. In doing the ASG analysis we develop a brute-force realization of the Barnich-Brandt-Compere formalism with Mathematica code, which also allows us to calculate the masses and the angular momenta of the black holes. In particular, we propose the warped AdS3/CFT2 correspondence in the new massive gravity, which states that quantum gravity in the warped spacetime could be holographically dual to a two-dimensional CFT with c_R = c_L = 24 / (Gm β² √(2(21 - 4β²))).
Systematics of Coupling Flows in AdS Backgrounds
Goldberger, Walter D.; Rothstein, Ira Z.
2003-03-18
We give an effective field theory derivation, based on the running of Planck brane gauge correlators, of the large logarithms that arise in the predictions for low energy gauge couplings in compactified AdS_5 backgrounds, including the one-loop effects of bulk scalars, fermions, and gauge bosons. In contrast to the case of charged scalars coupled to Abelian gauge fields that has been considered previously in the literature, the one-loop corrections are not dominated by a single 4D Kaluza-Klein mode. Nevertheless, in the case of gauge field loops, the amplitudes can be reorganized into a leading logarithmic contribution that is identical to the running in 4D non-Abelian gauge theory, and a term which is not logarithmically enhanced and is analogous to a two-loop effect in 4D. In a warped GUT model broken by the Higgs mechanism in the bulk, we show that the matching scale that appears in the large logarithms induced by the non-Abelian gauge fields is m_{XY}^2/k, where m_{XY} is the bulk mass of the XY bosons and k is the AdS curvature. This is in contrast to the UV scale in the logarithmic contributions of scalars, which is simply the bulk mass m. Our results are summarized in a set of simple rules that can be applied to compute the leading logarithmic predictions for coupling constant relations within a given warped GUT model. We present results for both bulk Higgs and boundary breaking of the GUT gauge symmetry.
Holography beyond conformal invariance and AdS isometry?
Barvinsky, A. O.
2015-03-15
We suggest that the principle of holographic duality be extended beyond conformal invariance and AdS isometry. Such an extension is based on a special relation between functional determinants of the operators acting in the bulk and on its boundary, provided that the boundary operator represents the inverse propagators of the theory induced on the boundary by the Dirichlet boundary value problem in the bulk spacetime. This relation holds for operators of a general spin-tensor structure on generic manifolds with boundaries irrespective of their background geometry and conformal invariance, and it apparently underlies numerous O(N⁰) tests of the AdS/CFT correspondence, based on direct calculation of the bulk and boundary partition functions, Casimir energies, and conformal anomalies. The generalized holographic duality is discussed within the concept of the “double-trace” deformation of the boundary theory, which is responsible in the case of large-N CFT coupled to the tower of higher-spin gauge fields for the renormalization group flow between infrared and ultraviolet fixed points. Potential extension of this method beyond the one-loop order is also briefly discussed.
John, Samantha E.; Gurnani, Ashita S.; Bussell, Cara; Saurman, Jessica L.; Griffin, Jason W.; Gavett, Brandon E.
2016-01-01
Objective: Two main approaches to the interpretation of cognitive test performance have been utilized for the characterization of disease: evaluating shared variance across tests, as with measures of severity, and evaluating the unique variance across tests, as with pattern and error analysis. Both methods provide necessary information, but the unique contributions of each are rarely considered. This study compares the two approaches on their accuracy in differential diagnosis, while controlling for the influence of other relevant demographic and risk variables. Method: Archival data requested from the NACC provided clinical diagnostic groups that were paired to one another through a genetic matching procedure. For each diagnostic pairing, two separate logistic regression models predicting clinical diagnosis were performed and compared on their predictive ability. The shared variance approach was represented through the latent phenotype δ, which served as the lone predictor in one set of models. The unique variance approach was represented through raw score values for the 12 neuropsychological test variables comprising δ, which served as the set of predictors in the second group of models. Results: Examining the unique patterns of neuropsychological test performance across a battery of tests was the superior method of differentiating between competing diagnoses, and it accounted for 16-30% of the variance in diagnostic decision making. Conclusion: Implications for clinical practice are discussed, including test selection and interpretation. PMID:27797542
Genetic Variance for Body Size in a Natural Population of Drosophila Buzzatii
Ruiz, A.; Santos, M.; Barbadilla, A.; Quezada-Diaz, J. E.; Hasson, E.; Fontdevila, A.
1991-01-01
Previous work has shown thorax length to be under directional selection in the Drosophila buzzatii population of Carboneras. In order to predict the genetic consequences of natural selection, genetic variation for this trait was investigated in two ways. First, narrow-sense heritability was estimated in the laboratory F(2) generation of a sample of wild flies by means of the offspring-parent regression. A relatively high value, 0.59, was obtained. Because the phenotypic variance of wild flies was 7-9 times that of the flies raised in the laboratory, "natural" heritability may be estimated as one-seventh to one-ninth that value. Second, the contribution of the second and fourth chromosomes, which are polymorphic for paracentric inversions, to the genetic variance of thorax length was estimated in the field and in the laboratory. This was done with the assistance of a simple genetic model which shows that the variance among chromosome arrangements and the variance among karyotypes provide minimum estimates of the chromosome's contribution to the additive and genetic variances of the trait, respectively. In males raised under optimal conditions in the laboratory, the variance among second-chromosome karyotypes accounted for 11.43% of the total phenotypic variance and most of this variance was additive; by contrast, the contribution of the fourth chromosome was nonsignificant. The variance among second-chromosome karyotypes accounted for 1.56-1.78% of the total phenotypic variance in wild males and was nonsignificant in wild females. The variance among fourth-chromosome karyotypes accounted for 0.14-3.48% of the total phenotypic variance in wild flies. At both chromosomes, the proportion of additive variance was higher in mating flies than in nonmating flies. PMID:1916242
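The offspring-parent regression used in this abstract can be illustrated with a minimal sketch: when offspring means are regressed on midparent values, the slope directly estimates narrow-sense heritability h². The thorax-length numbers below are illustrative stand-ins, not the Carboneras measurements.

```python
# Minimal sketch of heritability estimation by offspring-midparent
# regression; the slope of this regression estimates h^2 directly.

def slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# illustrative thorax-length data (mm); not the study's measurements
midparent = [0.95, 1.00, 1.05, 1.10, 1.15, 1.20]
offspring = [0.97, 1.00, 1.03, 1.06, 1.09, 1.12]

h2 = slope(midparent, offspring)   # offspring-on-midparent slope ~= h^2
print(round(h2, 2))
```

With a single-parent (rather than midparent) regression, the heritability estimate would be twice the slope.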
Estimation of Model Error Variances During Data Assimilation
NASA Technical Reports Server (NTRS)
Dee, Dick
2003-01-01
Data assimilation is all about understanding the error characteristics of the data and models that are used in the assimilation process. Reliable error estimates are needed to implement observational quality control, bias correction of observations and model fields, and intelligent data selection. Meaningful covariance specifications are obviously required for the analysis as well, since the impact of any single observation strongly depends on the assumed structure of the background errors. Operational atmospheric data assimilation systems still rely primarily on climatological background error covariances. To obtain error estimates that reflect both the character of the flow and the current state of the observing system, it is necessary to solve three problems: (1) how to account for the short-term evolution of errors in the initial conditions; (2) how to estimate the additional component of error caused by model defects; and (3) how to compute the error reduction in the analysis due to observational information. Various approaches are now available that provide approximate solutions to the first and third of these problems. However, the useful accuracy of these solutions very much depends on the size and character of the model errors and the ability to account for them. Model errors represent the real-world forcing of the error evolution in a data assimilation system. Clearly, meaningful model error estimates and/or statistics must be based on information external to the model itself. The most obvious information source is observational, and since the volume of available geophysical data is growing rapidly, there is some hope that a purely statistical approach to model error estimation can be viable. This requires that the observation errors themselves are well understood and quantifiable. We will discuss some of these challenges and present a new sequential scheme for estimating model error variances from observations in the context of an atmospheric data assimilation system.
Local variance for multi-scale analysis in geomorphometry
Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas
2011-01-01
Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements. PMID:21779138
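The four-step LV workflow described above can be sketched directly: compute the mean 3×3-window standard deviation per scale level, then the rate of change between levels. The tiny grids below are illustrative stand-ins for up-scaled land-surface parameter layers, not real DEM derivatives.

```python
import statistics

# Sketch of the local-variance (LV) method: LV at each scale level is
# the mean standard deviation inside a 3x3 moving window; ROC-LV is the
# percentage change of LV between consecutive scale levels.

def local_variance(grid):
    rows, cols = len(grid), len(grid[0])
    sds = []
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            window = [grid[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            sds.append(statistics.stdev(window))
    return statistics.mean(sds)

def roc_lv(lv_levels):
    # rate of change of LV from one scale level to the next, in percent
    return [(b - a) / a * 100.0 for a, b in zip(lv_levels, lv_levels[1:])]

# illustrative stand-ins for two up-scaled slope-gradient layers
level1 = [[1, 2, 1], [2, 3, 2], [1, 2, 1]]
level2 = [[1, 4, 1], [4, 9, 4], [1, 4, 1]]
lvs = [local_variance(level1), local_variance(level2)]
print(roc_lv(lvs))  # % change of LV between the two levels
```

Peaks in the resulting ROC-LV sequence would mark the characteristic scale levels discussed in the abstract.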
Low complex subspace minimum variance beamformer for medical ultrasound imaging.
Deylami, Ali Mohades; Asl, Babak Mohammadzadeh
2016-03-01
Minimum variance (MV) beamforming enhances the resolution and contrast in medical ultrasound imaging at the expense of higher computational complexity with respect to the non-adaptive delay-and-sum beamformer. The major complexity arises from estimating the L×L array covariance matrix using spatial averaging, which is required for a more accurate estimate of the covariance matrix of correlated signals, and from inverting it, which is required for calculating the MV weight vector; these operations cost O(L²) and O(L³), respectively. Reducing the number of array elements decreases the computational complexity but degrades the imaging resolution. In this paper, we propose a subspace MV beamformer which preserves the advantages of the MV beamformer with lower complexity. The subspace MV neglects some rows of the array covariance matrix instead of reducing the array size. If we keep η rows of the array covariance matrix, which leads to a thin non-square matrix, the weight vector of the subspace beamformer can be obtained in the same way as for the MV beamformer, with complexity as low as O(η²L). Further calculation is saved because an η×L covariance matrix must be estimated instead of an L×L one. We simulated a wire-target phantom and a cyst phantom to evaluate the performance of the proposed beamformer. The results indicate that we can keep about 16 of 43 rows of the array covariance matrix, which reduces the order of complexity to 14% while the image resolution remains comparable to that of the standard MV beamformer. We also applied the proposed method to experimental RF data and showed that the subspace MV beamformer performs like the standard MV with lower computational complexity.
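For reference, the standard MV (Capon) weight vector the subspace method approximates is w = R⁻¹a / (aᴴR⁻¹a), with R the array covariance matrix and a the steering vector. The sketch below computes it on synthetic snapshots; the paper's subspace variant (keeping only η rows of R) is not reproduced here, and the diagonal loading is an assumption for numerical stability.

```python
import numpy as np

# Sketch of the standard minimum-variance weight computation:
# w = R^{-1} a / (a^H R^{-1} a). Synthetic data, not ultrasound RF.

rng = np.random.default_rng(0)
L = 8                                    # number of array elements
snapshots = rng.standard_normal((200, L))
R = snapshots.T @ snapshots / 200        # sample covariance matrix (L x L)
R += 1e-3 * np.eye(L)                    # diagonal loading (assumed)

a = np.ones(L)                           # steering vector (broadside)
Rinv_a = np.linalg.solve(R, a)           # avoids forming R^{-1} explicitly
w = Rinv_a / (a @ Rinv_a)                # MV weight vector

print(abs(w @ a - 1.0) < 1e-10)          # True: distortionless constraint
```

The distortionless constraint wᴴa = 1 holds by construction; the adaptive gain comes from R suppressing off-axis interference.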
Auto-configuration protocols in mobile ad hoc networks.
Villalba, Luis Javier García; Matesanz, Julián García; Orozco, Ana Lucila Sandoval; Díaz, José Duván Márquez
2011-01-01
The TCP/IP protocol allows the different nodes in a network to communicate by associating a different IP address to each node. In wired or wireless networks with infrastructure, we have a server or node acting as such which correctly assigns IP addresses, but in mobile ad hoc networks there is no such centralized entity capable of carrying out this function. Therefore, a protocol is needed to perform the network configuration automatically and in a dynamic way, which will use all nodes in the network (or part thereof) as if they were servers that manage IP addresses. This article reviews the major proposed auto-configuration protocols for mobile ad hoc networks, with particular emphasis on one of the most recent: D2HCP. This work also includes a comparison of auto-configuration protocols for mobile ad hoc networks by specifying the most relevant metrics, such as a guarantee of uniqueness, overhead, latency, dependency on the routing protocol and uniformity.
Realizing "value-added" metrology
NASA Astrophysics Data System (ADS)
Bunday, Benjamin; Lipscomb, Pete; Allgair, John; Patel, Dilip; Caldwell, Mark; Solecky, Eric; Archie, Chas; Morningstar, Jennifer; Rice, Bryan J.; Singh, Bhanwar; Cain, Jason; Emami, Iraj; Banke, Bill, Jr.; Herrera, Alfredo; Ukraintsev, Vladamir; Schlessinger, Jerry; Ritchison, Jeff
2007-03-01
The conventional premise that metrology is a "non-value-added necessary evil" is a misleading and dangerous assertion, which must be viewed as obsolete thinking. Many metrology applications are key enablers to traditionally labeled "value-added" processing steps in lithography and etch, such that they can be considered integral parts of the processes. Various key trends in modern, state-of-the-art processing such as optical proximity correction (OPC), design for manufacturability (DFM), and advanced process control (APC) are based, at their hearts, on the assumption of fine-tuned metrology, in terms of uncertainty and accuracy. These trends are vehicles where metrology thus has large opportunities to create value through the engineering of tight and targetable process distributions. Such distributions make possible predictability in speed-sorts and in other parameters, which results in high-end product. Additionally, significant reliance has also been placed on defect metrology to predict, improve, and reduce yield variability. The necessary quality metrology is strongly influenced by not only the choice of equipment, but also the quality application of these tools in a production environment. The ultimate value added by metrology is a result of quality tools run by a quality metrology team using quality practices. This paper will explore the relationships among present and future trends and challenges in metrology, including equipment, key applications, and metrology deployment in the manufacturing flow. Of key importance are metrology personnel, with their expertise, practices, and metrics in achieving and maintaining the required level of metrology performance, including where precision, matching, and accuracy fit into these considerations. The value of metrology will be demonstrated to have shifted to "key enabler of large revenues," debunking the out-of-date premise that metrology is "non-value-added." Examples used will be from critical dimension (CD
Fitzpatrick, A.Liam; Kaplan, Jared; /SLAC
2012-02-14
We show that suitably regulated multi-trace primary states in large N CFTs behave like 'in' and 'out' scattering states in the flat-space limit of AdS. Their transition matrix elements approach the exact scattering amplitudes for the bulk theory, providing a natural CFT definition of the flat space S-Matrix. We study corrections resulting from the AdS curvature and particle propagation far from the center of AdS, and show that AdS simply provides an IR regulator that disappears in the flat space limit.
Dependence effects in unique signal transmission
Cooper, J.A.
1988-04-01
"Unique Signals" are communicated from a source to a "strong link" safety device in a weapon by means of one or more digital communication channels. The probability that the expected unique signal pattern could be generated accidentally (e.g., due to an abnormal environment) would be an important measure. A probabilistic assessment of this likelihood is deceptive, because it depends on characteristics of the other traffic on the communication channel. One such characteristic that is frequently neglected in analysis is dependence. This report gives a mathematical model for dependence; cites some of the ways in which dependence can increase the likelihood of inadvertent unique signal pattern generation; and suggests that communicating each unique signal "event" at the highest level of protocol in the communication system minimizes dependence effects. 3 refs., 4 figs.
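The dependence effect the report describes can be illustrated numerically: for independent equiprobable bits the chance of accidentally reproducing a fixed n-bit pattern is 2⁻ⁿ, but positive correlation between successive bits can raise it substantially. The "sticky" Markov channel below is a hypothetical model chosen for illustration, not the report's actual model.

```python
# Illustration of how dependence inflates the probability of
# accidentally generating a fixed bit pattern. Hypothetical models.

def p_independent(pattern, p_one=0.5):
    # independent bits: probabilities simply multiply
    prob = 1.0
    for b in pattern:
        prob *= p_one if b == 1 else (1 - p_one)
    return prob

def p_markov(pattern, p_first=0.5, p_repeat=0.9):
    # first-order dependence: each bit tends to repeat the previous one
    prob = p_first
    for prev, cur in zip(pattern, pattern[1:]):
        prob *= p_repeat if cur == prev else (1 - p_repeat)
    return prob

unique_signal = [1, 1, 1, 1, 0, 0, 0, 0]
print(p_independent(unique_signal))  # 0.00390625 (= 2**-8)
print(p_markov(unique_signal))       # 0.5 * 0.9**6 * 0.1, about 7x larger
```

For this run-heavy pattern, dependence raises the accidental-generation probability by roughly a factor of seven, which is the kind of effect a purely independent analysis would miss.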
Parsa, Behnoosh; Zatsiorsky, Vladimir M; Latash, Mark L
2017-02-01
We address the nature of unintentional changes in performance in two papers. This second paper tested hypotheses related to stability of task-specific performance variables estimated using the framework of the uncontrolled manifold (UCM) hypothesis. Our first hypothesis was that selective stability of performance variables would be observed even when the magnitudes of those variables drifted unintentionally because of the lack of visual feedback. Our second hypothesis was that stability of a variable would depend on the number of explicit task constraints. Subjects performed four-finger isometric pressing tasks that required the accurate production of a combination of total moment and total force with natural or modified finger involvement under full visual feedback, which was removed later for some or all of the salient variables. We used inter-trial analysis of variance and drifts in the space of finger forces within the UCM and within the orthogonal to the UCM space. The two variance components were used to estimate a synergy index stabilizing the force/moment combination, while the two drift components were used to estimate motor equivalent and non-motor equivalent force changes, respectively. Without visual feedback, both force and moment drifted toward lower absolute magnitudes. The non-motor equivalent component of motion in the finger force space was larger than the motor equivalent component for variables that stopped receiving visual feedback. In contrast, variables that continued to receive visual feedback showed larger motor equivalent component, compared to non-motor equivalent component, over the same time interval. These data falsified the first hypothesis; indeed, selective stabilization of a variable over the duration of a trial allows expecting comparably large motor equivalent components both with and without visual feedback. Adding a new constraint (presented as a target magnitude of middle finger force) resulted in a drop in the synergy index
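The UCM-based variance decomposition used in this study can be sketched for the simplest case, a total-force task F = f1+f2+f3+f4 with a Jacobian of ones: inter-trial variance is split into a component within the UCM (the null space of the Jacobian, which leaves F unchanged) and a component orthogonal to it. The finger-force data below are synthetic, not the study's.

```python
import numpy as np

# Sketch of the uncontrolled-manifold (UCM) variance decomposition for a
# four-finger total-force task. Synthetic inter-trial data.

rng = np.random.default_rng(2)
J = np.ones((1, 4))                       # Jacobian of F = f1+f2+f3+f4
e_ort = (J / np.linalg.norm(J)).T         # 4 x 1, direction changing F
basis_ucm = np.linalg.svd(J)[2][1:].T     # 4 x 3, spans null(J): the UCM

# synthetic trials: large "good" variability inside the UCM, small
# variability orthogonal to it (total force well stabilized)
coords_ucm = rng.standard_normal((100, 3))
coords_ort = 0.3 * rng.standard_normal((100, 1))
forces = coords_ucm @ basis_ucm.T + coords_ort @ e_ort.T

dev = forces - forces.mean(axis=0)
v_ucm = np.sum(np.var(dev @ basis_ucm, axis=0)) / 3   # per UCM dimension
v_ort = np.sum(np.var(dev @ e_ort, axis=0)) / 1
v_tot = np.sum(np.var(dev, axis=0)) / 4
dv = (v_ucm - v_ort) / v_tot                          # synergy index

print(dv > 0)  # True: force-stabilizing synergy in this synthetic data
```

A positive synergy index means variance is preferentially channeled into directions that leave the task variable unchanged, which is the signature of selective stability discussed above.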
ADS Labs: Supporting Information Discovery in Science Education
NASA Astrophysics Data System (ADS)
Henneken, E. A.
2013-04-01
The SAO/NASA Astrophysics Data System (ADS) is an open access digital library portal for researchers in astronomy and physics, operated by the Smithsonian Astrophysical Observatory (SAO) under a NASA grant, successfully serving the professional science community for two decades. Currently there are about 55,000 frequent users (100+ queries per year), and up to 10 million infrequent users per year. Access by the general public now accounts for about half of all ADS use, demonstrating the vast reach of the content in our databases. The visibility and use of content in the ADS can be measured by the fact that there are over 17,000 links from Wikipedia pages to ADS content, a figure comparable to the number of links that Wikipedia has to OCLC's WorldCat catalog. The ADS, through its holdings and innovative techniques available in ADS Labs, offers an environment for information discovery that is unlike any other service currently available to the astrophysics community. Literature discovery and review are important components of science education, aiding the process of preparing for a class, project, or presentation. The ADS has been recognized as a rich source of information for the science education community in astronomy, thanks to its collaborations within the astronomy community, publishers and projects like ComPADRE. One element that makes the ADS uniquely relevant for the science education community is the availability of powerful tools to explore aspects of the astronomy literature as well as the relationship between topics, people, observations and scientific papers. The other element is the extensive repository of scanned literature, a significant fraction of which consists of historical literature.
Understanding the Unique Equatorial Density Irregularities
2015-04-01
monitoring devices. In addition, the Low Earth Orbiting (LEO) satellite ion density observations show unique features for the African sector [Hei et al. 2005]... installed in Africa [Amory-Mazaudier et al. 2009] since 2007. Alongside this activity, universities in Africa (e.g. Bahir Dar University, Ethiopia)... African sector, show unique equatorial ionospheric structure [Hei et al. 2005]. For example, this region hosts equatorial plasma bubbles, which produce
Senior, Alistair M; Gosby, Alison K; Lu, Jing; Simpson, Stephen J; Raubenheimer, David
2016-01-01
Meta-analysis, which drives evidence-based practice, typically focuses on the average response of subjects to a treatment. For instance in nutritional research the difference in average weight of participants on different diets is typically used to draw conclusions about the relative efficacy of interventions. As a result of their focus on the mean, meta-analyses largely overlook the effects of treatments on inter-subject variability. Recent tools from the study of biological evolution, where inter-individual variability is one of the key ingredients for evolution by natural selection, now allow us to study inter-subject variability using established meta-analytic models. Here we use meta-analysis to study how low carbohydrate (LC) ad libitum diets and calorie restricted diets affect variance in mass. We find that LC ad libitum diets may have a more variable outcome than diets that prescribe a reduced calorie intake. Our results suggest that whilst LC diets are effective in a large proportion of the population, for a subset of individuals, calorie restricted diets may be more effective. There is evidence that LC ad libitum diets rely on appetite suppression to drive weight loss. Extending this hypothesis, we suggest that between-individual variability in protein appetite may drive the trends that we report. A priori identification of an individual's target intake for protein may help define the most effective dietary intervention to prescribe for weight loss.
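The variability-focused meta-analytic approach described above typically works with an effect size on the variance scale. One standard choice, borrowed from evolutionary biology, is the log variability ratio lnVR with a small-sample bias correction; whether this is the exact statistic the authors used is an assumption, and the diet-arm numbers below are illustrative.

```python
import math

# Sketch of a variability effect size for meta-analysis: the log
# variance ratio (lnVR) comparing spread between two treatment arms.

def ln_vr(sd_t, n_t, sd_c, n_c):
    # point estimate with small-sample bias correction
    return (math.log(sd_t / sd_c)
            + 1.0 / (2 * (n_t - 1)) - 1.0 / (2 * (n_c - 1)))

def ln_vr_variance(n_t, n_c):
    # sampling variance, used for inverse-variance meta-analytic weights
    return 1.0 / (2 * (n_t - 1)) + 1.0 / (2 * (n_c - 1))

# hypothetical arms: SD of weight change on LC ad libitum vs calorie
# restricted diets (illustrative values only)
effect = ln_vr(sd_t=5.2, n_t=41, sd_c=3.9, n_c=41)
weight = 1.0 / ln_vr_variance(41, 41)
print(effect > 0)  # True: more variable outcome in the first arm
```

A positive pooled lnVR across studies would correspond to the abstract's finding that LC ad libitum diets produce more variable outcomes than calorie-restricted diets.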
A neural signature of the unique hues
Forder, Lewis; Bosten, Jenny; He, Xun; Franklin, Anna
2017-01-01
Since at least the 17th century there has been the idea that there are four simple and perceptually pure “unique” hues: red, yellow, green, and blue, and that all other hues are perceived as mixtures of these four hues. However, sustained scientific investigation has not yet provided solid evidence for a neural representation that separates the unique hues from other colors. We measured event-related potentials elicited from unique hues and the ‘intermediate’ hues in between them. We find a neural signature of the unique hues 230 ms after stimulus onset at a post-perceptual stage of visual processing. Specifically, the posterior P2 component over the parieto-occipital lobe peaked significantly earlier for the unique than for the intermediate hues (Z = −2.9, p = 0.004). Having identified a neural marker for unique hues, fundamental questions about the contribution of neural hardwiring, language and environment to the unique hues can now be addressed. PMID:28186142
ERIC Educational Resources Information Center
Heene, Moritz; Hilbert, Sven; Draxler, Clemens; Ziegler, Matthias; Buhner, Markus
2011-01-01
Fit indices are widely used in order to test the model fit for structural equation models. In a highly influential study, Hu and Bentler (1999) showed that certain cutoff values for these indices could be derived, which, over time, has led to the reification of these suggested thresholds as "golden rules" for establishing the fit or other aspects…
1991-03-01
Nonlinear Controls and Regression-Adjusted Estimators for Variance Reduction in Computer Simulation. Dissertation by Richard L. Ressler, March 1991. Dissertation Advisor: Peter A.W. Lewis. This dissertation develops new techniques for variance reduction in computer simulation. It demonstrates that
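A regression-adjusted (control-variate) estimator of the kind this dissertation studies can be sketched briefly: adjust the simulation output Y using a control C with known mean, where the adjustment coefficient is the regression slope b = Cov(Y,C)/Var(C). The toy integrand below (estimating E[e^U] for U uniform on [0,1]) is an assumption for illustration, not one of the dissertation's examples.

```python
import math
import random
import statistics

# Sketch of a regression-adjusted (control-variate) estimator.
# Target: E[exp(U)] = e - 1 for U ~ Uniform(0, 1); control C = U with
# known mean 0.5.

random.seed(1)
n = 10_000
us = [random.random() for _ in range(n)]
ys = [math.exp(u) for u in us]          # raw simulation outputs
cs = us                                  # control variate, E[C] = 0.5

ybar, cbar = statistics.mean(ys), statistics.mean(cs)
b = (sum((y - ybar) * (c - cbar) for y, c in zip(ys, cs))
     / sum((c - cbar) ** 2 for c in cs))
adjusted = ybar - b * (cbar - 0.5)       # regression-adjusted estimate

# variance of the adjusted estimator is driven by the residuals Y - bC
resid_var = statistics.variance([y - b * c for y, c in zip(ys, cs)])
print(resid_var < statistics.variance(ys))  # True: variance is reduced
```

Because U and e^U are highly correlated, the residual variance is far smaller than the raw output variance, which is exactly the mechanism behind regression-adjusted variance reduction.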
Forsberg, Simon K. G.; Andreatta, Matthew E.; Huang, Xin-Yuan; Danku, John; Salt, David E.; Carlborg, Örjan
2015-01-01
Genome-wide association (GWA) analyses have generally been used to detect individual loci contributing to the phenotypic diversity in a population by the effects of these loci on the trait mean. More rarely, loci have also been detected based on variance differences between genotypes. Several hypotheses have been proposed to explain the possible genetic mechanisms leading to such variance signals. However, little is known about what causes these signals, or whether this genetic variance-heterogeneity reflects mechanisms of importance in natural populations. Previously, we identified a variance-heterogeneity GWA (vGWA) signal for leaf molybdenum concentrations in Arabidopsis thaliana. Here, fine-mapping of this association reveals that the vGWA emerges from the effects of three independent genetic polymorphisms that all are in strong LD with the markers displaying the genetic variance-heterogeneity. By revealing the genetic architecture underlying this vGWA signal, we uncovered the molecular source of a significant amount of hidden additive genetic variation or “missing heritability”. Two of the three polymorphisms underlying the genetic variance-heterogeneity are promoter variants for Molybdate transporter 1 (MOT1), and the third a variant located ~25 kb downstream of this gene. A fourth independent association was also detected ~600 kb upstream of MOT1. Use of a T-DNA knockout allele highlights Copper Transporter 6; COPT6 (AT2G26975) as a strong candidate gene for this association. Our results show that an extended LD across a complex locus including multiple functional alleles can lead to a variance-heterogeneity between genotypes in natural populations. Further, they provide novel insights into the genetic regulation of ion homeostasis in A. thaliana, and empirically confirm that variance-heterogeneity based GWA methods are a valuable tool to detect novel associations of biological importance in natural populations. PMID:26599497
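A variance-heterogeneity (vGWA-style) signal is detected by testing whether phenotype spread, not just the mean, differs between genotypes. One common choice is the Brown-Forsythe statistic (an ANOVA on absolute deviations from group medians); whether this matches the study's exact method is an assumption, and the phenotype values below are toy data.

```python
import statistics

# Sketch of a variance-heterogeneity test across marker genotypes:
# Brown-Forsythe statistic (ANOVA on |x - group median|).

def brown_forsythe(groups):
    z = [[abs(x - statistics.median(g)) for x in g] for g in groups]
    all_z = [v for g in z for v in g]
    n, k = len(all_z), len(z)
    grand = statistics.mean(all_z)
    between = sum(len(g) * (statistics.mean(g) - grand) ** 2
                  for g in z) / (k - 1)
    within = sum((v - statistics.mean(g)) ** 2
                 for g in z for v in g) / (n - k)
    return between / within  # compare to an F(k-1, n-k) distribution

# toy phenotype (e.g. leaf element concentration) by marker genotype
aa = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0]   # low-variance genotype class
bb = [8.0, 12.5, 9.0, 11.8, 7.5, 13.0]    # high-variance genotype class
f_stat = brown_forsythe([aa, bb])
print(f_stat > 4.96)  # True: exceeds the 5% critical value of F(1, 10)
```

A genome scan of such statistics, marker by marker, is what produces the vGWA signals that the fine-mapping in this study traced back to multiple functional alleles in strong LD.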
Charged rotating AdS black holes with Chern-Simons coupling
NASA Astrophysics Data System (ADS)
Mir, Mozhgan; Mann, Robert B.
2017-01-01
We obtain a perturbative solution for rotating charged black holes in five-dimensional Einstein-Maxwell-Chern-Simons theory with a negative cosmological constant. We start from a small undeformed Kerr-AdS solution and use the electric charge as a perturbative parameter to build up black holes with equal-magnitude angular momenta up to fourth order. These black hole solutions are described by three parameters: the charge, the horizon radius, and the horizon angular velocity. We determine the physical quantities of these black holes and study their dependence on the black hole parameters and on an arbitrary Chern-Simons coefficient. In particular, for values of the CS coupling constant beyond its supergravity value, counterrotating black holes arise due to a rotational instability. Rotating solutions also appear that have vanishing angular momenta and are not uniquely characterized by their global charges.
Stability of charged global AdS4 spacetimes
NASA Astrophysics Data System (ADS)
Arias, Raúl; Mas, Javier; Serantes, Alexandre
2016-09-01
We study linear and nonlinear stability of asymptotically AdS4 solutions in Einstein-Maxwell-scalar theory. After summarizing the set of static solutions, we first examine thermodynamic stability in the grand canonical ensemble and the phase transitions that occur among them. In the second part of the paper we focus on nonlinear stability in the microcanonical ensemble by evolving radial perturbations numerically. We find hints of an instability corner for vanishingly small perturbations of the same kind as the ones present in the uncharged case. Collapses are avoided, instead, if the charge and mass of the perturbations come close to the line of solitons. Finally we examine the soliton solutions. The linear spectrum of normal modes is not resonant, and instability turns on at extrema of the mass curve. Linear stability extends to nonlinear stability up to some threshold for the amplitude of the perturbation. Beyond that, the soliton is destroyed and collapses to a hairy black hole. The relative width of this stability band scales down with the charge Q, and does not survive the blow-up limit to a planar geometry.
AdS4/CFT3 squashed, stretched and warped
NASA Astrophysics Data System (ADS)
Klebanov, Igor R.; Klose, Thomas; Murugan, Arvind
2009-03-01
We use group theoretic methods to calculate the spectrum of short multiplets around the extremum of the 𝒩 = 8 gauged supergravity potential which possesses 𝒩 = 2 supersymmetry and SU(3) global symmetry. Upon uplifting to M-theory, it describes a warped product of AdS4 and a certain squashed and stretched 7-sphere. We find quantum numbers in agreement with those of the gauge invariant operators in the 𝒩 = 2 superconformal Chern-Simons theory recently proposed to be the dual of this M-theory background. This theory is obtained from the U(N) × U(N) theory through deforming the superpotential by a term quadratic in one of the superfields. To construct this model explicitly, one needs to employ monopole operators whose complete understanding is still lacking. However, for the U(2) × U(2) gauge theory we make a proposal for the form of the monopole operators which has a number of desired properties. In particular, this proposal implies enhanced symmetry of the U(2) × U(2) ABJM theory for k = 1,2; it makes its similarity to and subtle difference from the BLG theory quite explicit.
Wave variance partitioning in the trough of a barred beach
NASA Astrophysics Data System (ADS)
Howd, Peter A.; Oltman-Shay, Joan; Holman, Robert A.
1991-07-01
The wave-induced velocity field in the nearshore is composed of contributions from incident wind waves (f > 0.05 Hz), surface infragravity waves (f < 0.05 Hz, |κ| < σ²/gβ), and shear waves (f < 0.05 Hz, |κ| > σ²/gβ), where f is the frequency, σ = 2πf, κ is the radial alongshore wavenumber (2π/L, L being the alongshore wavelength), β is the beach slope, and g is the acceleration due to gravity. Using an alongshore array of current meters located in the trough of a nearshore bar (mean depth ≈ 1.5 m), we investigate the bulk statistical behaviors of these wave bands over a wide range of incident wave conditions. The behavior of each contributing wave type is parameterized in terms of commonly measured or easily predicted variables describing the beach profile, wind waves, and current field. Over the 10-day period, the mean contributions (to the total variance) of the incident, infragravity, and shear wave bands were 71.5%, 14.3%, and 13.6% for the alongshore component of flow (mean rms oscillations of 44, 20, and 19 cm s⁻¹, respectively), and 81.9%, 10.9%, and 6.6% for the cross-shore component (mean rms oscillations of 92, 32, and 25 cm s⁻¹, respectively). However, the values varied considerably. The contribution to the alongshore (cross-shore) component of flow ranged from 44.8-88.4% (58.5-95.8%) for the incident band, to 6.2-26.6% (2.5-32.4%) for the infragravity band, and 3.4-33.1% (0.6-14.3%) for the shear wave band. Incident wave oscillations were limited by depth-dependent saturation over the adjacent bar crest and varied only with the tide. The infragravity wave rms oscillations on this barred beach are best parameterized by the offshore wave height, consistent with previous studies on planar beaches. Comparison with data from four other beaches of widely differing geometries shows the shoreline infragravity amplitude to be a near-constant ratio of the offshore wave height. The magnitude of the ratio is found to be dependent on the Iribarren
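The band definitions above amount to a simple decision rule: a frequency cut at 0.05 Hz separates incident waves, and the edge of the gravity-wave dispersion region, σ²/gβ, separates infragravity from shear waves at lower frequencies. A minimal sketch of that classification (function and variable names are illustrative assumptions, not from the paper):

```python
import math

G = 9.81  # gravitational acceleration, m s^-2 (assumed constant)

def classify_wave_band(f, kappa, beta):
    """Classify a nearshore velocity spectral estimate into the three
    bands defined above: incident, infragravity, or shear.

    f     -- frequency in Hz
    kappa -- radial alongshore wavenumber 2*pi/L, in rad/m
    beta  -- beach slope (dimensionless)
    """
    if f > 0.05:                      # incident wind-wave band
        return "incident"
    sigma = 2.0 * math.pi * f         # radian frequency
    edge = sigma**2 / (G * beta)      # edge of the gravity-wave region
    return "infragravity" if abs(kappa) < edge else "shear"
```

Applied to a frequency-wavenumber spectrum, summing the spectral estimates falling in each band gives the variance partition reported in the abstract.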
Spatially variant regularization of lateral displacement measurement using variance.
Sumi, Chikayoshi; Itoh, Toshiki
2009-05-01
The purpose of this work is to confirm the effectiveness of our proposed spatially variant, displacement-component-dependent regularization for our previously developed ultrasonic two-dimensional (2D) displacement vector measurement methods, i.e., the 2D cross-spectrum phase gradient method (CSPGM), 2D autocorrelation method (AM), and 2D Doppler method (DM). Generally, the measurement accuracy of lateral displacement varies spatially and is lower than that of axial displacement, which is sufficiently accurate. This inaccurate measurement causes instability in a 2D shear modulus reconstruction. Thus, spatially variant regularization of the lateral displacement using the lateral displacement variance should be more effective in obtaining an accurate lateral strain measurement and a stable shear modulus reconstruction than conventional spatially uniform regularization. The effectiveness is verified through agar phantom experiments. The agar phantom [60 mm (height) × 100 mm (lateral width) × 40 mm (elevational width)], which has, at a depth of 10 mm, a circular cylindrical inclusion (diameter = 10 mm) of a higher shear modulus (2.95 vs. 1.43 × 10⁶ N/m², i.e., a relative shear modulus of 2.06), is compressed in the axial direction from the upper surface of the phantom using a commercial linear-array transducer with a nominal frequency of 7.5 MHz. Because the contrast-to-noise ratio (CNR) expresses the detectability of the inhomogeneous region in the lateral strain image and has almost the same sense as the signal-to-noise ratio (SNR) for strain measurement, the obtained results show that the proposed spatially variant lateral displacement regularization yields a more accurate lateral strain measurement as well as a higher detectability in the lateral strain image (e.g., CNRs and SNRs for 2D CSPGM: 2.36 vs. 2.27 and 1.74 vs. 1.71, respectively). Furthermore, the spatially variant lateral displacement regularization yields a more stable and more accurate 2D shear modulus
Numerical Inversion with Full Estimation of Variance-Covariance Matrix
NASA Astrophysics Data System (ADS)
Saltogianni, Vasso; Stiros, Stathis
2016-04-01
-point, stochastic optimal solutions are computed as the center of gravity of these sets. A full variance-covariance matrix (VCM) of each solution can be directly computed as the second statistical moment. The overall method and the software have been tested with synthetic data (accuracy-oriented approach) in the modeling of magma chambers in the Santorini volcano and the modeling of double-fault earthquakes, i.e., in inversion problems with up to 18 unknowns.
Phonological processing is uniquely associated with neuro-metabolic concentration.
Bruno, Jennifer Lynn; Lu, Zhong-Lin; Manis, Franklin R
2013-02-15
Reading is a complex process involving recruitment and coordination of a distributed network of brain regions. The present study sought to establish a methodologically sound evidentiary base relating specific reading and phonological skills to neuro-metabolic concentration. Single-voxel proton magnetic resonance spectroscopy was performed to measure metabolite concentration in a left-hemisphere region around the angular gyrus for 31 young adults with a range of reading and phonological abilities. Correlation data demonstrated a significant negative association between phonological decoding and normalized choline concentration, as well as a trend toward a significant negative association between sight word reading and normalized choline concentration, indicating that lower scores on these measures are associated with higher concentrations of choline. Regression analyses indicated that choline concentration accounted for a unique proportion of variance in the phonological decoding measure after accounting for age, cognitive ability, and sight word reading skill. This pattern of results suggests some specificity for the negative relationship between choline concentration and phonological decoding. To our knowledge, this is the first study to provide evidence that choline concentration in the angular region may be related to phonological skills independently of other reading skills, general cognitive ability, and age. These results may have important implications for the study and treatment of reading disability, a disorder that has been related to deficits in phonological decoding and abnormalities in the angular gyrus.
Negative ulnar variance is not a risk factor for Kienböck's disease.
D'Hoore, K; De Smet, L; Verellen, K; Vral, J; Fabry, G
1994-03-01
Ulnar variance was measured in standardized conditions in 125 normal wrists and in 52 patients with Kienböck's disease. No significant difference in ulnar variance between a sex/age-matched control group and a group of patients affected with Kienböck's disease was found. A positive correlation was found between age and ulnar variance. No significant difference was found between men and women. Based on these results, negative ulnar variance does not seem to be an important factor in the etiology of Kienböck's disease.
Allan, David W; Levine, Judah
2016-04-01
Over the past 50 years, variances have been developed for characterizing the instabilities of precision clocks and oscillators. These instabilities are often modeled as nonstationary processes, and the variances have been shown to be well-behaved and to be unbiased, efficient descriptors of these types of processes. This paper presents a historical overview of the development of these variances. The time-domain and frequency-domain formulations are presented and their development is described. The strengths and weaknesses of these characterization metrics are discussed. These variances are also shown to be useful in other applications, such as in telecommunication.
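The best known of these time-domain measures is the two-sample (Allan) variance, σ_y²(τ) = ½⟨(ȳ₍ₖ₊₁₎ − ȳₖ)²⟩, computed from adjacent τ-averages of fractional-frequency data. A minimal sketch of the non-overlapping estimator (function name and interface are illustrative choices, not from the paper):

```python
import numpy as np

def allan_variance(y, m=1):
    """Non-overlapping Allan variance of fractional-frequency data y.

    y -- sequence of fractional-frequency samples at spacing tau0
    m -- averaging factor, so the result corresponds to tau = m * tau0
    """
    y = np.asarray(y, dtype=float)
    n = len(y) // m
    # adjacent averages over intervals of length tau = m * tau0
    ybar = y[: n * m].reshape(n, m).mean(axis=1)
    d = np.diff(ybar)                 # first differences of the averages
    return 0.5 * np.mean(d**2)        # two-sample (Allan) variance
```

Unlike the classical variance, this converges for the nonstationary noise processes (e.g., random-walk frequency noise) mentioned above, which is why it became the standard descriptor for clocks and oscillators.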
42 CFR 456.524 - Notification of Administrator's action and duration of variance.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan:...
42 CFR 456.524 - Notification of Administrator's action and duration of variance.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan:...
Doppler variance imaging for three-dimensional retina and choroid angiography
NASA Astrophysics Data System (ADS)
Yu, Lingfeng; Chen, Zhongping
2010-01-01
We demonstrate the use of Doppler variance (standard deviation) imaging for 3-D in vivo angiography in the human eye. In addition to the regular optical Doppler tomography velocity and structural images, we use the variance of blood flow velocity to map the retina and choroid vessels. Variance imaging is subject to bulk motion artifacts as in phase-resolved Doppler imaging, and a histogram-based method is proposed for bulk-motion correction in variance imaging. Experiments were performed to demonstrate the effectiveness of the proposed method for 3-D vasculature imaging of human retina and choroid.
Naragon-Gainey, Kristin; Gallagher, Matthew W.; Brown, Timothy A.
2013-01-01
A large body of research has found robust associations between dimensions of temperament (e.g., neuroticism, extraversion) and the mood and anxiety disorders. However, mood-state distortion (i.e., the tendency for current mood state to bias ratings of temperament) likely confounds these associations, rendering their interpretation and validity unclear. This issue is of particular relevance to clinical populations who experience elevated levels of general distress. The current study used the “trait-state-occasion” latent variable model (Cole, Martin, & Steiger, 2005) to separate the stable components of temperament from transient, situational influences such as current mood state. We examined the predictive power of the time-invariant components of temperament on the course of depression and social phobia in a large, treatment-seeking sample with mood and/or anxiety disorders (N = 826). Participants were assessed three times over the course of one year, using interview and self-report measures; most participants received treatment during this time. Results indicated that both neuroticism/behavioral inhibition (N/BI) and behavioral activation/positive affect (BA/P) consisted largely of stable, time-invariant variance (57% to 78% of total variance). Furthermore, the time-invariant components of N/BI and BA/P were uniquely and incrementally predictive of change in depression and social phobia, adjusting for initial symptom levels. These results suggest that the removal of state variance bolsters the effect of temperament on psychopathology among clinically distressed individuals. Implications for temperament-psychopathology models, psychopathology assessment, and the stability of traits are discussed. PMID:24016004
MRI-PET image fusion based on NSCT transform using local energy and local variance fusion rules.
Amini, Nasrin; Fatemizadeh, E; Behnam, Hamid
2014-05-01
Image fusion integrates information from multiple images into a single image. Medical images are divided by their nature into structural (such as CT and MRI) and functional (such as SPECT and PET). This article fuses MRI and PET images, the purpose being to add structural information from MRI to the functional information of PET images. The images are decomposed with the Nonsubsampled Contourlet Transform (NSCT) and then fused by applying fusion rules. The coefficients of the low-frequency band are combined by a maximal energy rule, and the coefficients of the high-frequency bands are combined by a maximal variance rule. Finally, visual and quantitative criteria are used to evaluate the fusion result. For the visual evaluation the opinion of two radiologists was used, and for the quantitative evaluation the proposed fusion method was compared with six existing methods; the criteria used were entropy, mutual information, discrepancy, and overall performance.
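The maximal-variance rule for the high-frequency bands can be sketched as a per-coefficient selection: each fused coefficient is taken from whichever source sub-band has the larger local variance. The sketch below is an illustration of that rule only (window size, names, and the naive windowed loop are assumptions, not the authors' implementation; the low-frequency maximal-energy rule is analogous, with local squared-coefficient sums in place of the variance):

```python
import numpy as np

def local_variance(x, win=3):
    """Per-pixel variance over a win x win neighborhood (edge-padded)."""
    p = win // 2
    xp = np.pad(x, p, mode="edge")
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + win, j:j + win].var()
    return out

def fuse_high_band(a, b, win=3):
    """Maximal-variance rule: take each coefficient from the sub-band
    (a or b) whose local variance is larger."""
    mask = local_variance(a, win) >= local_variance(b, win)
    return np.where(mask, a, b)
```

In the full pipeline this selection is applied to each NSCT high-frequency sub-band, after which the inverse transform reconstructs the fused image.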
Doppler Lidar Vertical Velocity Statistics Value-Added Product
Newsom, R. K.; Sivaraman, C.; Shippert, T. R.; Riihimaki, L. D.
2015-07-01
Accurate height-resolved measurements of higher-order statistical moments of vertical velocity fluctuations are crucial for improved understanding of turbulent mixing and diffusion, convective initiation, and cloud life cycles. The Atmospheric Radiation Measurement (ARM) Climate Research Facility operates coherent Doppler lidar systems at several sites around the globe. These instruments provide measurements of clear-air vertical velocity profiles in the lower troposphere with a nominal temporal resolution of 1 sec and height resolution of 30 m. The purpose of the Doppler lidar vertical velocity statistics (DLWSTATS) value-added product (VAP) is to produce height- and time-resolved estimates of vertical velocity variance, skewness, and kurtosis from these raw measurements. The VAP also produces estimates of cloud properties, including cloud-base height (CBH), cloud frequency, cloud-base vertical velocity, and cloud-base updraft fraction.
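The variance, skewness, and kurtosis profiles the VAP produces are the standard central moments of the vertical-velocity fluctuations within each height/time bin; a minimal sketch of that computation for a single bin (names are illustrative, not the DLWSTATS code):

```python
import numpy as np

def vertical_velocity_moments(w):
    """Higher-order statistics of a vertical-velocity sample w
    (one height/time bin): variance, skewness, excess kurtosis."""
    w = np.asarray(w, dtype=float)
    wp = w - w.mean()                        # fluctuations about the mean
    var = np.mean(wp**2)                     # second central moment
    skew = np.mean(wp**3) / var**1.5         # normalized third moment
    kurt = np.mean(wp**4) / var**2 - 3.0     # excess kurtosis
    return var, skew, kurt
```

At 1-s sampling and 30-m range gates, each profile bin typically aggregates many such samples, so these moment estimates are reasonably stable in clear air.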
The inside outs of AdS3/CFT2: exact AdS wormholes with entangled CFT duals
NASA Astrophysics Data System (ADS)
Mandal, Gautam; Sinha, Ritam; Sorokhaibam, Nilakash
2015-01-01
We present the complete family of solutions of 3D gravity (Λ < 0) with two asymptotically AdS exterior regions. The solutions are constructed from data at the two boundaries, which correspond to two independent and arbitrary stress tensors T_R and T_L. The two exteriors are smoothly joined on to an interior region through a regular horizon. We find CFT duals of these geometries which are entangled states of two CFTs. We compute correlators between general operators at the two boundaries and find perfect agreement between CFT and bulk calculations. We calculate and match the CFT entanglement entropy (EE) with the holographic EE which involves geodesics passing through the wormhole. We also compute a holographic, non-equilibrium entropy for the CFT using properties of the regular horizon. The construction of the bulk solutions here uses an exact version of Brown-Henneaux type diffeomorphisms which are asymptotically nontrivial and transform the CFT states by two independent unitary operators on the two sides. Our solutions provide an infinite family of explicit examples of the ER=EPR relation of Maldacena and Susskind [1].
Unique sugar metabolic pathways of bifidobacteria.
Fushinobu, Shinya
2010-01-01
Bifidobacteria have many beneficial effects for human health. The gastrointestinal tract, where natural colonization by bifidobacteria occurs, is an environment poor in nutrition and oxygen. Therefore, bifidobacteria have many unique glycosidases, transporters, and metabolic enzymes for sugar fermentation to utilize diverse carbohydrates that are not absorbed by their human and animal hosts. They have a unique, effective central fermentative pathway called the bifid shunt. Recently, a novel metabolic pathway that utilizes both human milk oligosaccharides and host glycoconjugates was found. The galacto-N-biose/lacto-N-biose I metabolic pathway plays a key role in colonization of the infant gastrointestinal tract. These pathways involve many unique enzymes and proteins. This review focuses on their molecular mechanisms, as revealed by biochemical and crystallographic studies.
Lancastle, Deborah; Boivin, Jacky
2005-03-01
The aim of this study was to examine the unique and shared predictive power of psychological variables on reproductive physical health. Three months before fertility treatment, 97 women completed measures of dispositional optimism, trait anxiety, and coping. Information about biological response to treatment (e.g., estradiol level) was collected from medical charts after treatment. Structural equation modeling showed that measured psychological variables were all significant indicators of a single latent construct and that this construct was a better predictor of biological response to treatment than was any individual predictor. This research contributes to evidence suggesting that the health benefits of dispositional optimism are due to its shared variance with neuroticism.
Unique forbidden beta decays and neutrino mass
Dvornický, Rastislav; Šimkovic, Fedor
2015-10-28
The measurement of the electron energy spectrum in single β decays close to the endpoint provides a direct determination of the neutrino masses. The most sensitive experiments use β decays with low Q value, e.g. KATRIN (tritium) and MARE (rhenium). We present the theoretical spectral shape of electrons emitted in the first, second, and fourth unique forbidden β decays. Our findings show that the Kurie functions for these unique forbidden β transitions are linear in the limit of massless neutrinos like the Kurie function of the allowed β decay of tritium.
Transcriptomics exposes the uniqueness of parasitic plants.
Ichihashi, Yasunori; Mutuku, J Musembi; Yoshida, Satoko; Shirasu, Ken
2015-07-01
Parasitic plants have the ability to obtain nutrients directly from other plants, and several species are serious biological threats to agriculture, parasitizing crops of high economic importance. The uniqueness of parasitic plants is characterized by the presence of a multicellular organ called a haustorium, which facilitates plant-plant interactions, and by the shutdown or reduction of their own photosynthesis. Current technical advances in next-generation sequencing and bioinformatics have allowed us to dissect the molecular mechanisms behind the uniqueness of parasitic plants at the genome-wide level. In this review, we summarize recent key findings, mainly in transcriptomics, that give us insights into the future direction of parasitic plant research.
On uniqueness for frictional contact rate problems
NASA Astrophysics Data System (ADS)
Radi, E.; Bigoni, D.; Tralli, A.
1999-02-01
A linear elastic solid having part of the boundary in unilateral frictional contact with a stiffer constraint is considered. Bifurcations of the quasistatic velocity problem are analyzed, making use of methods developed for elastoplasticity. An exclusion principle for bifurcation is proposed which is similar, in essence, to the well-known exclusion principle given by Hill, 1958. Sufficient conditions for uniqueness are given for a broad class of contact constitutive equations. The uniqueness criteria are based on the introduction of linear comparison interfaces, defined both where the contact rate constitutive equations are piece-wise incrementally linear and where these are thoroughly nonlinear. Structural examples are proposed which give evidence of the applicability of the exclusion criteria.
Cahyadi, Muhammad; Park, Hee-Bok; Seo, Dong-Won; Jin, Shil; Choi, Nuri; Heo, Kang-Nyeong; Kang, Bo-Seok; Jo, Cheorun; Lee, Jun-Heon
2016-01-01
A quantitative trait locus (QTL) is a particular region of the genome containing one or more genes associated with economically important quantitative traits. This study was conducted to identify QTL regions for body weight and growth traits in purebred Korean native chicken (KNC). F1 samples (n = 595) were genotyped using 127 microsatellite markers and 8 single nucleotide polymorphisms that covered 2,616.1 centimorgans (cM) of map length over 26 autosomal linkage groups. Body weight traits were measured every 2 weeks from hatch to 20 weeks of age. Weight of half carcass was also collected together with growth rate. A multipoint variance component linkage approach was used to identify QTLs for the body weight traits. Two significant QTLs for growth were identified on chicken chromosome 3 (GGA3) for growth from 16 to 18 weeks (logarithm of the odds [LOD] = 3.24, nominal p value = 0.0001) and on GGA4 for growth from 6 to 8 weeks (LOD = 2.88, nominal p value = 0.0003). Additionally, one significant QTL and three suggestive QTLs were detected for body weight traits in KNC; a significant QTL for body weight at 4 weeks (LOD = 2.52, nominal p value = 0.0007) and a suggestive QTL for body weight at 8 weeks (LOD = 1.96, nominal p value = 0.0027) were detected on GGA4; QTLs were also detected for two further body weight traits: body weight at 16 weeks on GGA3 and body weight at 18 weeks on GGA19. Additionally, two suggestive QTLs for carcass weight were detected at 0 and 70 cM on GGA19. In conclusion, the current study identified several significant and suggestive QTLs that affect growth-related traits in a unique resource pedigree of purebred KNC. This information will contribute to improving body weight traits in native chicken breeds, especially the Asian native chicken breeds. PMID:26732327
Bright, Molly G; Murphy, Kevin
2015-07-01
Noise correction is a critical step towards accurate mapping of resting state BOLD fMRI connectivity. Noise sources related to head motion or physiology are typically modelled by nuisance regressors, and a generalised linear model is applied to regress out the associated signal variance. In this study, we use independent component analysis (ICA) to characterise the data variance typically discarded in this pre-processing stage in a cohort of 12 healthy volunteers. The signal variance removed by 24, 12, 6, or only 3 head motion parameters demonstrated network structure typically associated with functional connectivity, and certain networks were discernable in the variance extracted by as few as 2 physiologic regressors. Simulated nuisance regressors, unrelated to the true data noise, also removed variance with network structure, indicating that any group of regressors that randomly sample variance may remove highly structured "signal" as well as "noise." Furthermore, to support this we demonstrate that random sampling of the original data variance continues to exhibit robust network structure, even when as few as 10% of the original volumes are considered. Finally, we examine the diminishing returns of increasing the number of nuisance regressors used in pre-processing, showing that excessive use of motion regressors may do little better than chance in removing variance within a functional network. It remains an open challenge to understand the balance between the benefits and confounds of noise correction using nuisance regressors.
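The nuisance-regression step described here is an ordinary least-squares projection: fit the regressors to each voxel time series and keep the residuals, which by construction contain none of the variance spanned by the regressors. A minimal sketch (function name and interface are illustrative, not the authors' pipeline):

```python
import numpy as np

def regress_out(data, nuisance):
    """Remove variance explained by nuisance regressors from each voxel
    time series via an ordinary least-squares fit.

    data     -- (T, V) array: T time points, V voxels
    nuisance -- (T, k) array of regressors (e.g. motion parameters)
    Returns the (T, V) residual data.
    """
    X = np.column_stack([np.ones(len(nuisance)), nuisance])  # add intercept
    beta, *_ = np.linalg.lstsq(X, data, rcond=None)          # fit per voxel
    return data - X @ beta                                   # residuals
```

The study's point is that *any* such projection, even onto random regressors, removes structured variance, so the residuals should not be assumed to be "signal plus white noise."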
ERIC Educational Resources Information Center
Abry, Tashia; Cash, Anne H.; Bradshaw, Catherine P.
2014-01-01
Generalizability theory (GT) offers a useful framework for estimating the reliability of a measure while accounting for multiple sources of error variance. The purpose of this study was to use GT to examine multiple sources of variance in and the reliability of school-level teacher and high school student behaviors as observed using the tool,…
29 CFR 1905.10 - Variances and other relief under section 6(b)(6)(A).
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 5 2010-07-01 2010-07-01 false Variances and other relief under section 6(b)(6)(A). 1905.10 Section 1905.10 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RULES OF PRACTICE FOR VARIANCES, LIMITATIONS, VARIATIONS, TOLERANCES, AND EXEMPTIONS UNDER THE...
ERIC Educational Resources Information Center
Penfield, Randall D.; Algina, James
2006-01-01
One approach to measuring unsigned differential test functioning is to estimate the variance of the differential item functioning (DIF) effect across the items of the test. This article proposes two estimators of the DIF effect variance for tests containing dichotomous and polytomous items. The proposed estimators are direct extensions of the…
Methods to Estimate the Between-Study Variance and Its Uncertainty in Meta-Analysis
ERIC Educational Resources Information Center
Veroniki, Areti Angeliki; Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian P. T.; Langan, Dean; Salanti, Georgia
2016-01-01
Meta-analyses are typically used to estimate the overall/mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance,…
Consistent Small-Sample Variances for Six Gamma-Family Measures of Ordinal Association
ERIC Educational Resources Information Center
Woods, Carol M.
2009-01-01
Gamma-family measures are bivariate ordinal correlation measures that form a family because they all reduce to Goodman and Kruskal's gamma in the absence of ties (1954). For several gamma-family indices, more than one variance estimator has been introduced. In previous research, the "consistent" variance estimator described by Cliff and…
29 CFR 1926.2 - Variances from safety and health standards.
Code of Federal Regulations, 2010 CFR
2010-07-01
...)(A) or 6(d) of the Williams-Steiger Occupational Safety and Health Act of 1970 (29 U.S.C. 65). The... for variances under the Williams-Steiger Occupational Safety and Health Act of 1970, and any requests for variances under Williams-Steiger Occupational Safety and Health Act with respect to...
75 FR 11147 - Process for Requesting a Variance From Vegetation Standards for Levees and Floodwalls
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-10
... Department of the Army, Corps of Engineers Process for Requesting a Variance From Vegetation Standards for... of Engineers (Corps), published its proposed update to its current process for requesting a variance... stated that written comments must be submitted on or before March 11, 2010. Instructions for...
29 CFR 1905.6 - Public notice of a granted variance, limitation, variation, tolerance, or exemption.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 5 2010-07-01 2010-07-01 false Public notice of a granted variance, limitation, variation..., VARIATIONS, TOLERANCES, AND EXEMPTIONS UNDER THE WILLIAMS-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT OF 1970 General § 1905.6 Public notice of a granted variance, limitation, variation, tolerance, or...
40 CFR 142.302 - Who can issue a small system variance?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Who can issue a small system variance? 142.302 Section 142.302 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small...
Ulnar variance and Kienböck's disease. An investigation in Taiwan.
Chen, W S; Shih, C H
1990-06-01
The correlation between negative ulnar variance and the occurrence of Kienböck's disease was evaluated in Taiwan. Two groups of subjects were studied. The first group consisted of 1000 normal subjects and the second of 18 patients with Kienböck's disease. Student's t-test was used to evaluate the significance of the difference between this and other published series. The mean ulnar variance was 0.313 mm in Group 1 and -1.222 mm in Group 2. The percentage with significant negative ulnar variance (the distal ulna at least 2 mm shorter than the radius) was 6.0% in Group 1 and 55.6% in Group 2. The difference between the two groups was significant. The mean ulnar variance of normal subjects in Taiwan differed significantly from the variance in Swedes and American blacks but not American whites. In Chinese patients with Kienböck's disease, the ulnar variance was predominantly negative, and the distribution of ulnar variance was similar to that of Swedish or American white patients. This study confirmed the association between negative ulnar variance and the occurrence of Kienböck's disease. This supports Hultén's hypothesis that negative ulnar variance may predispose certain individuals to the occurrence of Kienböck's disease.
ERIC Educational Resources Information Center
Fan, Weihua; Hancock, Gregory R.
2012-01-01
This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…
Estimation of Additive, Dominance, and Imprinting Genetic Variance Using Genomic Data
Lopes, Marcos S.; Bastiaansen, John W. M.; Janss, Luc; Knol, Egbert F.; Bovenhuis, Henk
2015-01-01
Traditionally, exploration of genetic variance in humans, plants, and livestock species has been limited mostly to the use of additive effects estimated using pedigree data. However, with the development of dense panels of single-nucleotide polymorphisms (SNPs), the exploration of genetic variation of complex traits is moving from quantifying the resemblance between family members to the dissection of genetic variation at individual loci. With SNPs, we were able to quantify the contribution of additive, dominance, and imprinting variance to the total genetic variance by using a SNP regression method. The method was validated in simulated data and applied to three traits (number of teats, backfat, and lifetime daily gain) in three purebred pig populations. In simulated data, the estimates of additive, dominance, and imprinting variance were very close to the simulated values. In real data, dominance effects account for a substantial proportion of the total genetic variance (up to 44%) for these traits in these populations. The contribution of imprinting to the total phenotypic variance of the evaluated traits was relatively small (1-3%). Our results indicate a strong relationship between additive variance explained per chromosome and chromosome length, which has been described previously for other traits in other species. We also show that a similar linear relationship exists for dominance and imprinting variance. These novel results improve our understanding of the genetic architecture of the evaluated traits and show promise for applying the SNP regression method to other traits and species, including human diseases. PMID:26438289
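The additive and dominance terms in such SNP regressions are commonly built from the standard 0/1/2 genotype coding; the sketch below shows the usual construction, in which the dominance column is simply a heterozygote indicator (names are hypothetical, and this is the textbook coding rather than necessarily the exact parameterization used by Lopes et al.):

```python
import numpy as np

def genotype_design(snps):
    """Split a genotype matrix coded 0/1/2 (minor-allele counts) into
    additive and dominance design matrices.

    snps -- (individuals, markers) array of genotype codes 0, 1, 2
    """
    snps = np.asarray(snps)
    add = snps.astype(float)          # additive code: allele count itself
    dom = (snps == 1).astype(float)   # dominance code: heterozygote flag
    return add, dom
```

Regressing phenotypes jointly on the columns of `add` and `dom` (plus, for imprinting, a code distinguishing paternal from maternal heterozygotes) is what lets the variance be partitioned into additive, dominance, and imprinting components.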
An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests
ERIC Educational Resources Information Center
Attali, Yigal
2010-01-01
Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…
Comment on a Wilcox Test Statistic for Comparing Means When Variances Are Unequal.
ERIC Educational Resources Information Center
Hsiung, Tung-Hsing; And Others
1994-01-01
The alternative proposed by Wilcox (1989) to the James second-order statistic for comparing population means when variances are heterogeneous can sometimes be invalid. The degree to which the procedure is invalid depends on differences in sample size, the expected values of the observations, and population variances. (SLD)
Methods to Estimate the Variance of Some Indices of the Signal Detection Theory: A Simulation Study
ERIC Educational Resources Information Center
Suero, Manuel; Privado, Jesús; Botella, Juan
2017-01-01
A simulation study is presented to evaluate and compare three methods to estimate the variance of the estimates of the parameters d' and C of the signal detection theory (SDT). Several methods have been proposed to calculate the variance of their estimators, d' and c. Those methods have been mostly assessed by…
Vitezica, Zulma G.; Varona, Luis; Legarra, Andres
2013-01-01
Genomic evaluation models can fit additive and dominant SNP effects. Under quantitative genetics theory, additive or “breeding” values of individuals are generated by substitution effects, which involve both “biological” additive and dominant effects of the markers. Dominance deviations include only a portion of the biological dominant effects of the markers. Additive variance includes variation due to the additive and dominant effects of the markers. We describe a matrix of dominant genomic relationships across individuals, D, which is similar to the G matrix used in genomic best linear unbiased prediction. This matrix can be used in a mixed-model context for genomic evaluations or to estimate dominant and additive variances in the population. From the “genotypic” value of individuals, an alternative parameterization defines additive and dominance as the parts attributable to the additive and dominant effect of the markers. This approach underestimates the additive genetic variance and overestimates the dominance variance. Transforming the variances from one model into the other is trivial if the distribution of allelic frequencies is known. We illustrate these results with mouse data (four traits, 1884 mice, and 10,946 markers) and simulated data (2100 individuals and 10,000 markers). Variance components were estimated correctly in the model, considering breeding values and dominance deviations. For the model considering genotypic values, the inclusion of dominant effects biased the estimate of additive variance. Genomic models were more accurate for the estimation of variance components than their pedigree-based counterparts. PMID:24121775
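The additive (G) and dominance (D) genomic relationship matrices discussed in the abstract above can be sketched from a genotype matrix. The formulas below follow the standard VanRaden additive coding and the dominance-deviation coding that Vitezica et al. describe; treat this as a minimal illustration, not the paper's full mixed-model machinery:

```python
import numpy as np

def genomic_relationship_matrices(M):
    """Additive (G) and dominance (D) genomic relationship matrices
    from a genotype matrix M (individuals x markers, coded 0/1/2
    copies of the reference allele).

    G: VanRaden's first method, Z Z' / sum(2 p q).
    D: dominance-deviation coding per genotype,
         0 -> -2 p^2,  1 -> 2 p q,  2 -> -2 q^2
       scaled by sum((2 p q)^2), as in Vitezica et al. (2013)."""
    M = np.asarray(M, dtype=float)
    p = M.mean(axis=0) / 2.0          # reference-allele frequencies
    q = 1.0 - p
    Z = M - 2.0 * p                   # centered additive coding
    G = Z @ Z.T / np.sum(2.0 * p * q)

    W = (np.where(M == 1, 2.0 * p * q, 0.0)
         + np.where(M == 0, -2.0 * p ** 2, 0.0)
         + np.where(M == 2, -2.0 * q ** 2, 0.0))
    D = W @ W.T / np.sum((2.0 * p * q) ** 2)
    return G, D
```

Both matrices can then be used as covariance structures in a mixed model to estimate the additive and dominance variance components.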
On Studying Common Factor Variance in Multiple-Component Measuring Instruments
ERIC Educational Resources Information Center
Raykov, Tenko; Pohl, Steffi
2013-01-01
A method for examining common factor variance in multiple-component measuring instruments is outlined. The procedure is based on an application of the latent variable modeling methodology and is concerned with evaluating observed variance explained by a global factor and by one or more additional component-specific factors. The approach furnishes…
42 CFR 488.64 - Remote facility variances for utilization review requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... PROCEDURES Special Requirements § 488.64 Remote facility variances for utilization review requirements. (a... conduct the utilization review required by § 405.1137 of this chapter or § 482.30 of this chapter, as... 42 Public Health 5 2011-10-01 2011-10-01 false Remote facility variances for utilization...
42 CFR 488.64 - Remote facility variances for utilization review requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... PROCEDURES Special Requirements § 488.64 Remote facility variances for utilization review requirements. (a... conduct the utilization review required by § 405.1137 of this chapter or § 482.30 of this chapter, as... 42 Public Health 5 2013-10-01 2013-10-01 false Remote facility variances for utilization...
42 CFR 488.64 - Remote facility variances for utilization review requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... PROCEDURES Special Requirements § 488.64 Remote facility variances for utilization review requirements. (a... conduct the utilization review required by § 405.1137 of this chapter or § 482.30 of this chapter, as... 42 Public Health 5 2012-10-01 2012-10-01 false Remote facility variances for utilization...
42 CFR 488.64 - Remote facility variances for utilization review requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... PROCEDURES Special Requirements § 488.64 Remote facility variances for utilization review requirements. (a... conduct the utilization review required by § 405.1137 of this chapter or § 482.30 of this chapter, as... 42 Public Health 5 2010-10-01 2010-10-01 false Remote facility variances for utilization...
Spatial variances of wind fields and their relation to second-order structure functions and spectra
NASA Astrophysics Data System (ADS)
Vogelzang, Jur; King, Gregory P.; Stoffelen, Ad
2015-02-01
Kinetic energy variance as a function of spatial scale for wind fields is commonly estimated either using second-order structure functions (in the spatial domain) or by spectral analysis (in the frequency domain). Both techniques give an order-of-magnitude estimate. More accurate estimates are given by a statistic called spatial variance. Spatial variances have a clear interpretation and are tolerant of missing data. They can be related to second-order structure functions, both for discrete and continuous data. Spatial variances can also be Fourier transformed to yield a relation with spectra. The flexibility of spatial variances is used to study various sampling strategies, and to compare them with second-order structure functions and spectral variances. It is shown that the spectral sampling strategy is not seriously biased toward calm conditions for scatterometer ocean surface vector winds. When the second-order structure function behaves like r^p, its ratio with the spatial variance equals (p+1)(p+2). Ocean surface winds in the tropics have p between 2/3 and 1, so one-sixth to one-fifth of the second-order structure function value is a good proxy for the cumulative variance.
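The quoted ratio can be checked numerically. For a stationary-increment process on a 1-D segment of length L, the pair-average identity gives the segment variance as an integral of the structure function; with D2(r) = r^p that integral evaluates to L^p / ((p+1)(p+2)), so the ratio D2(L) / sigma^2(L) is (p+1)(p+2). A small sketch (assuming this textbook identity; the paper's spatial-variance definition also handles sampling and missing data):

```python
def spatial_variance_ratio(p, n=100000):
    """Numerically verify D2(L) / sigma^2(L) = (p+1)(p+2) for L = 1,
    where D2(r) = r^p and the segment variance is the pair average
        sigma^2(L) = (1/L^2) * integral_0^L (L - r) * D2(r) dr."""
    L = 1.0
    dr = L / n
    # composite midpoint rule for the integral
    sigma2 = sum((L - (i + 0.5) * dr) * ((i + 0.5) * dr) ** p
                 for i in range(n)) * dr / L ** 2
    return L ** p / sigma2

# p = 1 gives 6, p = 2/3 gives 40/9 ~ 4.44, so the spatial variance is
# roughly one-sixth to one-fifth of the structure function value
ratio_tropics = spatial_variance_ratio(2 / 3)
```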
40 CFR 142.302 - Who can issue a small system variance?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Who can issue a small system variance? 142.302 Section 142.302 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small...
40 CFR 142.302 - Who can issue a small system variance?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Who can issue a small system variance? 142.302 Section 142.302 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small...
Genetic and environmental heterogeneity of residual variance of weight traits in Nellore beef cattle
2012-01-01
Background Many studies have provided evidence of the existence of genetic heterogeneity of environmental variance, suggesting that it could be exploited to improve robustness and uniformity of livestock by selection. However, little is known about the perspectives of such a selection strategy in beef cattle. Methods A two-step approach was applied to study the genetic heterogeneity of residual variance of weight gain from birth to weaning and long-yearling weight in a Nellore beef cattle population. First, an animal model was fitted to the data and second, the influence of additive and environmental effects on the residual variance of these traits was investigated with different models, in which the log squared estimated residuals for each phenotypic record were analyzed using the restricted maximum likelihood method. Monte Carlo simulation was performed to assess the reliability of variance component estimates from the second step and the accuracy of estimated breeding values for residual variation. Results The results suggest that both genetic and environmental factors have an effect on the residual variance of weight gain from birth to weaning and long-yearling weight in Nellore beef cattle and that uniformity of these traits could be improved by selecting for lower residual variance, when considering a large amount of information to predict genetic merit for this criterion. Simulations suggested that using the two-step approach would lead to biased estimates of variance components, such that more adequate methods are needed to study the genetic heterogeneity of residual variance in beef cattle. PMID:22672564
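The two-step idea above, fit a mean model, then model the log squared residuals, can be sketched in miniature. The study uses animal models and REML; as a minimal stand-in, the sketch uses ordinary least squares for both steps on simulated heteroscedastic data:

```python
import numpy as np

def two_step_residual_variance(X, y):
    """Two-step dispersion analysis (OLS stand-in for the animal-model
    version described above):
      step 1: mean model, extract residuals;
      step 2: regress log squared residuals on the same covariates,
              so the fitted values index residual-variance heterogeneity."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    z = np.log(resid ** 2 + 1e-12)   # small offset guards against log(0)
    gamma, *_ = np.linalg.lstsq(X, z, rcond=None)
    return beta, gamma

# toy data: residual variance increases with the covariate
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 2000)
X = np.column_stack([np.ones_like(x), x])
y = 2.0 + 3.0 * x + rng.normal(0.0, np.exp(0.5 * x), 2000)
beta, gamma = two_step_residual_variance(X, y)
# gamma[1] recovers a positive slope in log residual variance
```

The abstract's own simulations warn that this two-step scheme yields biased variance-component estimates, so it should be read as illustrating the mechanics, not as a recommended estimator.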
Small Variance in Growth Rate in Annual Plants has Large Effects on Genetic Drift
Technology Transfer Automated Retrieval System (TEKTRAN)
When plant size is strongly correlated with plant reproduction, variance in growth rates results in a lognormal distribution of seed production within a population. Fecundity variance affects effective population size (Ne), which reflects the ability of a population to maintain beneficial mutations ...
AD, the ALICE diffractive detector
NASA Astrophysics Data System (ADS)
Tello, Abraham Villatoro
2017-03-01
ALICE is one of the four large experiments at the CERN Large Hadron Collider (LHC). As a complement to its heavy-ion physics program, ALICE started during Run 1 of the LHC an extensive program dedicated to the study of proton-proton diffractive processes. In order to optimize its trigger efficiencies and purities in selecting diffractive events, the ALICE Collaboration installed a very forward detector, AD, during Long Shutdown 1 of the LHC. This new forward detector system consists of two stations, each made of two layers of scintillator pads, with one station on each side of the interaction point. With this upgrade, ALICE has substantially increased its forward physics coverage, including the double-rapidity-gap-based selection of central production, as well as measurements of inclusive diffractive cross sections.