On the Relations among Regular, Equal Unique Variances, and Image Factor Analysis Models.
ERIC Educational Resources Information Center
Hayashi, Kentaro; Bentler, Peter M.
2000-01-01
Investigated the conditions under which the matrix of factor loadings from the factor analysis model with equal unique variances will give a good approximation to the matrix of factor loadings from the regular factor analysis model. Extends the results to the image factor analysis model. Discusses implications for practice. (SLD)
Adding a Parameter Increases the Variance of an Estimated Regression Function
ERIC Educational Resources Information Center
Withers, Christopher S.; Nadarajah, Saralees
2011-01-01
The linear regression model is one of the most popular models in statistics. It is also one of the simplest models in statistics. It has received applications in almost every area of science, engineering and medicine. In this article, the authors show that adding a predictor to a linear model increases the variance of the estimated regression…
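The claim can be checked numerically. A minimal sketch (my own illustration, not the authors' derivation): for a fixed design, the variance of the estimated regression function at a point x0 is proportional to the quadratic form x0ᵀ(XᵀX)⁻¹x0, and this quantity never decreases when a column is added to X.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)

def variance_factor(X, x0):
    # Var(yhat(x0)) = sigma^2 * x0^T (X^T X)^{-1} x0; return the design-dependent factor
    return float(x0 @ np.linalg.solve(X.T @ X, x0))

X1 = np.column_stack([np.ones(n), x1])          # intercept + one predictor
X2 = np.column_stack([np.ones(n), x1, x2])      # add a second predictor
v1 = variance_factor(X1, np.array([1.0, 0.5]))
v2 = variance_factor(X2, np.array([1.0, 0.5, 0.5]))
print(v1, v2)  # v2 is never smaller than v1
```

The inequality v2 ≥ v1 follows from the block-matrix inverse of the augmented XᵀX, which is essentially the result the abstract describes.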
Finger gnosis predicts a unique but small part of variance in initial arithmetic performance.
Wasner, Mirjam; Nuerk, Hans-Christoph; Martignon, Laura; Roesch, Stephanie; Moeller, Korbinian
2016-06-01
Recent studies indicated that finger gnosis (i.e., the ability to perceive and differentiate one's own fingers) is reliably associated with basic numerical competencies. In this study, we aimed at examining whether finger gnosis is also a unique predictor for initial arithmetic competencies at the beginning of first grade, and thus before formal math instruction starts. Therefore, we controlled for influences of domain-specific numerical precursor competencies, domain-general cognitive ability, and natural variables such as gender and age. Results from 321 German first-graders revealed that finger gnosis indeed predicted a unique and relevant but nevertheless only small part of the variance in initial arithmetic performance (∼1%-2%) as compared with influences of general cognitive ability and numerical precursor competencies. Taken together, these results substantiated the notion of a unique association between finger gnosis and arithmetic and further corroborated the theoretical idea of finger-based representations contributing to numerical cognition. However, the small part of variance explained by finger gnosis seems to limit its relevance for diagnostic purposes. PMID:26895483
McLaughlin, Elizabeth N; Stewart, Sherry H; Taylor, Steven
2007-01-01
Anxiety sensitivity (AS) is an established cognitive risk factor for anxiety disorders. In children and adolescents, AS is usually measured with the Childhood Anxiety Sensitivity Index (CASI). Factor analytic studies suggest that the CASI is comprised of 3 lower-order factors pertaining to Physical, Psychological and Social Concerns. There has been little research on the validity of these lower-order factors. We examined the concurrent and incremental validity of the CASI and its lower-order factors in a non-clinical sample of 349 children and adolescents. CASI scores predicted symptoms of DSM-IV anxiety disorder subtypes as measured by the Spence Children's Anxiety Scale (SCAS) after accounting for variance due to State-Trait Anxiety Inventory scores. CASI Physical Concerns scores incrementally predicted scores on each of the SCAS scales, whereas scores on the Social and Psychological Concerns subscales incrementally predicted scores on conceptually related symptom scales (e.g. CASI Social Concerns scores predicted Social Phobia symptoms). Overall, this study demonstrates that there is added value in measuring AS factors in children and adolescents. PMID:18049946
Universal slow fall-off to the unique AdS infinity in Einstein-Gauss-Bonnet gravity
Maeda, Hideki
2008-08-15
In this paper, the following two propositions are proven under the dominant energy condition for the matter field in the higher-dimensional spherically symmetric spacetime in Einstein-Gauss-Bonnet gravity in the presence of a cosmological constant Λ. First, for Λ ≤ 0 and α ≥ 0 without a fine-tuning to give a unique anti-de Sitter (AdS) vacuum, where α is the Gauss-Bonnet coupling constant, vanishing generalized Misner-Sharp mass is equivalent to the maximally symmetric spacetime. Under the fine-tuning, it is equivalent to the vacuum class I spacetime. Second, under the fine-tuning with α > 0, the asymptotically AdS spacetime in the higher-dimensional Henneaux-Teitelboim sense is only a special class of the vacuum class I spacetime. This means the universal slow fall-off to the unique AdS infinity in the presence of physically reasonable matter.
Schindler, Suzanne Elizabeth; Fagan, Anne M.
2015-01-01
Our understanding of the pathogenesis of Alzheimer disease (AD) has been greatly influenced by investigation of rare families with autosomal dominant mutations that cause early onset AD. Mutations in the genes coding for amyloid precursor protein (APP), presenilin 1 (PSEN-1), and presenilin 2 (PSEN-2) cause over-production of the amyloid-β peptide (Aβ) leading to early deposition of Aβ in the brain, which in turn is hypothesized to initiate a cascade of processes, resulting in neuronal death, cognitive decline, and eventual dementia. Studies of cerebrospinal fluid (CSF) from individuals with the common form of AD, late-onset AD (LOAD), have revealed that low CSF Aβ42 and high CSF tau are associated with AD brain pathology. Herein, we review the literature on CSF biomarkers in autosomal dominant AD (ADAD), which has contributed to a detailed road map of AD pathogenesis, especially during the preclinical period, prior to the appearance of any cognitive symptoms. Current drug trials are also taking advantage of the unique characteristics of ADAD and utilizing CSF biomarkers to accelerate development of effective therapies for AD. PMID:26175713
On the Uniqueness of Higher-Spin Symmetries in AdS and CFT
NASA Astrophysics Data System (ADS)
Boulanger, N.; Ponomarev, D.; Skvortsov, E.; Taronna, M.
2013-12-01
We study the uniqueness of higher-spin algebras which are at the core of higher-spin theories in AdS and of CFTs with exact higher-spin symmetry, i.e. conserved tensors of rank greater than two. The Jacobi identity for the gauge algebra is the simplest consistency test that appears at the quartic order for a gauge theory. Similarly, the algebra of charges in a CFT must also obey the Jacobi identity. These algebras are essentially the same. Solving the Jacobi identity under some simplifying assumptions, which we list, we find that the Eastwood-Vasiliev algebra is the unique solution for d = 4 and d ≥ 7. In 5d, there is a one-parameter family of algebras that was known before. In particular, we show that the introduction of a single higher-spin gauge field/current automatically requires the infinite tower of higher-spin gauge fields/currents. The result implies that of all the admissible non-Abelian cubic vertices in AdS_d that have been recently classified for totally symmetric higher-spin gauge fields, only one vertex can pass the Jacobi consistency test. This cubic vertex is associated with a gauge deformation that is the germ of the Eastwood-Vasiliev higher-spin algebra.
Wilkinson, Eduan; Holzmayer, Vera; Jacobs, Graeme B.; de Oliveira, Tulio; Brennan, Catherine A.; Hackett, John; van Rensburg, Estrelita Janse
2015-01-01
By the end of 2012, more than 6.1 million people were infected with HIV-1 in South Africa. Subtype C was responsible for the majority of these infections and more than 300 near full-length genomes (NFLGs) have been published. Currently very few non-subtype C isolates have been identified and characterized within the country, particularly full genome non-C isolates. Seven patients from the Tygerberg Virology (TV) cohort were previously identified as possible non-C subtypes and were selected for further analyses. RNA was isolated from five individuals (TV047, TV096, TV101, TV218, and TV546) and DNA from TV016 and TV1057. The NFLGs of these samples were amplified in overlapping fragments and sequenced. Online subtyping tools REGA version 3 and jpHMM were used to screen for subtypes and recombinants. Maximum likelihood (ML) phylogenetic analysis (phyML) was used to infer subtypes and SimPlot was used to confirm possible intersubtype recombinants. We identified three subtype B (TV016, TV047, and TV1057) isolates, one subtype A1 (TV096), one subtype G (TV546), one unique AD (TV101), and one unique AC (TV218) recombinant form. This is the first NFLG of subtype G that has been described in South Africa. The subtype B sequences described also increased the NFLG subtype B sequences in Africa from three to six. There is a need for more NFLG sequences, as partial HIV-1 sequences may underrepresent viral recombinant forms. It is also necessary to continue monitoring the evolution and spread of HIV-1 in South Africa, because understanding viral diversity may play an important role in HIV-1 prevention strategies. PMID:25492033
Zhao, Yuhai; Pogue, Aileen I.; Lukiw, Walter J.
2015-01-01
Of the approximately 2.65 × 10³ mature microRNAs (miRNAs) so far identified in Homo sapiens, only a surprisingly small but select subset—about 35–40—are highly abundant in the human central nervous system (CNS). This fact alone underscores the extremely high selection pressure for the human CNS to utilize only specific ribonucleotide sequences contained within these single-stranded non-coding RNAs (ncRNAs) for productive miRNA–mRNA interactions and the down-regulation of gene expression. In this article we will: (i) consolidate some of our still evolving ideas concerning the role of miRNAs in the CNS in normal aging and in health, and in sporadic Alzheimer’s disease (AD) and related forms of chronic neurodegeneration; and (ii) highlight certain aspects of the most current work in this research field, with particular emphasis on the findings from our lab of a small pathogenic family of six inducible, pro-inflammatory, NF-κB-regulated miRNAs including miRNA-7, miRNA-9, miRNA-34a, miRNA-125b, miRNA-146a and miRNA-155. This group of six CNS-abundant miRNAs significantly up-regulated in sporadic AD are emerging as what appear to be key mechanistic contributors to the sporadic AD process and can explain much of the neuropathology of this common, age-related inflammatory neurodegeneration of the human CNS. PMID:26694372
Monte Carlo variance reduction
NASA Technical Reports Server (NTRS)
Byrn, N. R.
1980-01-01
Computer program incorporates technique that reduces variance of forward Monte Carlo method for given amount of computer time in determining radiation environment in complex organic and inorganic systems exposed to significant amounts of radiation.
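The abstract does not specify which variance-reduction technique the program incorporates; as a generic illustration of the idea, here is a control-variate sketch (my own example): estimating E[e^U] for U ~ Uniform(0,1), using U itself, whose mean 1/2 is known exactly, as the control.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
u = rng.random(n)
f = np.exp(u)                              # plain estimator of E[e^U] = e - 1
beta = np.cov(f, u)[0, 1] / np.var(u)      # near-optimal control coefficient
g = f - beta * (u - 0.5)                   # control-variate estimator, same mean
print(f.mean(), g.mean())                  # both near e - 1
print(np.var(g) / np.var(f))               # variance ratio well below 1
```

Because e^U and U are highly correlated, the control variate removes most of the estimator variance for the same number of samples, which is the same goal (variance per unit of computer time) the abstract describes.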
NASA Astrophysics Data System (ADS)
Moster, Benjamin P.; Somerville, Rachel S.; Newman, Jeffrey A.; Rix, Hans-Walter
2011-04-01
Deep pencil beam surveys (<1 deg^2) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by "cosmic variance." This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic variance is
Getting around cosmic variance
Kamionkowski, M.; Loeb, A.
1997-10-01
Cosmic microwave background (CMB) anisotropies probe the primordial density field at the edge of the observable Universe. There is a limiting precision ("cosmic variance") with which anisotropies can determine the amplitude of primordial mass fluctuations. This arises because the surface of last scatter (SLS) probes only a finite two-dimensional slice of the Universe. Probing other SLS's observed from different locations in the Universe would reduce the cosmic variance. In particular, the polarization of CMB photons scattered by the electron gas in a cluster of galaxies provides a measurement of the CMB quadrupole moment seen by the cluster. Therefore, CMB polarization measurements toward many clusters would probe the anisotropy on a variety of SLS's within the observable Universe, and hence reduce the cosmic-variance uncertainty. © 1997 The American Physical Society
Videotape Project in Child Variance. Final Report.
ERIC Educational Resources Information Center
Morse, William C.; Smith, Judith M.
The design, production, dissemination, and evaluation of a series of videotaped training packages designed to enable teachers, parents, and paraprofessionals to interpret child variance in light of personal and alternative perspectives of behavior are discussed. The goal of each package is to highlight unique contributions of different theoretical…
Variance Anisotropy in Kinetic Plasmas
NASA Astrophysics Data System (ADS)
Parashar, Tulasi N.; Oughton, Sean; Matthaeus, William H.; Wan, Minping
2016-06-01
Solar wind fluctuations admit well-documented anisotropies of the variance matrix, or polarization, related to the mean magnetic field direction. Typically, one finds a ratio of perpendicular variance to parallel variance of the order of 9:1 for the magnetic field. Here we study the question of whether a kinetic plasma spontaneously generates and sustains parallel variances when initiated with only perpendicular variance. We find that parallel variance grows and saturates at about 5% of the perpendicular variance in a few nonlinear times irrespective of the Reynolds number. For sufficiently large systems (Reynolds numbers) the variance approaches values consistent with the solar wind observations.
Conversations across Meaning Variance
ERIC Educational Resources Information Center
Cordero, Alberto
2013-01-01
Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…
Minimum variance geographic sampling
NASA Technical Reports Server (NTRS)
Terrell, G. R. (Principal Investigator)
1980-01-01
Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated from data used to estimate Missouri corn acreage.
ERIC Educational Resources Information Center
Braun, W. John
2012-01-01
The Analysis of Variance is often taught in introductory statistics courses, but it is not clear that students really understand the method. This is because the derivation of the test statistic and p-value requires a relatively sophisticated mathematical background which may not be well-remembered or understood. Thus, the essential concept behind…
Spectral Ambiguity of Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1996-01-01
We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
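For readers unfamiliar with the quantity, a minimal non-overlapping Allan-variance computation (my own illustration, not from the paper): for white noise the Allan variance falls off as 1/m with averaging factor m.

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance at averaging factor m."""
    k = len(y) // m
    ybar = y[: k * m].reshape(k, m).mean(axis=1)   # block averages of length m
    return 0.5 * np.mean(np.diff(ybar) ** 2)       # half the mean squared first difference

rng = np.random.default_rng(2)
y = rng.normal(size=100_000)   # white noise, unit variance
a1 = allan_variance(y, 1)      # ~1 for unit-variance white noise
a10 = allan_variance(y, 10)    # ~1/10 of a1
print(a1, a10)
```

As the abstract notes, the variance of first differences (m = 1 here) determines the spectrum, while the full Allan variance curve in general does not.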
Nuclear Material Variance Calculation
1995-01-01
MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet that significantly reduces the effort required to make the variance and covariance calculations needed to determine the detection sensitivity of a materials accounting system and loss of special nuclear material (SNM). The user is required to enter information into one of four data tables depending on the type of term in the materials balance (MB) equation. The four data tables correspond to input transfers, output transfers, and two types of inventory terms, one for nondestructive assay (NDA) measurements and one for measurements made by chemical analysis. Each data entry must contain an identification number and a short description, as well as values for the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements during an accounting period. The user must also specify the type of error model (additive or multiplicative) associated with each measurement, and possible correlations between transfer terms. Predefined spreadsheet macros are used to perform the variance and covariance calculations for each term based on the corresponding set of entries. MAVARIC has been used for sensitivity studies of chemical separation facilities, fuel processing and fabrication facilities, and gas centrifuge and laser isotope enrichment facilities.
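MAVARIC itself is a spreadsheet, but the propagation it automates can be sketched in a few lines. This is a simplified illustration with hypothetical numbers, not the program's actual worksheet: each term's variance combines an additive error with a multiplicative error proportional to the measured value, and independent terms add in variance.

```python
import math

# (value, additive sigma, multiplicative sigma) for each materials-balance term
# -- hypothetical figures for illustration only
terms = {
    "input":     (100.0, 0.5, 0.01),
    "output":    ( 95.0, 0.5, 0.01),
    "inventory": (  4.0, 0.2, 0.02),
}

def term_variance(value, sig_add, sig_mul):
    # additive and multiplicative error components, assumed independent
    return sig_add**2 + (sig_mul * value) ** 2

# with independent terms (no correlations), the balance variance is the sum
mb_variance = sum(term_variance(*t) for t in terms.values())
sigma_mb = math.sqrt(mb_variance)
print(sigma_mb)   # sets the detection-sensitivity scale for the balance
```

The real calculation also carries covariance terms for correlated transfers, which this sketch omits.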
Biclustering with heterogeneous variance.
Chen, Guanhua; Sullivan, Patrick F; Kosorok, Michael R
2013-07-23
In cancer research, as in all of medicine, it is important to classify patients into etiologically and therapeutically relevant subtypes to improve diagnosis and treatment. One way to do this is to use clustering methods to find subgroups of homogeneous individuals based on genetic profiles together with heuristic clinical analysis. A notable drawback of existing clustering methods is that they ignore the possibility that the variance of gene expression profile measurements can be heterogeneous across subgroups, and methods that do not consider heterogeneity of variance can lead to inaccurate subgroup prediction. Research has shown that hypervariability is a common feature among cancer subtypes. In this paper, we present a statistical approach that can capture both mean and variance structure in genetic data. We demonstrate the strength of our method in both synthetic data and in two cancer data sets. In particular, our method confirms the hypervariability of methylation level in cancer patients, and it detects clearer subgroup patterns in lung cancer data. PMID:23836637
A proxy for variance in dense matching over homogeneous terrain
NASA Astrophysics Data System (ADS)
Altena, Bas; Cockx, Liesbet; Goedemé, Toon
2014-05-01
Automation in photogrammetry and avionics has brought highly autonomous UAV mapping solutions to the market. These systems have great potential for geophysical research, owing to their mobility and ease of use. Flight planning can be done on site and orientation parameters are estimated automatically. However, one major drawback is still present: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though more robustness is added in the estimation through multi-view geometry, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, its epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated with respect to the intensity signal of the topography (SNR) and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two different case studies were carried out. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, which resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimations of the topography and was compared with the data from the other UAV flight. In the second study a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. Although the beach had a very low
Systems Engineering Programmatic Estimation Using Technology Variance
NASA Technical Reports Server (NTRS)
Mog, Robert A.
2000-01-01
Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "return" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.
Spectral variance of aeroacoustic data
NASA Technical Reports Server (NTRS)
Rao, K. V.; Preisser, J. S.
1981-01-01
An asymptotic technique for estimating the variance of power spectra is applied to aircraft flyover noise data. The results are compared with directly estimated variances and they are in reasonable agreement. The basic time series need not be Gaussian for asymptotic theory to apply. The asymptotic variance formulae can be useful tools both in the design and analysis phase of experiments of this type.
Cosmology without cosmic variance
Bernstein, Gary M.; Cai, Yan -Chuan
2011-10-01
The growth of structures in the Universe is described by a function G that is predicted by the combination of the expansion history of the Universe and the laws of gravity within it. We examine the improvements in constraints on G that are available from the combination of a large-scale galaxy redshift survey with a weak gravitational lensing survey of background sources. We describe a new combination of such observations that in principle yields a measure of the growth rate that is free of sample variance, i.e. the uncertainty in G can be reduced without bound by increasing the number of redshifts obtained within a finite survey volume. The addition of background weak lensing data to a redshift survey increases information on G by an amount equivalent to a 10-fold increase in the volume of a standard redshift-space distortion measurement - if the lensing signal can be measured to sub-per cent accuracy. This argues that a combined lensing and redshift survey over a common low-redshift volume of the Universe is a more powerful test of general relativity than an isolated redshift survey over larger volume at high redshift, especially as surveys begin to cover most of the available sky.
Budget variance analysis using RVUs.
Berlin, M F; Budzynski, M R
1998-01-01
This article details the use of variance analysis as a management tool to evaluate the financial health of the practice. A common financial tool for administrators has been a simple calculation measuring the difference between actual and budgeted financials. Standard cost accounting provides a methodology known as variance analysis to better understand actual vs. budgeted financial streams. The standard variance analysis has been modified by applying relative value units (RVUs) as standards for the practice. PMID:10387247
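The article's worksheet isn't reproduced here, but the standard rate/volume decomposition it builds on is easy to sketch (hypothetical figures; "rate" is cost per RVU):

```python
# budget vs. actual, in RVUs and cost per RVU (hypothetical figures)
budget_rvus, budget_rate = 1000, 40.0    # $40,000 budgeted
actual_rvus, actual_rate = 1100, 42.0    # $46,200 actual

total_variance  = actual_rvus * actual_rate - budget_rvus * budget_rate
volume_variance = (actual_rvus - budget_rvus) * budget_rate   # more RVUs than planned
rate_variance   = (actual_rate - budget_rate) * actual_rvus   # costlier per RVU
print(total_variance, volume_variance + rate_variance)        # prints 6200.0 6200.0
```

The decomposition is exact by construction: it tells the administrator how much of the unfavorable variance came from doing more work (volume) versus paying more per unit of work (rate).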
Latitude dependence of eddy variances
NASA Technical Reports Server (NTRS)
Bowman, Kenneth P.; Bell, Thomas L.
1987-01-01
The eddy variance of a meteorological field must tend to zero at high latitudes due solely to the nature of spherical polar coordinates. The zonal averaging operator defines a length scale: the circumference of the latitude circle. When the circumference of the latitude circle is greater than the correlation length of the field, the eddy variance from transient eddies is the result of differences between statistically independent regions. When the circumference is less than the correlation length, the eddy variance is computed from points that are well correlated with each other, and so is reduced. The expansion of a field into zonal Fourier components is also influenced by the use of spherical coordinates. As is well known, a phenomenon of fixed wavelength will have different zonal wavenumbers at different latitudes. Simple analytical examples of these effects are presented along with an observational example from satellite ozone data. It is found that geometrical effects can be important even in middle latitudes.
Assessment of the genetic variance of late-onset Alzheimer's disease.
Ridge, Perry G; Hoyt, Kaitlyn B; Boehme, Kevin; Mukherjee, Shubhabrata; Crane, Paul K; Haines, Jonathan L; Mayeux, Richard; Farrer, Lindsay A; Pericak-Vance, Margaret A; Schellenberg, Gerard D; Kauwe, John S K
2016-05-01
Alzheimer's disease (AD) is a complex genetic disorder with no effective treatments. More than 20 common markers have been identified, which are associated with AD. Recently, several rare variants have been identified in Amyloid Precursor Protein (APP), Triggering Receptor Expressed On Myeloid Cells 2 (TREM2) and Unc-5 Netrin Receptor C (UNC5C) that affect risk for AD. Despite the many successes, the genetic architecture of AD remains unsolved. We used Genome-wide Complex Trait Analysis to (1) estimate phenotypic variance explained by genetics; (2) calculate genetic variance explained by known AD single nucleotide polymorphisms (SNPs); and (3) identify the genomic locations of variation that explain the remaining unexplained genetic variance. In total, 53.24% of phenotypic variance is explained by genetics, but known AD SNPs only explain 30.62% of the genetic variance. Of the unexplained genetic variance, approximately 41% is explained by unknown SNPs in regions adjacent to known AD SNPs, and the remaining unexplained genetic variance lies outside these regions. PMID:27036079
ERIC Educational Resources Information Center
UCLA IDEA, 2012
2012-01-01
Value added measures (VAM) uses changes in student test scores to determine how much "value" an individual teacher has "added" to student growth during the school year. Some policymakers, school districts, and educational advocates have applauded VAM as a straightforward measure of teacher effectiveness: the better a teacher, the better students…
The Variance Reaction Time Model
ERIC Educational Resources Information Center
Sikstrom, Sverker
2004-01-01
The variance reaction time model (VRTM) is proposed to account for various recognition data on reaction time, the mirror effect, receiver-operating-characteristic (ROC) curves, etc. The model is based on simple and plausible assumptions within a neural network: VRTM is a two layer neural network where one layer represents items and one layer…
Analysis of Variance: Variably Complex
ERIC Educational Resources Information Center
Drummond, Gordon B.; Vowler, Sarah L.
2012-01-01
These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution of…
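The one-way ANOVA described above reduces to comparing between-group and within-group mean squares. A self-contained sketch of the textbook F statistic (illustrative data, not from the article):

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA: ratio of between-group to
    within-group mean squares."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(x for g in groups for x in g) / n
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)  # between
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)    # within
    return (ssb / (k - 1)) / (ssw / (n - k))

# Three groups; the third mean is clearly larger, so F is large.
F = one_way_anova_F([[1, 2, 3], [2, 3, 4], [6, 7, 8]])
print(F)  # 21.0
```

A large F indicates that at least two group means differ; identical groups give F = 0.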
Variance of a Few Observations
ERIC Educational Resources Information Center
Joarder, Anwar H.
2009-01-01
This article demonstrates that the variance of three or four observations can be expressed in terms of the range and the first order differences of the observations. A more general result, which holds for any number of observations, is also stated.
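Related to the result above (though not Joarder's exact range-based formula), the sample variance of any number of observations can be written purely in terms of pairwise differences, with no explicit mean. A sketch of that standard identity:

```python
from itertools import combinations

def variance_pairwise(xs):
    """Sample variance via the identity
    s^2 = sum over pairs of (x_i - x_j)^2, divided by n * (n - 1)."""
    n = len(xs)
    return sum((a - b) ** 2 for a, b in combinations(xs, 2)) / (n * (n - 1))

def variance_direct(xs):
    """Usual mean-deviation form of the sample variance (divisor n - 1)."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

data = [3.0, 7.0, 8.0]
print(variance_pairwise(data), variance_direct(data))  # both 7.0
```

For two observations this collapses to (range)^2 / 2, the simplest case of expressing variance through differences.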
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application. Contractors desiring a variance from a safety and health standard, or portion thereof, may submit a...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Variances. 307.22 Section 307.22 ... § 307.22 Variances. EDA may approve variances to the requirements contained in this subpart, provided such variances: (a) Are consistent with the goals of the Economic Adjustment Assistance program and with an...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 7 2010-07-01 2010-07-01 false Variances. 1920.2 Section 1920.2 Labor Regulations Relating...' COMPENSATION ACT § 1920.2 Variances. (a) Variances from standards in parts 1915 through 1918 of this chapter may be granted in the same circumstances in which variances may be granted under sections 6(b)...
Measurement of Allan variance and phase noise at fractions of a millihertz
NASA Technical Reports Server (NTRS)
Conroy, Bruce L.; Le, Duc
1990-01-01
Although the measurement of Allan variance of oscillators is well documented, there is a need for a simplified system for finding the degradation of phase noise and Allan variance step-by-step through a system. This article describes an instrumentation system for simultaneous measurement of additive phase noise and degradation in Allan variance through a transmitter system. Also included are measurements of a 20-kW X-band transmitter showing the effect of adding a pass tube regulator.
Variance decomposition in stochastic simulators
Le Maître, O. P.; Knio, O. M.; Moraes, A.
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
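The Sobol-Hoeffding decomposition invoked above underlies standard variance-based sensitivity analysis. As a hedged sketch (a generic pick-freeze Monte Carlo estimator for a simple analytic model, not the paper's Poisson-process reformulation), first-order Sobol indices can be estimated as:

```python
import random

def sobol_first_order(f, d, n, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    for f: [0,1]^d -> R with independent uniform inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    fA = [f(x) for x in A]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    indices = []
    for i in range(d):
        # B with coordinate i taken from A: "freeze" input i, resample the rest.
        fABi = [f(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        cov = sum(ya * yb for ya, yb in zip(fA, fABi)) / n - mean ** 2
        indices.append(cov / var)
    return indices

# Additive test model f = x1 + x2^2: analytic S1 = 15/31, S2 = 16/31.
S = sobol_first_order(lambda x: x[0] + x[1] ** 2, d=2, n=20000)
print([round(s, 2) for s in S])
```

For an additive model the first-order indices sum to one; interaction effects would show up as a shortfall in that sum.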
Estimating the Modified Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, Charles
1995-01-01
The third-difference approach to modified Allan variance (MVAR) leads to a tractable formula for a measure of MVAR estimator confidence, the equivalent degrees of freedom (edf), in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. A simple approximation for edf is given, and its errors are tabulated. A theorem allowing conservative estimates of edf in the presence of compound noise processes is given.
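For reference alongside this abstract, the basic (unmodified) Allan variance estimator from fractional-frequency data is short enough to state in full; this is the standard non-overlapping two-sample definition, not Greenhall's third-difference MVAR machinery:

```python
def allan_variance(y, m):
    """Non-overlapping Allan variance of fractional-frequency data y
    at averaging factor m (tau = m * tau0)."""
    # Averages of consecutive non-overlapping blocks of length m.
    M = len(y) // m
    avg = [sum(y[k * m:(k + 1) * m]) / m for k in range(M)]
    return sum((avg[k + 1] - avg[k]) ** 2 for k in range(M - 1)) / (2 * (M - 1))

y = [1.0, -1.0] * 4  # alternating frequency offsets
print(allan_variance(y, 1), allan_variance(y, 2))  # 2.0 0.0
```

An alternating sequence maximizes adjacent-sample differences at m = 1 but averages to zero over blocks of two, so its Allan variance vanishes at m = 2.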
Neutrino mass without cosmic variance
NASA Astrophysics Data System (ADS)
LoVerde, Marilena
2016-05-01
Measuring the absolute scale of the neutrino masses is one of the most exciting opportunities available with near-term cosmological data sets. Two quantities that are sensitive to neutrino mass, scale-dependent halo bias b(k) and the linear growth parameter f(k) inferred from redshift-space distortions, can be measured without cosmic variance. Unlike the amplitude of the matter power spectrum, which always has a finite error, the error on b(k) and f(k) continues to decrease as the number density of tracers increases. This paper presents forecasts for statistics of galaxy and lensing fields that are sensitive to neutrino mass via b(k) and f(k). The constraints on neutrino mass from the auto- and cross-power spectra of spectroscopic and photometric galaxy samples are weakened by scale-dependent bias unless a very high density of tracers is available. In the high-density limit, using multiple tracers allows cosmic variance to be beaten, and the forecasted errors on neutrino mass shrink dramatically. In practice, beating the cosmic-variance errors on neutrino mass with b(k) will be a challenge, but this signal is nevertheless a new probe of neutrino effects on structure formation that is interesting in its own right.
Partitioning Predicted Variance into Constituent Parts: A Primer on Regression Commonality Analysis.
ERIC Educational Resources Information Center
Amado, Alfred J.
Commonality analysis is a method of decomposing the R squared in a multiple regression analysis into the proportion of explained variance of the dependent variable associated with each independent variable uniquely and the proportion of explained variance associated with the common effects of one or more independent variables in various…
ERIC Educational Resources Information Center
Orsini, Larry L.; Hudack, Lawrence R.; Zekan, Donald L.
1999-01-01
The value-added statement (VAS), relatively unknown in the United States, is used in financial reports by many European companies. Saint Bonaventure University (New York) has adapted a VAS to make it appropriate for not-for-profit universities by identifying stakeholder groups (students, faculty, administrators/support personnel, creditors, the…
A Wavelet Perspective on the Allan Variance.
Percival, Donald B
2016-04-01
The origins of the Allan variance trace back 50 years ago to two seminal papers, one by Allan (1966) and the other by Barnes (1966). Since then, the Allan variance has played a leading role in the characterization of high-performance time and frequency standards. Wavelets first arose in the early 1980s in the geophysical literature, and the discrete wavelet transform (DWT) became prominent in the late 1980s in the signal processing literature. Flandrin (1992) briefly documented a connection between the Allan variance and a wavelet transform based upon the Haar wavelet. Percival and Guttorp (1994) noted that one popular estimator of the Allan variance-the maximal overlap estimator-can be interpreted in terms of a version of the DWT now widely referred to as the maximal overlap DWT (MODWT). In particular, when the MODWT is based on the Haar wavelet, the variance of the resulting wavelet coefficients-the wavelet variance-is identical to the Allan variance when the latter is multiplied by one-half. The theory behind the wavelet variance can thus deepen our understanding of the Allan variance. In this paper, we review basic wavelet variance theory with an emphasis on the Haar-based wavelet variance and its connection to the Allan variance. We then note that estimation theory for the wavelet variance offers a means of constructing asymptotically correct confidence intervals (CIs) for the Allan variance without reverting to the common practice of specifying a power-law noise type a priori. We also review recent work on specialized estimators of the wavelet variance that are of interest when some observations are missing (gappy data) or in the presence of contamination (rogue observations or outliers). It is a simple matter to adapt these estimators to become estimators of the Allan variance. Finally we note that wavelet variances based upon wavelets other than the Haar offer interesting generalizations of the Allan variance. PMID:26529757
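The Haar/Allan connection Percival describes can be checked numerically at unit scale. The sketch below uses the unit-scale MODWT Haar filter (1/2, -1/2) applied to frequency data; the resulting wavelet variance is exactly half the matching Allan variance estimate (by algebra, since both are built from the same first differences):

```python
import random

def allan_var_unit(y):
    """Allan variance at unit averaging time: half the mean squared
    first difference of the frequency data."""
    d = [b - a for a, b in zip(y, y[1:])]
    return 0.5 * sum(x * x for x in d) / len(d)

def haar_modwt_var_unit(y):
    """Unit-scale MODWT Haar wavelet variance: coefficients (y[k+1] - y[k]) / 2."""
    w = [(b - a) / 2.0 for a, b in zip(y, y[1:])]
    return sum(x * x for x in w) / len(w)

rng = random.Random(1)
y = [rng.gauss(0.0, 1.0) for _ in range(1000)]  # white frequency noise
print(haar_modwt_var_unit(y) / allan_var_unit(y))  # 0.5, by construction
```

This is the identity the abstract states: the Haar wavelet variance equals the Allan variance multiplied by one-half.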
Estimating the Modified Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, Charles
1995-01-01
A paper at the 1992 FCS showed how to express the modified Allan variance (mvar) in terms of the third difference of the cumulative sum of time residuals. Although this reformulated definition was presented merely as a computational trick for simplifying the calculation of mvar estimates, it has since turned out to be a powerful theoretical tool for deriving the statistical quality of those estimates in terms of their equivalent degrees of freedom (edf), defined for an estimator V by edf V = 2(EV)2/(var V). Confidence intervals for mvar can then be constructed from levels of the appropriate 2 distribution.
NASA Astrophysics Data System (ADS)
Albacete, Javier L.; Kovchegov, Yuri V.; Taliotis, Anastasios
2009-03-01
We calculate the total cross section for the scattering of a quark-anti-quark dipole on a large nucleus at high energy for a strongly coupled N = 4 super Yang-Mills theory using AdS/CFT correspondence. We model the nucleus by a metric of a shock wave in AdS5. We then calculate the expectation value of the Wilson loop (the dipole) by finding the extrema of the Nambu-Goto action for an open string attached to the quark and antiquark lines of the loop in the background of an AdS5 shock wave. We find two physically meaningful extremal string configurations. For both solutions we obtain the forward scattering amplitude N for the quark dipole-nucleus scattering. We study the onset of unitarity with increasing center-of-mass energy and transverse size of the dipole: we observe that for both solutions the saturation scale Qs is independent of energy/Bjorken-x and depends on the atomic number of the nucleus as Qs ~ A^(1/3). Finally we observe that while one of the solutions we found corresponds to the pomeron intercept of αP = 2 found earlier in the literature, when extended to higher energy or larger dipole sizes it violates the black disk limit. The other solution we found respects the black disk limit and yields the pomeron intercept of αP = 1.5. We thus conjecture that the right pomeron intercept in gauge theories at strong coupling may be αP = 1.5.
Practice reduces task relevant variance modulation and forms nominal trajectory
NASA Astrophysics Data System (ADS)
Osu, Rieko; Morishige, Ken-Ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo
2015-12-01
Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task relevant variance modulation as an indication of online feedback control strategies to cope with motor variability. Meanwhile, it has been discussed that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both spatial and temporal domains to elucidate the relative contribution of these control schemas. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories with the reduction of task relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise both on the nominal trajectory and motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of computation seems to be taken over by the feedforward controller around the nominal trajectory with feedback added only when it becomes necessary.
Practice reduces task relevant variance modulation and forms nominal trajectory.
Osu, Rieko; Morishige, Ken-ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo
2015-01-01
Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task relevant variance modulation as an indication of online feedback control strategies to cope with motor variability. Meanwhile, it has been discussed that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both spatial and temporal domains to elucidate the relative contribution of these control schemas. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories with the reduction of task relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise both on the nominal trajectory and motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of computation seems to be taken over by the feedforward controller around the nominal trajectory with feedback added only when it becomes necessary. PMID:26639942
Variance analysis. Part I, Extending flexible budget variance analysis to acuity.
Finkler, S A
1991-01-01
The author reviews the concepts of flexible budget variance analysis, including the price, quantity, and volume variances generated by that technique. He also introduces the concept of acuity variance and provides direction on how such a variance measure can be calculated. Part II in this two-part series on variance analysis will look at how personal computers can be useful in the variance analysis process. PMID:1870002
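The price, quantity, and volume variances Finkler reviews follow the standard-costing textbook decomposition. A hedged sketch (generic textbook formulas with invented nursing-hours numbers, not Finkler's acuity variance):

```python
def flexible_budget_variances(actual_qty, actual_price, std_qty_per_unit,
                              std_price, budgeted_volume, actual_volume):
    """Classic flexible-budget decomposition for a single input:
    price variance    = (actual price - standard price) * actual quantity
    quantity variance = (actual qty - flexible-budget qty) * standard price
    volume variance   = (actual vol - budgeted vol) * std qty/unit * std price
    """
    flexible_qty = std_qty_per_unit * actual_volume
    price_var = (actual_price - std_price) * actual_qty
    qty_var = (actual_qty - flexible_qty) * std_price
    vol_var = (actual_volume - budgeted_volume) * std_qty_per_unit * std_price
    return price_var, qty_var, vol_var

# Hypothetical example: 2.0 nursing hours per patient-day at a $30 standard rate;
# 110 patient-days occurred vs 100 budgeted; 230 hours actually worked at $32.
print(flexible_budget_variances(230, 32.0, 2.0, 30.0, 100, 110))
```

Positive values here are unfavorable for costs: the unit paid more per hour, used more hours than the flexible budget allowed, and served more volume than budgeted.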
40 CFR 52.2183 - Variance provision.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions in Chapter 74:26:01:31.01 of the South Dakota Air...
40 CFR 52.2183 - Variance provision.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions in Chapter 74:26:01:31.01 of the South Dakota Air...
40 CFR 52.2183 - Variance provision.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions in Chapter 74:26:01:31.01 of the South Dakota Air...
Speed Variance and Its Influence on Accidents.
ERIC Educational Resources Information Center
Garber, Nicholas J.; Gadirau, Ravi
A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) RADIOLOGICAL HEALTH PERFORMANCE STANDARDS FOR ELECTRONIC PRODUCTS: GENERAL General Provisions § 1010.4 Variances. (a) Criteria for variances. (1) Upon application by...
40 CFR 52.2183 - Variance provision.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 4 2010-07-01 2010-07-01 false Variance provision. 52.2183 Section 52...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions in Chapter 74:26:01:31.01 of the South Dakota Air...
40 CFR 142.41 - Variance request.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Variance request. 142.41 Section 142...) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of...
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application..., practices, means, methods, operations, or processes used or proposed to be used by the contractor; and...
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application..., practices, means, methods, operations, or processes used or proposed to be used by the contractor; and...
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 4 2013-01-01 2013-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application..., practices, means, methods, operations, or processes used or proposed to be used by the contractor; and...
10 CFR 851.31 - Variance process.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application..., practices, means, methods, operations, or processes used or proposed to be used by the contractor; and...
40 CFR 52.2183 - Variance provision.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 4 2011-07-01 2011-07-01 false Variance provision. 52.2183 Section 52.2183 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions...
Minimum variance beamformer weights revisited.
Moiseev, Alexander; Doesburg, Sam M; Grunau, Ruth E; Ribary, Urs
2015-10-15
Adaptive minimum variance beamformers are widely used analysis tools in MEG and EEG. When the target brain activity presents in the form of spatially localized responses, the procedure usually involves two steps. First, positions and orientations of the sources of interest are determined. Second, the filter weights are calculated and source time courses reconstructed. This last step is the object of the current study. Despite different approaches utilized at the source localization stage, basic expressions for the weights have the same form, dictated by the minimum variance condition. These classic expressions involve covariance matrix of the measured field, which includes contributions from both the sources of interest and the noise background. We show analytically that the same weights can alternatively be obtained, if the full field covariance is replaced with that of the noise, provided the beamformer points to the true sources precisely. In practice, however, a certain mismatch is always inevitable. We show that such mismatch results in partial suppression of the true sources if the traditional weights are used. To avoid this effect, the "alternative" weights based on properly estimated noise covariance should be applied at the second, source time course reconstruction step. We demonstrate mathematically and using simulated and real data that in many situations the alternative weights provide significantly better time course reconstruction quality than the traditional ones. In particular, they a) improve source-level SNR and yield more accurately reconstructed waveforms; b) provide more accurate estimates of inter-source correlations; and c) reduce the adverse influence of the source correlations on the performance of single-source beamformers, which are used most often. Importantly, the alternative weights come at no additional computational cost, as the structure of the expressions remains the same. PMID:26143207
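The central analytic claim above, that the weights are identical whether built from the full data covariance or the noise covariance when the leadfield matches the true source exactly, can be verified numerically. A sketch using the classic single-source minimum-variance weights w = C^{-1}L / (L^T C^{-1} L); the simulated covariances and leadfield are illustrative, not from the paper:

```python
import numpy as np

def mv_weights(cov, L):
    """Minimum-variance beamformer weights w = C^{-1} L / (L^T C^{-1} L)
    for a single source with leadfield (forward) vector L."""
    CiL = np.linalg.solve(cov, L)
    return CiL / (L @ CiL)

rng = np.random.default_rng(0)
n_ch = 6
A = rng.standard_normal((n_ch, n_ch))
noise_cov = A @ A.T + n_ch * np.eye(n_ch)    # SPD background-noise covariance
L = rng.standard_normal(n_ch)                # leadfield of the true source
data_cov = noise_cov + 4.0 * np.outer(L, L)  # data = noise + source power

w_data = mv_weights(data_cov, L)    # traditional: full data covariance
w_noise = mv_weights(noise_cov, L)  # alternative: noise-only covariance
print(np.allclose(w_data, w_noise))  # True when L matches the true source exactly
```

The equality follows from the Sherman-Morrison identity; the paper's point is that under leadfield mismatch the two choices diverge, and the noise-covariance weights behave better.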
All AdS7 solutions of type II supergravity
NASA Astrophysics Data System (ADS)
Apruzzi, Fabio; Fazzi, Marco; Rosa, Dario; Tomasiello, Alessandro
2014-04-01
In M-theory, the only AdS7 supersymmetric solutions are AdS7 × S4 and its orbifolds. In this paper, we find and classify new supersymmetric solutions of the type AdS7 × M3 in type II supergravity. While in IIB none exist, in IIA with Romans mass (which does not lift to M-theory) there are many new ones. We use a pure spinor approach reminiscent of generalized complex geometry. Without the need for any Ansatz, the system determines uniquely the form of the metric and fluxes, up to solving a system of ODEs. Namely, the metric on M3 is that of an S2 fibered over an interval; this is consistent with the Sp(1) R-symmetry of the holographically dual (1,0) theory. By including D8 brane sources, one can numerically obtain regular solutions, where topologically M3 ≅ S3.
Alzheimer's disease: Unique markers for diagnosis & new treatment modalities
Aggarwal, Neelum T.; Shah, Raj C.; Bennett, David A.
2015-01-01
Alzheimer's disease (AD) is a progressive and irreversible neurodegenerative disease. In humans, AD becomes symptomatic only after brain changes occur over years or decades. Three contiguous phases of AD have been proposed: (i) the AD pathophysiologic process, (ii) mild cognitive impairment due to AD, and (iii) AD dementia. Intensive research continues around the world on unique diagnostic markers and interventions associated with each phase of AD. In this review, we summarize the available evidence and new therapeutic approaches that target both amyloid and tau pathology in AD and discuss the biomarkers and pharmaceutical interventions available and in development for each AD phase. PMID:26609028
NASA Astrophysics Data System (ADS)
Martelli, Dario; Morales, Jose F.
2005-02-01
In the light of the recent Lin, Lunin, Maldacena (LLM) results, we investigate 1/2-BPS geometries in minimal (and next to minimal) supergravity in D = 6 dimensions. In the case of minimal supergravity, solutions are given by fibrations of a two-torus T2 specified by two harmonic functions. For a rectangular torus the two functions are related by a non-linear equation with rare solutions: AdS3 × S3, the pp-wave and the multi-center string. "Bubbling", i.e. superpositions of droplets, is accommodated by allowing the complex structure of the T2 to vary over the base. The analysis is repeated in the presence of a tensor multiplet and similar conclusions are reached, with generic solutions describing D1D5 (or their dual fundamental string-momentum) systems. In this framework, the profile of the dual fundamental string-momentum system is identified with the boundaries of the droplets in a two-dimensional plane.
Creativity and technical innovation: spatial ability's unique role.
Kell, Harrison J; Lubinski, David; Benbow, Camilla P; Steiger, James H
2013-09-01
In the late 1970s, 563 intellectually talented 13-year-olds (identified by the SAT as in the top 0.5% of ability) were assessed on spatial ability. More than 30 years later, the present study evaluated whether spatial ability provided incremental validity (beyond the SAT's mathematical and verbal reasoning subtests) for differentially predicting which of these individuals had patents and three classes of refereed publications. A two-step discriminant-function analysis revealed that the SAT subtests jointly accounted for 10.8% of the variance among these outcomes (p < .01); when spatial ability was added, an additional 7.6% was accounted for--a statistically significant increase (p < .01). The findings indicate that spatial ability has a unique role in the development of creativity, beyond the roles played by the abilities traditionally measured in educational selection, counseling, and industrial-organizational psychology. Spatial ability plays a key and unique role in structuring many important psychological phenomena and should be examined more broadly across the applied and basic psychological sciences. PMID:23846718
Enhancing area of review capabilities: Implementing a variance program
De Leon, F.
1995-12-01
The Railroad Commission of Texas (RRC) has regulated oil-field injection well operations since issuing its first injection permit in 1938. The Environmental Protection Agency (EPA) granted the RRC primary enforcement responsibility for the Class II Underground Injection Control (UIC) Program in April 1982. At that time, the added level of groundwater protection afforded by an Area of Review (AOR) on previously permitted Class II wells was not deemed necessary or cost effective. A proposed EPA rule change will require AORs to be performed on all pre-primacy Class II wells unless a variance can be justified. A variance methodology has been developed by researchers at the University of Missouri-Rolla in conjunction with the American Petroleum Institute (API). This paper will outline the RRC approach to implementing the AOR variance methodology. The RRC's UIC program tracks 49,256 pre-primacy wells. Approximately 25,598 of these wells have active permits and will be subject to the proposed AOR requirements. The potential workload of performing AORs or granting variances for this many wells makes the development of a Geographic Information System (GIS) imperative. The RRC has recently completed a digitized map of the entire state and has spotted 890,000 of an estimated 1.2 million wells. Integrating this digital state map into a GIS will allow the RRC to tie its many data systems together. Once in place, this integrated data system will be used to evaluate AOR variances for pre-primacy wells on a field-wide basis. It will also reduce the regulatory cost of permitting by allowing the RRC staff to perform AORs or grant variances for the approximately 3,000 new and amended permit applications requiring AORs each year.
Using variances to comply with resource conservation and recovery act treatment standards.
Ranek, N. L.; Environmental Assessment
2002-10-01
When a waste generated, treated, or disposed of at a site in the United States is classified as hazardous under the Resource Conservation and Recovery Act and is destined for land disposal, the waste manager responsible for that site must select an approach to comply with land disposal restrictions (LDR) treatment standards. This paper focuses on the approach of obtaining a variance from existing, applicable LDR treatment standards. It describes the types of available variances, which include (1) determination of equivalent treatment (DET); (2) treatability variance; and (3) treatment variance for contaminated soil. The process for obtaining each type of variance is also described. Data are presented showing that historically the U.S. Environmental Protection Agency (EPA) processed DET petitions within one year of their date of submission. However, a 1999 EPA policy change added public participation to the DET petition review, which may lengthen processing time in the future. Regarding site-specific treatability variances, data are presented showing an EPA processing time of between 10 and 16 months. Only one generically applicable treatability variance has been granted, which took 30 months to process. No treatment variances for contaminated soil, which were added to the federal LDR program in 1998, are identified as having been granted.
Infinite variance in fermion quantum Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Shi, Hao; Zhang, Shiwei
2016-03-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms; it adds a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
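The infinite-variance phenomenon is easy to reproduce outside QMC. A minimal toy sketch (an illustration, not the authors' bridge-link method): estimating the finite integral of f(u) = u^(-3/4) over (0, 1) with the estimator f(U) gives a finite mean but an infinite second moment, so the sample standard deviation, and with it the usual Monte Carlo error bar, never stabilizes as the sample grows. Evaluating on a deterministic midpoint grid makes the divergence reproducible:

```python
import numpy as np

def grid_mean_std(n):
    """Mean and std of f(u) = u**-0.75 on a midpoint grid over (0, 1).

    The integral of f is finite (equal to 4), but the integral of f**2
    diverges, so the 'sample' std grows without bound as n increases.
    """
    u = (np.arange(n) + 0.5) / n
    f = u ** -0.75
    return f.mean(), f.std()

m_small, s_small = grid_mean_std(10**3)
m_large, s_large = grid_mean_std(10**6)

# The mean estimate settles near 4, but the std keeps growing:
# an error bar built from it would be meaningless.
print(m_small, s_small)
print(m_large, s_large)
```

The same mechanism, a finite estimator mean with a diverging second moment, is what invalidates the naive error bar in the QMC setting.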
Simulation testing of unbiasedness of variance estimators
Link, W.A.
1993-01-01
In this article I address the evaluation of estimators of variance for parameter estimates. Given an unbiased estimator X of a parameter θ, and an estimator V of the variance of X, how does one test (via simulation) whether V is an unbiased estimator of the variance of X? The derivation of the test statistic illustrates the need for care in substituting consistent estimators for unknown parameters.
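The simulation test described can be sketched as follows (a generic illustration, not Link's specific test statistic): take X as the mean of n normal draws and V = s²/n as its variance estimator, replicate many times, and compare the average of V to the empirical variance of X across replicates.

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps, sigma = 10, 20_000, 1.0

# Each replicate: X = sample mean, V = s^2/n (estimator of Var(X) = sigma^2/n).
data = rng.normal(0.0, sigma, size=(reps, n))
X = data.mean(axis=1)
V = data.var(axis=1, ddof=1) / n

print(V.mean())          # average of the variance estimator
print(X.var(ddof=1))     # empirical variance of X across replicates
print(sigma**2 / n)      # true Var(X) = 0.1
```

If V is unbiased, the first two numbers agree up to simulation noise; quantifying "up to simulation noise" rigorously is exactly where the care discussed in the article is needed.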
Code of Federal Regulations, 2014 CFR
2014-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2010 CFR
2010-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2011 CFR
2011-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2013 CFR
2013-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
Code of Federal Regulations, 2012 CFR
2012-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...
NASA Technical Reports Server (NTRS)
Lakshminarayanan, M. Y.; Gunst, R. F.
1984-01-01
Maximum likelihood estimation of parameters in linear structural relationships under normality assumptions requires knowledge of one or more of the model parameters if no replication is available. The most common assumption added to the model definition is that the ratio of the error variances of the response and predictor variates is known. The use of asymptotic formulae for variances and mean squared errors as a function of sample size and the assumed value for the error variance ratio is investigated.
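When the error variance ratio λ = Var(response error)/Var(predictor error) is assumed known, the ML slope of the linear structural relationship has a closed form (commonly known as Deming regression). A minimal sketch with hypothetical noise-free data:

```python
import numpy as np

def deming_slope(x, y, lam=1.0):
    """ML slope of a linear structural relationship, assuming the ratio
    lam of error variances (response/predictor) is known."""
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    d = syy - lam * sxx
    return (d + np.sqrt(d**2 + 4 * lam * sxy**2)) / (2 * sxy)

x = np.arange(1.0, 11.0)
y = 2.0 * x + 1.0                  # exact line: slope 2, intercept 1
b = deming_slope(x, y, lam=1.0)
a = y.mean() - b * x.mean()
print(b, a)                        # recovers slope 2, intercept 1
```

The asymptotic variance formulae studied in the record above describe how the sampling variability of this slope depends on the sample size and on the assumed value of λ.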
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Variances. 654.402 Section 654.402 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR SPECIAL RESPONSIBILITIES OF THE EMPLOYMENT SERVICE SYSTEM Housing for Agricultural Workers Purpose and Applicability § 654.402 Variances. (a) An employer may apply for a...
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Variances. 654.402 Section 654.402 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR SPECIAL RESPONSIBILITIES OF THE EMPLOYMENT SERVICE SYSTEM Housing for Agricultural Workers Purpose and Applicability § 654.402 Variances....
40 CFR 142.41 - Variance request.
Code of Federal Regulations, 2012 CFR
2012-07-01
... nature and duration of variance requested. (b) Relevant analytical results of water quality sampling of... relevant to ability to comply. (3) Analytical results of raw water quality relevant to the variance request... request made under § 142.40(b), a statement that the system will perform monitoring and other...
40 CFR 142.41 - Variance request.
Code of Federal Regulations, 2013 CFR
2013-07-01
... nature and duration of variance requested. (b) Relevant analytical results of water quality sampling of... relevant to ability to comply. (3) Analytical results of raw water quality relevant to the variance request... request made under § 142.40(b), a statement that the system will perform monitoring and other...
40 CFR 142.41 - Variance request.
Code of Federal Regulations, 2014 CFR
2014-07-01
... nature and duration of variance requested. (b) Relevant analytical results of water quality sampling of... relevant to ability to comply. (3) Analytical results of raw water quality relevant to the variance request... request made under § 142.40(b), a statement that the system will perform monitoring and other...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) RADIOLOGICAL HEALTH... and Radiological Health, Food and Drug Administration, may grant a variance from one or...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) RADIOLOGICAL HEALTH... and Radiological Health, Food and Drug Administration, may grant a variance from one or...
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) RADIOLOGICAL HEALTH... and Radiological Health, Food and Drug Administration, may grant a variance from one or...
On Some Representations of Sample Variance
ERIC Educational Resources Information Center
Joarder, Anwar H.
2002-01-01
The usual formula for variance depending on rounding off the sample mean lacks precision, especially when computer programs are used for the calculation. The well-known simplification of the total sums of squares does not always give benefit. Since the variance of two observations is easily calculated without the use of a sample mean, and the…
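One mean-free representation alluded to above, shown here as a sketch: the sample variance equals the average squared pairwise difference, s² = Σ_{i<j}(x_i − x_j)² / (n(n−1)), which never touches the (possibly rounded) sample mean.

```python
import numpy as np

def pairwise_variance(x):
    """Sample variance (ddof=1) computed from pairwise differences,
    avoiding the sample mean entirely."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    diffs = x[:, None] - x[None, :]            # all ordered pairs x_i - x_j
    return (diffs ** 2).sum() / (2 * n * (n - 1))

x = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(pairwise_variance(x))        # matches np.var(x, ddof=1)
```

Summing over all ordered pairs double-counts each unordered pair, hence the extra factor of 2 in the denominator.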
Code of Federal Regulations, 2010 CFR
2010-01-01
... Procedures § 1021.343 Variances. (a) Emergency actions. DOE may take an action without observing all provisions of this part or the CEQ Regulations, in accordance with 40 CFR 1506.11, in emergency situations... 10 Energy 4 2010-01-01 2010-01-01 false Variances. 1021.343 Section 1021.343 Energy DEPARTMENT...
Code of Federal Regulations, 2010 CFR
2010-04-01
... 18 Conservation of Power and Water Resources 2 2010-04-01 2010-04-01 false Variances. 1304.408 Section 1304.408 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY APPROVAL OF... § 1304.408 Variances. The Vice President or the designee thereof is authorized, following...
Nonlinear Epigenetic Variance: Review and Simulations
ERIC Educational Resources Information Center
Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.
2010-01-01
We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…
Portfolio optimization with mean-variance model
NASA Astrophysics Data System (ADS)
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed in portfolio optimization. The mean-variance model is an optimization model that aims to minimize the portfolio risk, which is the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the composition of stocks differs across the constructed portfolios. Moreover, investors can obtain the return at the minimum level of risk with the constructed optimal mean-variance portfolio.
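The minimum-risk corner of the mean-variance model has a closed form: with covariance matrix Σ and budget constraint 1ᵀw = 1, the global minimum-variance weights are w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A sketch with synthetic weekly returns (not the FBMKLCI data used in the study):

```python
import numpy as np

rng = np.random.default_rng(7)
n_assets, n_weeks = 5, 200
R = rng.normal(0.002, 0.02, size=(n_weeks, n_assets))  # synthetic weekly returns

Sigma = np.cov(R, rowvar=False)
ones = np.ones(n_assets)
w = np.linalg.solve(Sigma, ones)
w /= ones @ w                          # global minimum-variance weights

port_var = w @ Sigma @ w
print(w, w.sum())                      # weights sum to 1
print(port_var, np.diag(Sigma).min())  # portfolio variance <= any single asset
```

Adding a target-return constraint turns this into the standard two-constraint mean-variance quadratic program, but the closed form above already illustrates the diversification effect: the optimized portfolio is never riskier than the least-risky single stock.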
Component Processes in Reading: Shared and Unique Variance in Serial and Isolated Naming Speed
ERIC Educational Resources Information Center
Logan, Jessica A. R.; Schatschneider, Christopher
2014-01-01
Reading ability is comprised of several component processes. In particular, the connection between the visual and verbal systems has been demonstrated to play an important role in the reading process. The present study provides a review of the existing literature on the visual verbal connection as measured by two tasks, rapid serial naming and…
ERIC Educational Resources Information Center
Brotheridge, Celeste M.; Power, Jacqueline L.
2008-01-01
Purpose: This study seeks to examine the extent to which the use of career center services results in the significant incremental prediction of career outcomes beyond its established predictors. Design/methodology/approach: The authors survey the clients of a public agency's career center and use hierarchical multiple regressions in order to…
Portfolio optimization using median-variance approach
NASA Astrophysics Data System (ADS)
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied in many decision-making problems particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR) mainly to maximize return and minimize risk. However most of the approaches assume that the distribution of data is normal and this is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve the portfolio optimization. This approach has successfully catered both types of normal and non-normal distribution of data. With this actual representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolio which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable to produce a lower risk for each return earning as compared to the mean-variance approach.
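A hedged sketch of the median-variance idea (the paper's exact formulation may differ): for a two-asset portfolio, grid-search the weight, keep weights whose portfolio return series has a median of at least the target, and take the feasible weight with minimum variance. Because the median is not linear, it must be evaluated on the mixed return series itself rather than combined from per-asset medians.

```python
import numpy as np

rng = np.random.default_rng(3)
r1 = 0.010 + 0.002 * rng.standard_normal(250)   # low-risk asset
r2 = 0.020 + 0.030 * rng.standard_normal(250)   # high-risk, higher-median asset
target = 0.012                                   # required median return

best_w, best_var = None, np.inf
for w in np.linspace(0.0, 1.0, 101):
    rp = w * r1 + (1 - w) * r2                   # portfolio return series
    if np.median(rp) >= target and rp.var(ddof=1) < best_var:
        best_w, best_var = w, rp.var(ddof=1)

print(best_w, best_var)
```

The median constraint makes the feasible set depend on the whole empirical distribution, which is what lets the approach cope with non-normal return data.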
Code of Federal Regulations, 2011 CFR
2011-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2014 CFR
2014-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2010 CFR
2010-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2013 CFR
2013-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Code of Federal Regulations, 2012 CFR
2012-07-01
... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...
Reducing variance in batch partitioning measurements
Mariner, Paul E.
2010-08-11
The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure K_d values (e.g., ASTM D 4646 and EPA 402-R-99-004A) do not explain how to evaluate measurement uncertainty nor how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjustment of the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
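The effect is easy to reproduce with a toy error model (an illustration, not the paper's own Monte Carlo): with additive measurement noise on the initial and final concentrations, the relative scatter of the partitioning ratio K_d ∝ (C0 − C)/C blows up when almost none, or almost all, of the sorbate is sorbed, and is smallest near 50% sorbed.

```python
import numpy as np

rng = np.random.default_rng(1)
C0_true, sigma, n = 100.0, 0.5, 20_000   # true initial conc., additive noise sd

def kd_rel_spread(frac_sorbed):
    """Relative spread of the partitioning ratio (C0 - C)/C under
    additive Gaussian noise on both measured concentrations."""
    C_true = C0_true * (1.0 - frac_sorbed)
    C0m = C0_true + rng.normal(0.0, sigma, n)
    Cm = C_true + rng.normal(0.0, sigma, n)
    ratio = (C0m - Cm) / Cm
    return ratio.std(ddof=1) / ratio.mean()

print(kd_rel_spread(0.05))   # little sorbed: noisy
print(kd_rel_spread(0.50))   # half sorbed: most precise
print(kd_rel_spread(0.95))   # nearly all sorbed: noisy again
```

Adjusting the solution:sorbent ratio so that roughly half the sorbate partitions puts the experiment at the low point of this error curve.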
Variance anisotropy in compressible 3-D MHD
NASA Astrophysics Data System (ADS)
Oughton, S.; Matthaeus, W. H.; Wan, Minping; Parashar, Tulasi
2016-06-01
We employ spectral method numerical simulations to examine the dynamical development of anisotropy of the variance, or polarization, of the magnetic and velocity field in compressible magnetohydrodynamic (MHD) turbulence. Both variance anisotropy and spectral anisotropy emerge under influence of a large-scale mean magnetic field B0; these are distinct effects, although sometimes related. Here we examine the appearance of variance parallel to B0, when starting from a highly anisotropic state. The discussion is based on a turbulence theoretic approach rather than a wave perspective. We find that parallel variance emerges over several characteristic nonlinear times, often attaining a quasi-steady level that depends on plasma beta. Consistency with solar wind observations seems to occur when the initial state is dominated by quasi-two-dimensional fluctuations.
Another Line for the Analysis of Variance
ERIC Educational Resources Information Center
Brown, Bruce L.; Harshbarger, Thad R.
1976-01-01
A test is developed for hypotheses about the grand mean in the analysis of variance, using the known relationship between the t distribution and the F distribution with 1 df (degree of freedom) for the numerator. (Author/RC)
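The relationship can be checked numerically: for a test of the grand mean against zero, the squared one-sample t statistic equals the F statistic with 1 numerator df, since t² = n·x̄²/s² = F.

```python
import numpy as np

x = np.array([4.1, 5.3, 3.8, 6.0, 5.2, 4.7, 5.5, 4.9])  # hypothetical scores
n = len(x)

t_stat = x.mean() / (x.std(ddof=1) / np.sqrt(n))   # H0: grand mean = 0
F_stat = n * x.mean() ** 2 / x.var(ddof=1)         # F with (1, n-1) df

print(t_stat ** 2, F_stat)    # the two statistics coincide
```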
Nonorthogonal Analysis of Variance Programs: An Evaluation.
ERIC Educational Resources Information Center
Hosking, James D.; Hamer, Robert M.
1979-01-01
Six computer programs for four methods of nonorthogonal analysis of variance are compared for capabilities, accuracy, cost, transportability, quality of documentation, associated computational capabilities, and ease of use: OSIRIS; SAS; SPSS; MANOVA; BMDP2V; and MULTIVARIANCE. (CTM)
Variational bayesian method of estimating variance components.
Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi
2016-07-01
We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and low population size, and less bias was detected with larger population sizes in both methods examined. The differences in the estimates of variance components between the variational Bayesian and the Gibbs sampling were not found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with the Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances of the variational Bayesian method were lower than those of the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling. PMID:26877207
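A minimal sketch of the variational Bayesian machinery on the simplest variance model (a single Gaussian with unknown mean and precision, not the genetic model of the paper): factorize q(μ, τ) = q(μ)q(τ) under a Normal-Gamma prior and iterate the coordinate-ascent updates.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(5.0, 2.0, size=2000)        # true variance 4, precision 0.25
N, xbar, sx2 = len(x), x.mean(), np.sum(x**2)

# Weak priors: mu ~ N(mu0, (lam0*tau)^-1), tau ~ Gamma(a0, b0).
mu0, lam0, a0, b0 = 0.0, 1e-3, 1e-3, 1e-3

mu_N = (lam0 * mu0 + N * xbar) / (lam0 + N)   # q(mu) mean (does not change)
a_N = a0 + (N + 1) / 2                        # q(tau) shape (does not change)
E_tau = 1.0
for _ in range(50):                           # coordinate ascent
    lam_N = (lam0 + N) * E_tau                # q(mu) precision
    E_mu, E_mu2 = mu_N, mu_N**2 + 1.0 / lam_N
    b_N = b0 + 0.5 * (sx2 - 2 * x.sum() * E_mu + N * E_mu2
                      + lam0 * (E_mu2 - 2 * mu0 * E_mu + mu0**2))
    E_tau = a_N / b_N                         # posterior mean precision

print(E_tau)            # close to the true precision 0.25
print(1.0 / E_tau)      # implied residual variance estimate, close to 4
```

The same coordinate-ascent pattern, with q factors for each variance component, underlies variational estimation in mixed models; Gibbs sampling replaces the deterministic updates with draws from the full conditionals.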
NASA Astrophysics Data System (ADS)
Costa, Miguel S.; Greenspan, Lauren; Oliveira, Miguel; Penedones, João; Santos, Jorge E.
2016-06-01
We consider solutions in Einstein-Maxwell theory with a negative cosmological constant that asymptote to global AdS₄ with conformal boundary S² × ℝ_t. At the sphere at infinity we turn on a space-dependent electrostatic potential, which does not destroy the asymptotic AdS behaviour. For simplicity we focus on the case of a dipolar electrostatic potential. We find two new geometries: (i) an AdS soliton that includes the full backreaction of the electric field on the AdS geometry; (ii) a polarised neutral black hole that is deformed by the electric field, accumulating opposite charges in each hemisphere. For both geometries we study boundary data such as the charge density and the stress tensor. For the black hole we also study the horizon charge density and area, and further verify a Smarr formula. Then we consider this system at finite temperature and compute the Gibbs free energy for both AdS soliton and black hole phases. The corresponding phase diagram generalizes the Hawking-Page phase transition. The AdS soliton dominates the low temperature phase and the black hole the high temperature phase, with a critical temperature that decreases as the external electric field increases. Finally, we consider the simple case of a free charged scalar field on S² × ℝ_t with conformal coupling. For a field in the SU(N) adjoint representation we compare the phase diagram with the above gravitational system.
GR uniqueness and deformations
NASA Astrophysics Data System (ADS)
Krasnov, Kirill
2015-10-01
In the metric formulation gravitons are described with the parity-symmetric S₊² ⊗ S₋² representation of the Lorentz group. General Relativity is then the unique theory of interacting gravitons with second order field equations. We show that if a chiral S₊³ ⊗ S₋ representation is used instead, the uniqueness is lost, and there is an infinite-parametric family of theories of interacting gravitons with second order field equations. We use the language of graviton scattering amplitudes, and show how the uniqueness of GR is avoided using simple dimensional analysis. The resulting gravity theories, distinct from GR, are all parity asymmetric, but share the GR MHV amplitudes. They have new all-same-helicity graviton scattering amplitudes at every graviton order. The amplitudes with at least one graviton of opposite helicity continue to be determinable by the BCFW recursion.
Schumpe, Birga Mareen; Erb, Hans-Peter
2015-01-01
A defining force in the shaping of human identity is a person's need to feel special and different from others. Psychologists term this motivation Need for Uniqueness (NfU). There are manifold ways to establish feelings of uniqueness, e.g., by showing unusual consumption behaviour or by not conforming to majority views. The NfU can be seen as a stable personality trait, that is, individuals differ in their dispositional need to feel unique. The NfU is also influenced by situational factors and social environments. The cultural context is one important social setting shaping the NfU. This article aims to illuminate the NfU from a social psychological perspective. PMID:25942772
Discrimination of frequency variance for tonal sequences
Byrne, Andrew J.; Viemeister, Neal F.; Stellmack, Mark A.
2014-01-01
Real-world auditory stimuli are highly variable across occurrences and sources. The present study examined the sensitivity of human listeners to differences in global stimulus variability. In a two-interval, forced-choice task, variance discrimination was measured using sequences of five 100-ms tone pulses. The frequency of each pulse was sampled randomly from a distribution that was Gaussian in logarithmic frequency. In the non-signal interval, the sampled distribution had a variance of σ²_STAN, while in the signal interval, the variance of the sequence was σ²_SIG (with σ²_SIG > σ²_STAN). The listener's task was to choose the interval with the larger variance. To constrain possible decision strategies, the mean frequency of the sampling distribution of each interval was randomly chosen for each presentation. Psychometric functions were measured for various values of σ²_STAN. Although the performance was remarkably similar across listeners, overall performance was poorer than that of an ideal observer (IO) which perfectly compares interval variances. However, like the IO, Weber's Law behavior was observed, with a constant ratio of (σ²_SIG − σ²_STAN) to σ²_STAN yielding similar performance. A model which degraded the IO with a frequency-resolution noise and a computational noise provided a reasonable fit to the real data. PMID:25480064
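The ideal-observer benchmark for this task is straightforward to simulate (a sketch of the IO only, not the degraded model fitted in the paper): draw five log-frequency values per interval, compute each interval's sample variance, and choose the larger. Roving the mean per interval removes any cue other than variance.

```python
import numpy as np

rng = np.random.default_rng(11)

def io_percent_correct(var_stan, var_sig, n_tones=5, n_trials=20_000):
    """Ideal observer that picks the interval with the larger sample variance.
    Means are randomized per interval, so only variance is informative."""
    mu = rng.uniform(2.0, 3.0, size=(2, n_trials, 1))   # roving log-freq mean
    stan = mu[0] + np.sqrt(var_stan) * rng.standard_normal((n_trials, n_tones))
    sig = mu[1] + np.sqrt(var_sig) * rng.standard_normal((n_trials, n_tones))
    correct = sig.var(axis=1, ddof=1) > stan.var(axis=1, ddof=1)
    return correct.mean()

print(io_percent_correct(1.0, 4.0))   # large variance ratio: high accuracy
print(io_percent_correct(1.0, 1.2))   # small ratio: near chance
```

Because both sample variances are scaled chi-square variables with only 4 df, even the IO is far from perfect at small variance ratios, which is the baseline against which the human listeners were compared.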
Relational mate value: consensus and uniqueness in romantic evaluations.
Eastwick, Paul W; Hunt, Lucy L
2014-05-01
Classic evolutionary and social exchange perspectives suggest that some people have more mate value than others because they possess desirable traits (e.g., attractiveness, status) that are intrinsic to the individual. This article broadens mate value in 2 ways to incorporate relational perspectives. First, close relationships research suggests an alternative measure of mate value: whether someone can provide a high quality relationship. Second, person perception research suggests that both trait-based and relationship quality measures of mate value should contain a mixture of target variance (i.e., consensus about targets, the classic conceptualization) and relationship variance (i.e., unique ratings of targets). In Study 1, participants described their personal conceptions of mate value and revealed themes consistent with classic and relational approaches. Study 2 used a social relations model blocked design to assess target and relationship variances in participants' romantic evaluations of opposite-sex classmates at the beginning and end of the semester. In Study 3, a one-with-many design documented target and relationship variances among long-term opposite-sex acquaintances. Results generally revealed more relationship variance than target variance; participants' romantic evaluations were more likely to be unique to a particular person rather than consensual. Furthermore, the relative dominance of relationship to target variance was stronger for relational measures of mate value (i.e., relationship quality projections) than classic trait-based measures (i.e., attractiveness, resources). Finally, consensus decreased as participants got to know one another better, and long-term acquaintances in Study 3 revealed enormous amounts of relationship variance. Implications for the evolutionary, close relationships, and person-perception literatures are discussed. PMID:24611897
Retief, François Pieter; Cilliers, Louise
2011-09-01
Akhenaten was a unique pharaoh in more ways than one. He initiated a major socio-religious revolution that had vast consequences for his country, and possessed a strikingly abnormal physiognomy that was of note in his time and has interested historians up to the present era. In this study, we attempt to identify the developmental disorder responsible for his eunuchoid appearance. PMID:21920162
ERIC Educational Resources Information Center
Goble, Don
2009-01-01
This article describes the many learning opportunities that broadcast technology students at Ladue Horton Watkins High School in St. Louis, Missouri, experience because of their unique access to technology and methods of learning. Through scaffolding, stepladder techniques, and trial by fire, students learn to produce multiple television programs,…
Cross-bispectrum computation and variance estimation
NASA Technical Reports Server (NTRS)
Lii, K. S.; Helland, K. N.
1981-01-01
A method for the estimation of cross-bispectra of discrete real time series is developed. The asymptotic variance properties of the bispectrum are reviewed, and a method for the direct estimation of bispectral variance is given. The symmetry properties are described which minimize the computations necessary to obtain a complete estimate of the cross-bispectrum in the right-half-plane. A procedure is given for computing the cross-bispectrum by subdividing the domain into rectangular averaging regions which help reduce the variance of the estimates and allow easy application of the symmetry relationships to minimize the computational effort. As an example of the procedure, the cross-bispectrum of a numerically generated, exponentially distributed time series is computed and compared with theory.
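The segment-averaged direct estimate described can be sketched at a single bifrequency (a minimal auto-bispectrum illustration; the cross-bispectrum replaces the three FFT factors with those of different series): quadratic phase coupling at (k1, k2) survives averaging across segments, while independent phases average toward zero.

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_seg, k1, k2 = 64, 200, 5, 9
t = np.arange(n)

def bispec_point(coupled):
    """Segment-averaged bispectrum estimate at bifrequency (k1, k2)."""
    acc = 0.0 + 0.0j
    for _ in range(n_seg):
        p1, p2 = rng.uniform(0, 2 * np.pi, 2)
        p3 = p1 + p2 if coupled else rng.uniform(0, 2 * np.pi)
        x = (np.cos(2 * np.pi * k1 * t / n + p1)
             + np.cos(2 * np.pi * k2 * t / n + p2)
             + np.cos(2 * np.pi * (k1 + k2) * t / n + p3))
        X = np.fft.fft(x)
        acc += X[k1] * X[k2] * np.conj(X[k1 + k2])
    return abs(acc) / n_seg

print(bispec_point(True))    # phase-coupled: large magnitude, (n/2)**3
print(bispec_point(False))   # independent phases: near zero
```

Averaging over segments is the variance-reduction step discussed in the abstract; the symmetry relations then let one fill the full right-half-plane from a reduced set of such bifrequency estimates.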
Inhomogeneity-induced variance of cosmological parameters
NASA Astrophysics Data System (ADS)
Wiegand, A.; Schwarz, D. J.
2012-02-01
Context. Modern cosmology relies on the assumption of large-scale isotropy and homogeneity of the Universe. However, locally the Universe is inhomogeneous and anisotropic. This raises the question of how local measurements (at the ~10² Mpc scale) can be used to determine the global cosmological parameters (defined at the ~10⁴ Mpc scale). Aims: We connect the questions of cosmological backreaction, cosmic averaging and the estimation of cosmological parameters and show how they relate to the problem of cosmic variance. Methods: We used Buchert's averaging formalism and determined a set of locally averaged cosmological parameters in the context of the flat Λ cold dark matter model. We calculated their ensemble means (i.e. their global value) and variances (i.e. their cosmic variance). We applied our results to typical survey geometries and focused on the study of the effects of local fluctuations of the curvature parameter. Results: We show that in the context of standard cosmology at large scales (larger than the homogeneity scale and in the linear regime), the question of cosmological backreaction and averaging can be reformulated as the question of cosmic variance. The cosmic variance is found to be highest in the curvature parameter. We propose to use the observed variance of cosmological parameters to measure the growth factor. Conclusions: Cosmological backreaction and averaging are real effects that have been measured already for a long time, e.g. by the fluctuations of the matter density contrast averaged over spheres of a certain radius. Backreaction and averaging effects from scales in the linear regime, as considered in this work, are shown to be important for the precise measurement of cosmological parameters.
Integrating Variances into an Analytical Database
NASA Technical Reports Server (NTRS)
Sanchez, Carlos
2010-01-01
For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.
ERIC Educational Resources Information Center
Yetkiner, Zeynep Ebrar
2009-01-01
Commonality analysis is a method of partitioning variance to determine the predictive ability unique to each predictor (or predictor set) and common to two or more of the predictors (or predictor sets). The purposes of the present paper are to (a) explain commonality analysis in a multiple regression context as an alternative for middle grades…
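For two predictors, the partition that commonality analysis performs can be computed directly from three regression R-squared values. A minimal sketch in Python (the helper names are illustrative, not from the paper):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit (intercept included)."""
    M = np.column_stack([np.ones(len(y))] + [np.asarray(x, float) for x in X])
    beta, *_ = np.linalg.lstsq(M, y, rcond=None)
    resid = y - M @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def commonality_two_predictors(x1, x2, y):
    """Partition R^2 into parts unique to x1, unique to x2, and common to both."""
    r2_full = r_squared([x1, x2], y)
    r2_1 = r_squared([x1], y)
    r2_2 = r_squared([x2], y)
    unique1 = r2_full - r2_2   # what x1 adds to a model already containing x2
    unique2 = r2_full - r2_1   # what x2 adds to a model already containing x1
    common = r2_1 + r2_2 - r2_full
    return unique1, unique2, common
```

By construction the three components always sum to the full-model R-squared, which is what makes the decomposition interpretable.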
Wave propagation analysis using the variance matrix.
Sharma, Richa; Ivan, J Solomon; Narayanamurthy, C S
2014-10-01
The propagation of a coherent laser wave-field through a pseudo-random phase plate is studied using the variance matrix estimated from Shack-Hartmann wavefront sensor data. The uncertainty principle is used as a tool in discriminating the data obtained from the Shack-Hartmann wavefront sensor. Quantities of physical interest such as the twist parameter, and the symplectic eigenvalues, are estimated from the wavefront sensor measurements. A distance measure between two variance matrices is introduced and used to estimate the spatial asymmetry of a wave-field in the experiment. The estimated quantities are then used to compare a distorted wave-field with its undistorted counterpart. PMID:25401243
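The symplectic eigenvalues and the uncertainty-principle check mentioned in this abstract can be illustrated for a two-mode variance matrix. A sketch assuming the convention [x, p] = i (so every symplectic eigenvalue of a physical state is at least 1/2); the function names are hypothetical, not from the paper:

```python
import numpy as np

# Two-mode symplectic form in (x1, p1, x2, p2) ordering. With [x, p] = i,
# the vacuum variance matrix is 0.5 * I and the uncertainty principle
# reads: every symplectic eigenvalue nu_k >= 0.5.
OMEGA = np.kron(np.eye(2), np.array([[0.0, 1.0], [-1.0, 0.0]]))

def symplectic_eigenvalues(V):
    """Symplectic eigenvalues of a 4x4 variance matrix V. The eigenvalues
    of i*Omega*V come in +/-nu_k pairs; return one representative of each."""
    nu = np.sort(np.abs(np.linalg.eigvals(1j * OMEGA @ V)))
    return nu[::2]

def satisfies_uncertainty(V, tol=1e-9):
    """Check nu_k >= 1/2 for all modes (hbar = 1)."""
    return bool(np.all(symplectic_eigenvalues(V) >= 0.5 - tol))
```

A variance matrix estimated from wavefront-sensor data that fails this check can be flagged as unphysical, which is the discrimination role the abstract assigns to the uncertainty principle.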
Variance in binary stellar population synthesis
NASA Astrophysics Data System (ADS)
Breivik, Katelyn; Larson, Shane L.
2016-03-01
In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.
Decomposition of Variance for Spatial Cox Processes
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
2012-01-01
Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees. PMID:23599558
Variance Reduction Using Nonreversible Langevin Samplers
NASA Astrophysics Data System (ADS)
Duncan, A. B.; Lelièvre, T.; Pavliotis, G. A.
2016-05-01
A standard approach to computing expectations with respect to a given target measure is to introduce an overdamped Langevin equation which is reversible with respect to the target distribution, and to approximate the expectation by a time-averaging estimator. As has been noted in recent papers [30, 37, 61, 72], introducing an appropriately chosen nonreversible component to the dynamics is beneficial, both in terms of reducing the asymptotic variance and of speeding up convergence to the target distribution. In this paper we present a detailed study of the dependence of the asymptotic variance on the deviation from reversibility. Our theoretical findings are supported by numerical simulations.
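The idea can be sketched numerically: adding an antisymmetric (divergence-free) drift to an overdamped Langevin discretisation leaves a Gaussian target invariant while breaking reversibility. The Euler-Maruyama toy below is an illustration of the setup, not the paper's construction:

```python
import numpy as np

def nonreversible_langevin(gamma, n_steps=100_000, dt=0.05, seed=1):
    """Euler-Maruyama discretisation of dX = -(I + gamma*J) grad U dt + sqrt(2) dW
    for U(x) = |x|^2 / 2 (standard 2-D Gaussian target). J is antisymmetric,
    so the extra drift circulates along level sets of U and preserves the
    target distribution; gamma = 0 recovers the reversible dynamics."""
    J = np.array([[0.0, 1.0], [-1.0, 0.0]])
    A = np.eye(2) + gamma * J
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n_steps, 2)) * np.sqrt(2.0 * dt)
    x = np.zeros(2)
    out = np.empty((n_steps, 2))
    for i in range(n_steps):
        x = x - dt * (A @ x) + noise[i]
        out[i] = x
    return out
```

Time averages over the trajectory then serve as the expectation estimators whose asymptotic variance the paper studies as a function of the deviation from reversibility.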
A Simple Algorithm for Approximating Confidence on the Modified Allan Variance and the Time Variance
NASA Technical Reports Server (NTRS)
Weiss, Marc A.; Greenhall, Charles A.
1996-01-01
An approximating algorithm for computing equivalent degrees of freedom of the Modified Allan Variance and its square root, the Modified Allan Deviation (MVAR and MDEV), and the Time Variance and Time Deviation (TVAR and TDEV) is presented, along with an algorithm for approximating the inverse chi-square distribution.
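For orientation, the standard phase-data estimators of MVAR and TVAR (the quantities whose confidence the paper approximates, not the paper's algorithm itself) can be written as:

```python
def mod_allan_variance(x, m, tau0):
    """Modified Allan variance Mod sigma_y^2(tau) at tau = m*tau0, computed
    from phase (time-error) samples x taken every tau0 seconds, using the
    standard overlapping estimator with inner averaging over m points."""
    N = len(x)
    if N < 3 * m + 1:
        raise ValueError("need at least 3*m + 1 phase points")
    total = 0.0
    for j in range(N - 3 * m + 1):
        inner = sum(x[i + 2 * m] - 2.0 * x[i + m] + x[i] for i in range(j, j + m))
        total += inner * inner
    tau = m * tau0
    return total / (2.0 * m * m * tau * tau * (N - 3 * m + 1))

def time_variance(x, m, tau0):
    """TVAR(tau) = (tau^2 / 3) * Mod sigma_y^2(tau)."""
    tau = m * tau0
    return tau * tau * mod_allan_variance(x, m, tau0) / 3.0
```

At m = 1 the modified estimator coincides with the ordinary Allan variance, and a linear phase ramp (constant frequency offset) yields zero, as expected.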
Testing Variances in Psychological and Educational Research.
ERIC Educational Resources Information Center
Ramsey, Philip H.
1994-01-01
A review of the literature indicates that the two best procedures for testing variances are one that was proposed by O'Brien (1981) and another that was proposed by Brown and Forsythe (1974). An examination of these procedures for a variety of populations confirms their robustness and indicates how optimal power can usually be obtained. (SLD)
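The Brown-Forsythe procedure cited above amounts to a one-way ANOVA performed on absolute deviations from each group's median, which is what makes it robust to non-normality. A compact sketch:

```python
from statistics import median

def brown_forsythe_F(groups):
    """Brown-Forsythe test statistic for equality of variances: the one-way
    ANOVA F computed on z_ij = |x_ij - median_i|. Compare against an
    F distribution with (k - 1, N - k) degrees of freedom."""
    z = [[abs(v - median(g)) for v in g] for g in groups]
    k = len(z)
    N = sum(len(g) for g in z)
    grand = sum(sum(g) for g in z) / N
    means = [sum(g) / len(g) for g in z]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(z, means))
    ss_within = sum(sum((v - m) ** 2 for v in g) for g, m in zip(z, means))
    return (ss_between / (k - 1)) / (ss_within / (N - k))
```

Median centering (rather than mean centering, as in Levene's original test) is the detail that gives the procedure its robustness for skewed populations.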
Code of Federal Regulations, 2010 CFR
2010-01-01
... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Code of Federal Regulations, 2013 CFR
2013-01-01
... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Code of Federal Regulations, 2014 CFR
2014-01-01
... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Code of Federal Regulations, 2012 CFR
2012-01-01
... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Code of Federal Regulations, 2011 CFR
2011-01-01
... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...
Variance Reduction for a Discrete Velocity Gas
NASA Astrophysics Data System (ADS)
Morris, A. B.; Varghese, P. L.; Goldstein, D. B.
2011-05-01
We extend a variance reduction technique developed by Baker and Hadjiconstantinou [1] to a discrete velocity gas. In our previous work, the collision integral was evaluated by importance sampling of collision partners [2]. Significant computational effort may be wasted by evaluating the collision integral in regions where the flow is in equilibrium. In the current approach, substantial computational savings are obtained by only solving for the deviations from equilibrium. In the near continuum regime, the deviations from equilibrium are small and low noise evaluation of the collision integral can be achieved with very coarse statistical sampling. Spatially homogeneous relaxation of the Bobylev-Krook-Wu distribution [3,4] was used as a test case to verify that the method predicts the correct evolution of a highly non-equilibrium distribution to equilibrium. When variance reduction is not used, the noise causes the entropy to undershoot, but the method with variance reduction matches the analytic curve for the same number of collisions. We then extend the work to travelling shock waves and compare the accuracy and computational savings of the variance reduction method to DSMC over Mach numbers ranging from 1.2 to 10.
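The deviation-from-equilibrium idea has a simple Monte Carlo analogue: treat the known equilibrium contribution analytically and sample only the small deviation, which acts as a control variate. The example below is a generic illustration of that principle, not the authors' discrete-velocity scheme:

```python
import numpy as np

def estimate_naive(rng, n):
    """Plain Monte Carlo estimate of E[X^2 + 0.1*sin(X)] for X ~ N(0,1)."""
    x = rng.standard_normal(n)
    return np.mean(x ** 2 + 0.1 * np.sin(x))

def estimate_deviation(rng, n):
    """Variance-reduced estimate: E[X^2] = 1 is known exactly (the
    'equilibrium' part), so only the small deviation 0.1*sin(X) is
    sampled. Its variance is tiny, so the estimator noise collapses."""
    x = rng.standard_normal(n)
    return 1.0 + np.mean(0.1 * np.sin(x))
```

Both estimators are unbiased for the same expectation; the second simply stops spending samples on the part of the answer that is already known, which is exactly the savings the abstract describes for near-equilibrium flow regions.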
Multiple Comparison Procedures when Population Variances Differ.
ERIC Educational Resources Information Center
Olejnik, Stephen; Lee, JaeShin
A review of the literature on multiple comparison procedures suggests several alternative approaches for comparing means when population variances differ. These include: (1) the approach of P. A. Games and J. F. Howell (1976); (2) C. W. Dunnett's C confidence interval (1980); and (3) Dunnett's T3 solution (1980). These procedures control the…
Variance Anisotropy of Solar Wind fluctuations
NASA Astrophysics Data System (ADS)
Oughton, S.; Matthaeus, W. H.; Wan, M.; Osman, K.
2013-12-01
Solar wind observations at MHD scales indicate that the energy associated with velocity and magnetic field fluctuations transverse to the mean magnetic field is typically much larger than that associated with parallel fluctuations [eg, 1]. This is often referred to as variance anisotropy. Various explanations for it have been suggested, including that the fluctuations are predominantly shear Alfven waves [1] and that turbulent dynamics leads to such states [eg, 2]. Here we investigate the origin and strength of such variance anisotropies, using spectral method simulations of the compressible (polytropic) 3D MHD equations. We report on results from runs with initial conditions that are either (i) broadband turbulence or (ii) fluctuations polarized in the same sense as shear Alfven waves. The dependence of the variance anisotropy on the plasma beta and Mach number is examined [3], along with the timescale for any variance anisotropy to develop. Implications for solar wind fluctuations will be discussed. References: [1] Belcher, J. W. and Davis Jr., L. (1971), J. Geophys. Res., 76, 3534. [2] Matthaeus, W. H., Ghosh, S., Oughton, S. and Roberts, D. A. (1996), J. Geophys. Res., 101, 7619. [3] Smith, C. W., B. J. Vasquez and K. Hamilton (2006), J. Geophys. Res., 111, A09111.
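Variance anisotropy as used here is simply the ratio of transverse to parallel fluctuation variance with respect to the mean-field direction. A sketch (the array layout and function name are assumptions, not from the abstract):

```python
import numpy as np

def variance_anisotropy(b):
    """Ratio of transverse to parallel fluctuation variance relative to the
    mean field, for an (N, 3) array of magnetic-field samples b."""
    b = np.asarray(b, float)
    b0 = b.mean(axis=0)                      # mean field
    e_par = b0 / np.linalg.norm(b0)          # unit vector along the mean field
    db = b - b0                              # fluctuations
    par = db @ e_par
    var_par = np.mean(par ** 2)
    var_perp = np.mean(np.sum(db ** 2, axis=1)) - var_par
    return var_perp / var_par
```

Values well above 1, as routinely seen in the solar wind, correspond to fluctuation energy concentrated transverse to the mean magnetic field.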
Comparing the Variances of Two Dependent Groups.
ERIC Educational Resources Information Center
Wilcox, Rand R.
1990-01-01
Recently, C. E. McCulloch (1987) suggested a modification of the Morgan-Pitman test for comparing the variances of two dependent groups. This paper demonstrates that there are situations where the procedure is not robust. A subsample approach, similar to the Box-Scheffe test, and the Sandvik-Olsson procedure are also assessed. (TJH)
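The Morgan-Pitman test referenced above reduces to testing zero correlation between the paired sums and differences, since Cov(x+y, x-y) = Var(x) - Var(y). A sketch:

```python
from math import sqrt
from statistics import mean

def morgan_pitman_t(x, y):
    """Morgan-Pitman test for equal variances of two dependent samples:
    the t-statistic for zero correlation between s = x + y and d = x - y,
    with n - 2 degrees of freedom. Positive t indicates Var(x) > Var(y)."""
    n = len(x)
    s = [a + b for a, b in zip(x, y)]
    d = [a - b for a, b in zip(x, y)]
    ms, md = mean(s), mean(d)
    css = sum((v - ms) ** 2 for v in s)
    csd = sum((v - md) ** 2 for v in d)
    csp = sum((u - ms) * (v - md) for u, v in zip(s, d))
    r = csp / sqrt(css * csd)
    return r * sqrt(n - 2) / sqrt(1.0 - r * r)
```

The lack-of-robustness issue the paper raises concerns heavy-tailed data, where this correlation-based statistic can mislead; the statistic itself is unchanged under swapping the samples except for its sign.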
Formative Use of Intuitive Analysis of Variance
ERIC Educational Resources Information Center
Trumpower, David L.
2013-01-01
Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, students' IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In…
78 FR 14122 - Revocation of Permanent Variances
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-04
... OSHA's scaffolds standards for construction (77 FR 46948). Today's notice revoking the variances takes... Safety and Health Act of 1970 (OSH Act; 29 U.S.C. 651, 655) in 1971 (see 36 FR 7340). Paragraphs (a)(4..., construction, and use of scaffolds (61 FR 46026). In the preamble to the final rule, OSHA stated that it...
7 CFR 205.290 - Temporary variances.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Temporary variances. 205.290 Section 205.290 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) ORGANIC FOODS PRODUCTION ACT PROVISIONS NATIONAL ORGANIC PROGRAM...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 18 Conservation of Power and Water Resources 2 2012-04-01 2012-04-01 false Variances. 1304.408 Section 1304.408 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY APPROVAL OF CONSTRUCTION IN THE TENNESSEE RIVER SYSTEM AND REGULATION OF STRUCTURES AND OTHER ALTERATIONS...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 18 Conservation of Power and Water Resources 2 2011-04-01 2011-04-01 false Variances. 1304.408 Section 1304.408 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY APPROVAL OF CONSTRUCTION IN THE TENNESSEE RIVER SYSTEM AND REGULATION OF STRUCTURES AND OTHER ALTERATIONS...
Smeared antibranes polarise in AdS
NASA Astrophysics Data System (ADS)
Gautason, Fridrik Freyr; Truijen, Brecht; Van Riet, Thomas
2015-07-01
In the recent literature it has been questioned whether the local backreaction of antibranes in flux throats can induce a perturbative brane-flux decay. Most evidence for this can be gathered for D6 branes and D p branes smeared over 6 - p compact directions, in line with the absence of finite temperature solutions for these cases. The solutions in the literature have flat worldvolume geometries and non-compact transversal spaces. In this paper we consider what happens when the worldvolume is AdS and the transversal space is compact. We show that in these circumstances brane polarisation smoothens out the flux singularity, which is an indication that brane-flux decay is prevented. This is consistent with the fact that the cosmological constant would be less negative after brane-flux decay. Our results extend recent results on AdS7 solutions from D6 branes to AdS p+1 solutions from D p branes. We show that supersymmetry of the AdS solutions depends on p non-trivially.
AdS orbifolds and Penrose limits
Alishahiha, Mohsen; Sheikh-Jabbari, Mohammad M.; Tatar, Radu
2002-12-09
In this paper we study the Penrose limit of AdS5 orbifolds. The orbifold can be either in the pure spatial directions or in the space and time directions. For the AdS5/Λ x S5 spatial orbifold we observe that after the Penrose limit we obtain the same result as the Penrose limit of AdS5 x S5/Λ. We identify the corresponding BMN operators in terms of operators of the gauge theory on R x S3/Λ. The semi-classical description of rotating strings in these backgrounds has also been studied. For the spatial AdS orbifold we show that at quadratic order the action obtained for the fluctuations is the same as that in the S5 orbifold; however, the higher loop correction can distinguish between the two cases.
R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization
Dazard, Jean-Eudes; Xu, Hua; Rao, J. Sunil
2015-01-01
We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (p ≫ n paradigm), such as in ‘omics’-type data, where the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real ‘omics’ test datasets, (v) computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) manual and documentation on how to setup a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR (‘Mean-Variance Regularization’), downloadable from the CRAN. PMID:26819572
Cosmic variance in inflation with two light scalars
NASA Astrophysics Data System (ADS)
Bonga, Béatrice; Brahma, Suddhasattwa; Deutsch, Anne-Sylvie; Shandera, Sarah
2016-05-01
We examine the squeezed limit of the bispectrum when a light scalar with arbitrary non-derivative self-interactions is coupled to the inflaton. We find that when the hidden sector scalar is sufficiently light (m ≲ 0.1H), the coupling between long and short wavelength modes from the series of higher order correlation functions (from arbitrary order contact diagrams) causes the statistics of the fluctuations to vary in sub-volumes. This means that observations of primordial non-Gaussianity cannot be used to uniquely reconstruct the potential of the hidden field. However, the local bispectrum induced by mode-coupling from these diagrams always has the same squeezed limit, so the field's locally determined mass is not affected by this cosmic variance.
Abel, David L.
2011-01-01
Is life physicochemically unique? No. Is life unique? Yes. Life manifests innumerable formalisms that cannot be generated or explained by physicodynamics alone. Life pursues thousands of biofunctional goals, not the least of which is staying alive. Neither physicodynamics, nor evolution, pursue goals. Life is largely directed by linear digital programming and by the Prescriptive Information (PI) instantiated particularly into physicodynamically indeterminate nucleotide sequencing. Epigenomic controls only compound the sophistication of these formalisms. Life employs representationalism through the use of symbol systems. Life manifests autonomy, homeostasis far from equilibrium in the harshest of environments, positive and negative feedback mechanisms, prevention and correction of its own errors, and organization of its components into Sustained Functional Systems (SFS). Chance and necessity—heat agitation and the cause-and-effect determinism of nature’s orderliness—cannot spawn formalisms such as mathematics, language, symbol systems, coding, decoding, logic, organization (not to be confused with mere self-ordering), integration of circuits, computational success, and the pursuit of functionality. All of these characteristics of life are formal, not physical. PMID:25382119
42 CFR 456.525 - Request for renewal of variance.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 4 2010-10-01 2010-10-01 false Request for renewal of variance. 456.525 Section..., and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time Requirements § 456.525 Request for renewal of variance. (a) The agency must submit a request for renewal of...
10 CFR 851.32 - Action on variance requests.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Action on variance requests. 851.32 Section 851.32 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.32 Action on variance requests. (a... approval of a variance application, the Chief Health, Safety and Security Officer must forward to the...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Variances. 50-204.1a... and Application § 50-204.1a Variances. (a) Variances from standards in this part may be granted in the same circumstances in which variances may be granted under sections 6(b)(6)(A) or 6(d) of the...
21 CFR 898.14 - Exemptions and variances.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Exemptions and variances. 898.14 Section 898.14... variances. (a) A request for an exemption or variance shall be submitted in the form of a petition under... with the device; and (4) Other information justifying the exemption or variance. (b) An exemption...
10 CFR 851.30 - Consideration of variances.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Consideration of variances. 851.30 Section 851.30 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.30 Consideration of variances. (a) Variances shall be granted by the Under Secretary after considering the recommendation of the Chief...
42 CFR 456.521 - Conditions for granting variance requests.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 4 2010-10-01 2010-10-01 false Conditions for granting variance requests. 456.521..., and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time Requirements § 456.521 Conditions for granting variance requests. (a) Except as described under paragraph...
77 FR 40735 - Unique Device Identification System
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-10
...The Food and Drug Administration (FDA) is proposing to establish a unique device identification system to implement the requirement added to the Federal Food, Drug, and Cosmetic Act (FD&C Act) by section 226 of the Food and Drug Administration Amendments Act of 2007 (FDAAA). Section 226 of FDAAA amended the FD&C Act to add new section 519(f), which directs FDA to promulgate regulations...
Mixed emotions: Sensitivity to facial variance in a crowd of faces.
Haberman, Jason; Lee, Pegan; Whitney, David
2015-01-01
The visual system automatically represents summary information from crowds of faces, such as the average expression. This is a useful heuristic insofar as it provides critical information about the state of the world, not simply information about the state of one individual. However, the average alone is not sufficient for making decisions about how to respond to a crowd. The variance or heterogeneity of the crowd--the mixture of emotions--conveys information about the reliability of the average, essential for determining whether the average can be trusted. Despite its importance, the representation of variance within a crowd of faces has yet to be examined. This is addressed here in three experiments. In the first experiment, observers viewed a sample set of faces that varied in emotion, and then adjusted a subsequent set to match the variance of the sample set. To isolate variance as the summary statistic of interest, the average emotion of both sets was random. Results suggested that observers had information regarding crowd variance. The second experiment verified that this was indeed a uniquely high-level phenomenon, as observers were unable to derive the variance of an inverted set of faces as precisely as an upright set of faces. The third experiment replicated and extended the first two experiments using method-of-constant-stimuli. Together, these results show that the visual system is sensitive to emergent information about the emotional heterogeneity, or ambivalence, in crowds of faces. PMID:26676106
Analysis of variance of microarray data.
Ayroles, Julien F; Gibson, Greg
2006-01-01
Analysis of variance (ANOVA) is an approach used to identify differentially expressed genes in complex experimental designs. It is based on testing for the significance of the magnitude of effect of two or more treatments taking into account the variance within and between treatment classes. ANOVA is a highly flexible analytical approach that allows investigators to simultaneously assess the contributions of multiple factors to gene expression variation, including technical (dye, batch) effects and biological (sex, genotype, drug, time) ones, as well as interactions between factors. This chapter provides an overview of the theory of linear mixture modeling and the sequence of steps involved in fitting gene-specific models and discusses essential features of experimental design. Commercial and open-source software for performing ANOVA is widely available. PMID:16939792
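The gene-wise F statistic underlying such an ANOVA screen can be vectorised across all genes at once. A minimal one-way sketch (the two-group expression layout below is an illustrative assumption, not from the chapter):

```python
import numpy as np

def anova_F_per_gene(groups):
    """One-way ANOVA F statistic computed for every gene (row) at once.
    `groups` is a list of (n_genes, n_i) arrays, one per treatment class;
    large F flags genes whose mean expression differs between classes."""
    k = len(groups)
    N = sum(g.shape[1] for g in groups)
    all_data = np.hstack(groups)
    grand = all_data.mean(axis=1, keepdims=True)
    ss_between = sum(g.shape[1] * (g.mean(axis=1, keepdims=True) - grand) ** 2
                     for g in groups).ravel()
    ss_within = sum(((g - g.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (N - k))
```

Real microarray designs add factors (dye, batch, genotype) and interactions, which the chapter handles with mixed linear models; this one-way version only shows the variance-partitioning core.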
PHD filtering with localised target number variance
NASA Astrophysics Data System (ADS)
Delande, Emmanuel; Houssineau, Jérémie; Clark, Daniel
2013-05-01
Mahler's Probability Hypothesis Density (PHD) filter, proposed in 2000, addresses the challenges of the multiple-target detection and tracking problem by propagating a mean density of the targets in any region of the state space. However, when retrieving some local evidence on the target presence becomes a critical component of a larger process - e.g. for sensor management purposes - the local target number is insufficient unless some confidence on the estimation of the number of targets can be provided as well. In this paper, we propose a first implementation of a PHD filter that also includes an estimation of localised variance in the target number following each update step; we then illustrate the advantage of the PHD filter + variance on simulated data from a multiple-target scenario.
Applications of non-parametric statistics and analysis of variance on sample variances
NASA Technical Reports Server (NTRS)
Myers, R. H.
1981-01-01
Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to offer recommendations as to when each would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures that are available for the Gaussian analog. It is important here to point out the hypotheses that are being tested, the assumptions that are being made, and the limitations of the nonparametric procedures. The appropriateness of doing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.
Analysis of variance based on fuzzy observations
NASA Astrophysics Data System (ADS)
Nourbakhsh, M.; Mashinchi, M.; Parchami, A.
2013-04-01
Analysis of variance (ANOVA) is an important method in exploratory and confirmatory data analysis. The simplest type of ANOVA is one-way ANOVA for comparison among means of several populations. In this article, we extend one-way ANOVA to a case where observed data are fuzzy observations rather than real numbers. Two real-data examples are given to show the performance of this method.
The Theory of Variances in Equilibrium Reconstruction
Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren
2008-01-14
The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.
Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans
NASA Astrophysics Data System (ADS)
Raju, C.; Vidya, R.
2016-06-01
In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1,c2) are studied. It is assumed that the inspection is rejection rectification. Procedures for designing MDS-1(c1,c2) sampling plans with minimum variance of OQ and TI are developed. A procedure for obtaining a plan for a designated upper limit for the variance of the OQ (VOQL) is outlined.
Hypothesis exploration with visualization of variance
2014-01-01
Background The Consortium for Neuropsychiatric Phenomics (CNP) at UCLA was an investigation into the biological bases of traits such as memory and response inhibition phenotypes—to explore whether they are linked to syndromes including ADHD, Bipolar disorder, and Schizophrenia. An aim of the consortium was in moving from traditional categorical approaches for psychiatric syndromes towards more quantitative approaches based on large-scale analysis of the space of human variation. It represented an application of phenomics—wide-scale, systematic study of phenotypes—to neuropsychiatry research. Results This paper reports on a system for exploration of hypotheses in data obtained from the LA2K, LA3C, and LA5C studies in CNP. ViVA is a system for exploratory data analysis using novel mathematical models and methods for visualization of variance. An example of these methods is called VISOVA, a combination of visualization and analysis of variance, with the flavor of exploration associated with ANOVA in biomedical hypothesis generation. It permits visual identification of phenotype profiles—patterns of values across phenotypes—that characterize groups. Visualization enables screening and refinement of hypotheses about variance structure of sets of phenotypes. Conclusions The ViVA system was designed for exploration of neuropsychiatric hypotheses by interdisciplinary teams. Automated visualization in ViVA supports ‘natural selection’ on a pool of hypotheses, and permits deeper understanding of the statistical architecture of the data. Large-scale perspective of this kind could lead to better neuropsychiatric diagnostics. PMID:25097666
Directional variance analysis of annual rings
NASA Astrophysics Data System (ADS)
Kumpulainen, P.; Marjanen, K.
2010-07-01
Wood quality measurement methods are of increasing importance in the wood industry. The goal is to produce more high-quality products with higher market value than today. One of the key factors for increasing the market value is to provide better measurements for increased information to support the decisions made later in the product chain. Strength and stiffness are important properties of the wood. They are related to mean annual ring width and its deviation. These indicators can be estimated from images taken from the log ends by two-dimensional power spectrum analysis. The spectrum analysis has been used successfully for images of pine. However, the annual rings in birch, for example, are less distinguishable and the basic spectrum analysis method does not give reliable results. A novel method for local log end variance analysis based on the Radon transform is proposed. The directions and the positions of the annual rings can be estimated from local minimum and maximum variance estimates. Applying the spectrum analysis on the maximum local variance estimate instead of the original image produces a more reliable estimate of the annual ring width. The proposed method is not limited to log end analysis only. It is usable in other two-dimensional random signal and texture analysis tasks.
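The spectrum-analysis step described here can be illustrated in one dimension: the mean ring width corresponds to the period of the dominant peak in the power spectrum of an intensity profile. A simplified sketch (the paper's 2-D Radon-based method is not reproduced; names are illustrative):

```python
import numpy as np

def ring_width_from_profile(profile, pixel_size=1.0):
    """Estimate mean annual-ring width from a 1-D intensity profile as the
    period of the dominant peak in the power spectrum (DC bin excluded)."""
    profile = np.asarray(profile, float)
    profile = profile - profile.mean()       # remove DC offset
    power = np.abs(np.fft.rfft(profile)) ** 2
    k = np.argmax(power[1:]) + 1             # index of the dominant frequency bin
    return len(profile) * pixel_size / k
```

In the 2-D setting the same idea is applied to a profile taken along the direction of maximum local variance, which is what the Radon-based preprocessing provides for species like birch with weak ring contrast.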
Clarke, Peter; Varghese, Philip; Goldstein, David
2014-12-09
We extend a variance reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. The method is verified through comparisons to Direct Simulation Monte Carlo computations and the computational workload per time step is investigated for the variance reduced method.
NASA Astrophysics Data System (ADS)
Mahmud, Mohammad Sultan; Cadotte, David W.; Vuong, Barry; Sun, Carry; Luk, Timothy W. H.; Mariampillai, Adrian; Yang, Victor X. D.
2013-05-01
High-resolution mapping of microvasculature has been applied to diverse body systems, including the retinal and choroidal vasculature, cardiac vasculature, the central nervous system, and various tumor models. Many imaging techniques have been developed to address specific research questions, and each has its own merits and drawbacks. Understanding, optimizing, and properly implementing these imaging techniques can significantly improve the data obtained across a spectrum of unique research projects and yield diagnostic clinical information. We describe the recently developed algorithms and applications of two general classes of microvascular imaging techniques: speckle-variance and phase-variance optical coherence tomography (OCT). We compare and contrast their performance with Doppler OCT and optical microangiography. In addition, we highlight ongoing work in the development of variance-based techniques to further refine the characterization of microvascular networks.
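In its simplest form, speckle-variance OCT computes the per-pixel intensity variance across repeated frames at the same location: flow decorrelates the speckle pattern and raises the variance, while static tissue stays low. A schematic sketch (not any specific published implementation):

```python
import numpy as np

def speckle_variance(frames):
    """Interframe speckle variance for a stack of N co-registered structural
    OCT frames, shape (N, rows, cols). Moving scatterers (blood flow)
    decorrelate between frames and give high per-pixel variance; static
    tissue gives low variance, so thresholding the map segments vessels."""
    frames = np.asarray(frames, float)
    return frames.var(axis=0)
```

Phase-variance OCT follows the same logic applied to the phase of the complex OCT signal rather than its intensity, trading motion sensitivity for phase-stability requirements.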
Visual SLAM Using Variance Grid Maps
NASA Technical Reports Server (NTRS)
Howard, Andrew B.; Marks, Tim K.
2011-01-01
An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
The AdS particle [rapid communication]
NASA Astrophysics Data System (ADS)
Ghosh, Subir
2005-09-01
In this Letter we have considered a relativistic Nambu-Goto model for a particle in AdS metric. With appropriate gauge choice to fix the reparameterization invariance, we recover the previously discussed [S. Ghosh, P. Pal, Phys. Lett. B 618 (2005) 243, arXiv:hep-th/0502192] "exotic oscillator". The Snyder algebra and subsequently the κ-Minkowski spacetime are also derived. Lastly we comment on the impossibility of constructing a non-commutative spacetime in the context of open string where only a curved target space is introduced.
Probing crunching AdS cosmologies
NASA Astrophysics Data System (ADS)
Kumar, S. Prem; Vaganov, Vladislav
2016-02-01
Holographic gravity duals of deformations of CFTs formulated on de Sitter spacetime contain FRW geometries behind a horizon, with cosmological big crunch singularities. Using a specific analytically tractable solution within a particular single scalar truncation of N = 8 supergravity on AdS_4, we first probe such crunching cosmologies with spacelike radial geodesics that compute spatially antipodal correlators of large dimension boundary operators. At late times, the geodesics lie on the FRW slice of maximal expansion behind the horizon. The late time two-point functions factorise, and when transformed to the Einstein static universe, they exhibit a temporal non-analyticity determined by the maximal value of the scale factor ã_max. Radial geodesics connecting antipodal points necessarily have de Sitter energy Ɛ ≲ ã_max, while geodesics with Ɛ > ã_max terminate at the crunch, the two categories of geodesics being separated by the maximal expansion slice. The spacelike crunch singularity is curved "outward" in the Penrose diagram for the deformed AdS backgrounds, and thus geodesic limits of the antipodal correlators do not directly probe the crunch. Beyond the geodesic limit, we point out that the scalar wave equation, analytically continued into the FRW patch, has a potential which is singular at the crunch along with complex WKB turning points in the vicinity of the FRW crunch. We then argue that the frequency space Green's function has a branch point determined by ã_max which corresponds to the lowest quasinormal frequency.
Estimators for variance components in structured stair nesting models
NASA Astrophysics Data System (ADS)
Monteiro, Sandra; Fonseca, Miguel; Carvalho, Francisco
2016-06-01
The purpose of this paper is to present the estimation of the components of variance in structured stair nesting models. The relationship between the canonical variance components and the original ones is very important in obtaining those estimators.
Short-term memory binding is impaired in AD but not in non-AD dementias.
Della Sala, Sergio; Parra, Mario A; Fabi, Katia; Luzzi, Simona; Abrahams, Sharon
2012-04-01
Binding is a cognitive function responsible for integrating features within complex stimuli (e.g., shape-colour conjunctions) or events within complex memories (e.g., face-name associations). This function operates both in short-term memory (STM) and in long-term memory (LTM) and is severely affected by Alzheimer's disease (AD). However, forming conjunctions in STM is the only binding function which is not affected by healthy ageing or chronic depression. Whether this specificity holds true across other non-AD dementias is as yet unknown. The present study investigated STM conjunctive binding in a sample of AD patients and patients with other non-AD dementias using a task which has proved sensitive to the effects of AD. The STM task assesses the free recall of objects, colours, and the bindings of objects and colours. Patients with AD, frontotemporal dementia, vascular dementia, Lewy body dementia and dementia associated with Parkinson's disease showed memory, visuo-spatial, executive and attentional deficits on standard neuropsychological assessment. However, only AD patients showed STM binding deficits. This deficit was observed even when memory for single features was at a similar level across patient groups. Regression and discriminant analyses confirmed that the STM binding task accounted for the largest proportion of variance between AD and non-AD groups and held the greatest classification power to identify patients with AD. STM conjunctive binding places few demands on executive functions and appears to be subserved by components of the memory network which are targeted by AD, but not by non-AD dementias. PMID:22289292
40 CFR 124.62 - Decision on variances.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 21 2010-07-01 2010-07-01 false Decision on variances. 124.62 Section... FOR DECISIONMAKING Specific Procedures Applicable to NPDES Permits § 124.62 Decision on variances... following variances (subject to EPA objection under § 123.44 for State permits): (1) Extensions under...
40 CFR 59.509 - Can I get a variance?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Can I get a variance? 59.509 Section 59... Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a) Any... its reasonable control may apply in writing to the Administrator for a temporary variance....
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Variances and exceptions. 27... CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws may provide for variances and exceptions. (b) Bylaws adopted pursuant to these standards shall...
20 CFR 901.40 - Proof; variance; amendment of pleadings.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Proof; variance; amendment of pleadings. 901... Suspension or Termination of Enrollment § 901.40 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in a pleading and the evidence adduced in support of the pleading,...
31 CFR 10.67 - Proof; variance; amendment of pleadings.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Proof; variance; amendment of... BEFORE THE INTERNAL REVENUE SERVICE Rules Applicable to Disciplinary Proceedings § 10.67 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in pleadings and the...
7 CFR 718.105 - Tolerances, variances, and adjustments.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 7 2010-01-01 2010-01-01 false Tolerances, variances, and adjustments. 718.105... APPLICABLE TO MULTIPLE PROGRAMS Determination of Acreage and Compliance § 718.105 Tolerances, variances, and... marketing quota crop allotment. (d) An administrative variance is applicable to all allotment crop...
40 CFR 52.1390 - Missoula variance provision.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 4 2010-07-01 2010-07-01 false Missoula variance provision. 52.1390... (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) Montana § 52.1390 Missoula variance provision. The Missoula City-County Air Pollution Control Program's Chapter X, Variances, which was...
29 CFR 1905.5 - Effect of variances.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 5 2010-07-01 2010-07-01 false Effect of variances. 1905.5 Section 1905.5 Labor... RULES OF PRACTICE FOR VARIANCES, LIMITATIONS, VARIATIONS, TOLERANCES, AND EXEMPTIONS UNDER THE WILLIAMS-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT OF 1970 General § 1905.5 Effect of variances. All...
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Variances for unusual operations. 190... Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified in § 190.10 may be exceeded if: (a) The regulatory agency has granted a variance based upon...
40 CFR 124.64 - Appeals of variances.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 21 2010-07-01 2010-07-01 false Appeals of variances. 124.64 Section... FOR DECISIONMAKING Specific Procedures Applicable to NPDES Permits § 124.64 Appeals of variances. (a) When a State issues a permit on which EPA has made a variance decision, separate appeals of the...
31 CFR 8.59 - Proof; variance; amendment of pleadings.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Proof; variance; amendment of... BEFORE THE BUREAU OF ALCOHOL, TOBACCO AND FIREARMS Disciplinary Proceedings § 8.59 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in a pleading, the...
36 CFR 30.5 - Variances, exceptions, and use permits.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Variances, exceptions, and... UNIT § 30.5 Variances, exceptions, and use permits. (a) Zoning ordinances or amendments thereto, for... Recreation Area may provide for the granting of variances and exceptions. (b) Zoning ordinances or...
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions. (a) Variances or exemptions from certain provisions...
29 CFR 1905.5 - Effect of variances.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 29 Labor 5 2014-07-01 2014-07-01 false Effect of variances. 1905.5 Section 1905.5 Labor...-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT OF 1970 General § 1905.5 Effect of variances. All variances granted pursuant to this part shall have only future effect. In his discretion, the Assistant...
NASA Astrophysics Data System (ADS)
Bena, Iosif; Heurtier, Lucien; Puhm, Andrea
2016-05-01
It was argued in [1] that the five-dimensional near-horizon extremal Kerr (NHEK) geometry can be embedded in String Theory as the infrared region of an infinite family of non-supersymmetric geometries that have D1, D5, momentum and KK monopole charges. We show that there exists a method to embed these geometries into asymptotically AdS_3 × S^3/Z_N solutions, and hence to obtain infinite families of flows whose infrared is NHEK. This indicates that the CFT dual to the NHEK geometry is the IR fixed point of a Renormalization Group flow from a known local UV CFT and opens the door to its explicit construction.
The Parabolic Variance (PVAR): A Wavelet Variance Based on the Least-Square Fit.
Vernotte, Francois; Lenczner, Michel; Bourgeois, Pierre-Yves; Rubiola, Enrico
2016-04-01
This paper introduces the parabolic variance (PVAR), a wavelet variance similar to the Allan variance (AVAR), based on the linear regression (LR) of phase data. The companion article arXiv:1506.05009 [physics.ins-det] details the Ω frequency counter, which implements the LR estimate. The PVAR combines the advantages of AVAR and modified AVAR (MVAR). PVAR is good for long-term analysis because the wavelet spans over 2τ, the same as the AVAR wavelet, and good for short-term analysis because the response to white and flicker PM is 1/τ^3 and 1/τ^2, the same as the MVAR. After setting the theoretical framework, we study the degrees of freedom and the confidence interval for the most common noise types. Then, we focus on the detection of a weak noise process at the transition, or corner, where a faster process rolls off. This new perspective raises the question of which variance detects the weak process with the shortest data record. Our simulations show that PVAR is a fortunate tradeoff. PVAR is superior to MVAR in all cases, exhibits the best ability to divide between fast noise phenomena (up to flicker FM), and is almost as good as AVAR for the detection of random walk and drift. PMID:26571523
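Since the abstract benchmarks PVAR against the Allan variance, a minimal non-overlapping AVAR estimator from phase (time-error) samples can serve as a reference point. This is a generic sketch of the textbook estimator, not code from the paper; the function name and interface are illustrative:

```python
import numpy as np

def allan_variance(phase, tau0, m):
    """Non-overlapping Allan variance at averaging time tau = m * tau0,
    computed from phase samples x via the second difference:
    AVAR(tau) = <(x[k+2m] - 2 x[k+m] + x[k])^2> / (2 tau^2)."""
    x = np.asarray(phase, dtype=float)[::m]  # decimate to sample spacing tau
    tau = m * tau0
    d2 = x[2:] - 2.0 * x[1:-1] + x[:-2]      # second difference of phase
    return np.mean(d2 ** 2) / (2.0 * tau ** 2)
```

For a constant frequency offset the phase is a linear ramp, so the second difference, and hence the AVAR, vanishes.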
Dynamics of mean-variance-skewness of cumulative crop yield impact temporal yield variance
Technology Transfer Automated Retrieval System (TEKTRAN)
Production risk associated with cropping systems influences farmers’ decisions to adopt a new management practice or a production system. Cumulative yield (CY), temporal yield variance (TYV) and coefficient of variation (CV) were used to assess the risk associated with adopting combinations of new m...
Shadows, currents, and AdS fields
Metsaev, R. R.
2008-11-15
Conformal totally symmetric arbitrary spin currents and shadow fields in flat space-time of dimension greater than or equal to four are studied. A gauge invariant formulation for such currents and shadow fields is developed. Gauge symmetries are realized by involving the Stueckelberg fields. A realization of global conformal boost symmetries is obtained. Gauge invariant differential constraints for currents and shadow fields are obtained. AdS/CFT correspondence for currents and shadow fields and the respective normalizable and non-normalizable solutions of massless totally symmetric arbitrary spin AdS fields are studied. The bulk fields are considered in a modified de Donder gauge that leads to decoupled equations of motion. We demonstrate that leftover on shell gauge symmetries of bulk fields correspond to gauge symmetries of boundary currents and shadow fields, while the modified de Donder gauge conditions for bulk fields correspond to differential constraints for boundary conformal currents and shadow fields. Breaking conformal symmetries, we find interrelations between the gauge invariant formulation of the currents and shadow fields, and the gauge invariant formulation of massive fields.
The variance of the adjusted Rand index.
Steinley, Douglas; Brusco, Michael J; Hubert, Lawrence
2016-06-01
For 30 years, the adjusted Rand index has been the preferred method for comparing 2 partitions (e.g., clusterings) of a set of observations. Although the index is widely used, little is known about its variability. Herein, the variance of the adjusted Rand index (Hubert & Arabie, 1985) is provided and its properties are explored. It is shown that a normal approximation is appropriate across a wide range of sample sizes and varying numbers of clusters. Further, it is shown that confidence intervals based on the normal distribution have desirable levels of coverage and accuracy. Finally, the first power analysis evaluating the ability to detect differences between two different adjusted Rand indices is provided. (PsycINFO Database Record) PMID:26881693
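As context for the variance results, the adjusted Rand index itself is computed from the pair-counting contingency table of the two partitions. A minimal sketch of the standard Hubert-Arabie formula (function name illustrative):

```python
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand index (Hubert & Arabie, 1985) between two labelings
    of the same items, via the pair-counting contingency table."""
    classes = sorted(set(labels_a))
    clusters = sorted(set(labels_b))
    # Contingency table n_ij: items with label i in A and label j in B.
    table = [[sum(1 for x, y in zip(labels_a, labels_b) if x == i and y == j)
              for j in clusters] for i in classes]
    n = len(labels_a)
    sum_ij = sum(comb(nij, 2) for row in table for nij in row)
    sum_a = sum(comb(sum(row), 2) for row in table)          # row marginals
    sum_b = sum(comb(sum(col), 2) for col in zip(*table))    # column marginals
    expected = sum_a * sum_b / comb(n, 2)    # chance-expected index
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)
```

Identical partitions (up to label permutation) score 1, while partitions that cross-cut each other can score below 0.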
Motion Detection Using Mean Normalized Temporal Variance
Chan, C W
2003-08-04
Scene-Based Wave Front Sensing uses the correlation between successive subimages to determine the phase aberrations which cause the blurring of digital images. Adaptive Optics technology uses that information to control deformable mirrors to correct for the phase aberrations, making the image clearer. The correlation between temporal subimages gives tip-tilt information. If these images do not have identical image content, tip-tilt estimations may be incorrect. Motion detection is necessary to help avoid errors initiated by dynamic subimage content. With a finitely limited number of pixels per subaperture, most conventional motion detection algorithms fall apart on our subimages. Despite this fact, motion detection based on the normalized variance of individual pixels proved to be effective.
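The pixel-level statistic described here can be sketched as the temporal variance of each pixel divided by its temporal mean, with motion flagged when the statistic is large anywhere in the subimage. This is an illustrative reconstruction, not the report's code; the threshold value is an assumption:

```python
import numpy as np

def normalized_temporal_variance(frames):
    """Per-pixel temporal variance divided by the per-pixel temporal mean.
    frames: array of shape (T, H, W) with nonnegative intensities."""
    frames = np.asarray(frames, dtype=float)
    mean = frames.mean(axis=0)
    var = frames.var(axis=0)
    return var / np.maximum(mean, 1e-12)  # guard against zero-mean pixels

def detect_motion(frames, threshold=0.05):
    """Flag a subimage sequence as containing motion if any pixel's
    normalized temporal variance exceeds the threshold (value assumed)."""
    return bool((normalized_temporal_variance(frames) > threshold).any())
```

Static content yields zero temporal variance at every pixel, so only dynamic content trips the flag.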
Calculating bone-lead measurement variance.
Todd, A C
2000-01-01
The technique of (109)Cd-based X-ray fluorescence (XRF) measurements of lead in bone is well established. A paper by some XRF researchers [Gordon CL, et al. The Reproducibility of (109)Cd-based X-ray Fluorescence Measurements of Bone Lead. Environ Health Perspect 102:690-694 (1994)] presented the currently practiced method for calculating the variance of an in vivo measurement once a calibration line has been established. This paper corrects typographical errors in the method published by those authors; presents a crude estimate of the measurement error that can be acquired without computational peak fitting programs; and draws attention to the measurement error attributable to covariance, an important feature in the construct of the currently accepted method that is flawed under certain circumstances. PMID:10811562
Variance-based interaction index measuring heteroscedasticity
NASA Astrophysics Data System (ADS)
Ito, Keiichi; Couckuyt, Ivo; Poles, Silvia; Dhaene, Tom
2016-06-01
This work is motivated by the need to deal with models with high-dimensional input spaces of real variables. One way to tackle high-dimensional problems is to identify interaction or non-interaction among input parameters. We propose a new variance-based sensitivity interaction index that can detect and quantify interactions among the input variables of mathematical functions and computer simulations. The computation is very similar to that of Sobol' first-order sensitivity indices. The proposed interaction index can quantify the relative importance of input variables in interaction. Furthermore, detection of non-interaction for screening can be done with as few as 4n + 2 function evaluations, where n is the number of input variables. Using the interaction indices based on heteroscedasticity, the original function may be decomposed into a set of lower-dimensional functions which may then be analyzed separately.
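The abstract relates the proposed index to Sobol' first-order indices. For reference, those first-order indices can be estimated with a standard Saltelli-style Monte Carlo scheme; the following is a generic sketch of that baseline (not the paper's interaction index), with an illustrative interface:

```python
import numpy as np

def sobol_first_order(f, n_vars, n_samples=10000, rng=None):
    """Monte Carlo estimate of Sobol' first-order indices on [0, 1]^n_vars,
    using the estimator S_i = mean(f(B) * (f(A_B^i) - f(A))) / Var(f)."""
    rng = np.random.default_rng(rng)
    A = rng.random((n_samples, n_vars))
    B = rng.random((n_samples, n_vars))
    fA, fB = f(A), f(B)
    total_var = np.var(np.concatenate([fA, fB]))
    indices = []
    for i in range(n_vars):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # A with column i taken from B
        indices.append(np.mean(fB * (f(ABi) - fA)) / total_var)
    return np.array(indices)
```

For an additive function such as f(x) = x1 + 0.1 x2, nearly all of the variance is attributed to x1.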
Event Segmentation Ability Uniquely Predicts Event Memory
Sargent, Jesse Q.; Zacks, Jeffrey M.; Hambrick, David Z.; Zacks, Rose T.; Kurby, Christopher A.; Bailey, Heather R.; Eisenberg, Michelle L.; Beck, Taylor M.
2013-01-01
Memory for everyday events plays a central role in tasks of daily living, autobiographical memory, and planning. Event memory depends in part on segmenting ongoing activity into meaningful units. This study examined the relationship between event segmentation and memory in a lifespan sample to answer the following question: Is the ability to segment activity into meaningful events a unique predictor of subsequent memory, or is the relationship between event perception and memory accounted for by general cognitive abilities? Two hundred and eight adults ranging from 20 to 79 years old segmented movies of everyday events and attempted to remember the events afterwards. They also completed psychometric ability tests and tests measuring script knowledge for everyday events. Event segmentation and script knowledge both explained unique variance in event memory above and beyond the psychometric measures, and did so as strongly in older as in younger adults. These results suggest that event segmentation is a basic cognitive mechanism, important for memory across the lifespan. PMID:23942350
Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.
Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S
2016-04-01
Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate if heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs), cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices, and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regressions. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity. PMID:26995641
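Weighted regression under a power model of variance amounts to least squares with weights equal to reciprocal variances. A minimal sketch, assuming the variance follows var = a * x**b with known (a, b) and using the predictor x as a stand-in for signal level; names and interface are illustrative:

```python
import numpy as np

def power_model_wls(x, y, variance_params):
    """Weighted least-squares straight-line fit where var(y) follows the
    power model var = a * x**b; weights are the reciprocal variances."""
    a, b = variance_params
    x = np.asarray(x, dtype=float)
    w = 1.0 / (a * x ** b)                     # weight = 1 / variance
    X = np.column_stack([np.ones_like(x), x])  # design matrix [1, x]
    W = np.diag(w)
    # Solve the weighted normal equations (X^T W X) beta = X^T W y.
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta  # (intercept, slope)
```

With noiseless data any valid weighting recovers the true line, which makes for a simple sanity check.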
Secure ADS-B authentication system and method
NASA Technical Reports Server (NTRS)
Viggiano, Marc J (Inventor); Valovage, Edward M (Inventor); Samuelson, Kenneth B (Inventor); Hall, Dana L (Inventor)
2010-01-01
A secure system for authenticating the identity of ADS-B systems, including: an authenticator, including a unique id generator and a transmitter transmitting the unique id to one or more ADS-B transmitters; one or more ADS-B transmitters, including a receiver receiving the unique id, one or more secure processing stages merging the unique id with the ADS-B transmitter's identification, data and secret key and generating a secure code identification and a transmitter transmitting a response containing the secure code and ADS-B transmitter's data to the authenticator; the authenticator including means for independently determining each ADS-B transmitter's secret key, a receiver receiving each ADS-B transmitter's response, one or more secure processing stages merging the unique id, ADS-B transmitter's identification and data and generating a secure code, and comparison processing comparing the authenticator-generated secure code and the ADS-B transmitter-generated secure code and providing an authentication signal based on the comparison result.
Explanatory Variance in Maximal Oxygen Uptake
Robert McComb, Jacalyn J.; Roh, Daesung; Williams, James S.
2006-01-01
The purpose of this study was to develop a prediction equation that could be used to estimate maximal oxygen uptake (VO2max) from a submaximal water running protocol. Thirty-two volunteers (n = 19 males, n = 13 females), ages 18 - 24 years, underwent the following testing procedures: (a) a 7-site skin fold assessment; (b) a land VO2max running treadmill test; and (c) a 6 min water running test. For the water running submaximal protocol, the participants were fitted with an Aqua Jogger Classic Uni-Sex Belt and a Polar Heart Rate Monitor; the participants’ head, shoulders, hips and feet were vertically aligned, using a modified running/bicycle motion. A regression model was used to predict VO2max. The criterion variable, VO2max, was measured using open-circuit calorimetry utilizing the Bruce Treadmill Protocol. Predictor variables included in the model were percent body fat (% BF), height, weight, gender, and heart rate following a 6 min water running protocol. Percent body fat accounted for 76% (r = -0.87, SEE = 3.27) of the variance in VO2max. No other variables significantly contributed to the explained variance in VO2max. The equation for the estimation of VO2max is as follows: VO2max (ml·kg^-1·min^-1) = 56.14 - 0.92 (% BF). Key Points: Body fat is an important predictor of VO2max. Individuals with low skill level in water running may shorten their stride length to avoid the onset of fatigue at higher work-loads; therefore, the net oxygen cost of the exercise cannot be controlled in inexperienced individuals in water running at fatiguing workloads. Experiments using water running protocols to predict VO2max should use individuals trained in the mechanics of water running. A submaximal water running protocol is needed in the research literature for individuals trained in the mechanics of water running, given the popularity of water running rehabilitative exercise programs and training programs. PMID:24260003
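The reported prediction equation is straightforward to apply; a tiny illustrative helper (function name assumed):

```python
def predict_vo2max(percent_body_fat):
    """Estimate VO2max (ml/kg/min) from percent body fat using the study's
    regression equation: VO2max = 56.14 - 0.92 * (% BF)."""
    return 56.14 - 0.92 * percent_body_fat
```

For example, a participant at 20% body fat would be predicted at roughly 37.7 ml/kg/min.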
Cyclostationary analysis with logarithmic variance stabilisation
NASA Astrophysics Data System (ADS)
Borghesani, Pietro; Shahriar, Md Rifat
2016-03-01
Second order cyclostationary (CS2) components in vibration or acoustic emission signals are typical symptoms of a wide variety of faults in rotating and alternating mechanical systems. The square envelope spectrum (SES), obtained via Hilbert transform of the original signal, is at the basis of the most common indicators used for detection of CS2 components. It has been shown that the SES is equivalent to an autocorrelation of the signal's discrete Fourier transform, and that CS2 components are a cause of high correlations in the frequency domain of the signal, thus resulting in peaks in the SES. Statistical tests have been proposed to determine if peaks in the SES are likely to belong to a normal variability in the signal or if they are proper symptoms of CS2 components. Despite the need for automated fault recognition and the theoretical soundness of these tests, this approach to machine diagnostics has been mostly neglected in industrial applications. In fact, in a series of experimental applications, even with proper pre-whitening steps, it has been found that healthy machines might produce high spectral correlations and therefore result in a highly biased SES distribution which might cause a series of false positives. In this paper a new envelope spectrum is defined, with the theoretical intent of rendering the hypothesis test variance-free. This newly proposed indicator will prove unbiased in case of multiple CS2 sources of spectral correlation, thus reducing the risk of false alarms.
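The square envelope spectrum described here can be sketched directly from its definition: square the envelope of the analytic signal (FFT-based Hilbert transform) and take its magnitude spectrum. An illustrative reconstruction, not the authors' code:

```python
import numpy as np

def square_envelope_spectrum(x, fs):
    """Square envelope spectrum: magnitude spectrum of the squared envelope
    of the analytic signal (FFT-based Hilbert transform)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    X = np.fft.fft(x)
    # Build the analytic signal: zero negative frequencies, double positives.
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)
    env2 = np.abs(analytic) ** 2               # squared envelope
    ses = np.abs(np.fft.rfft(env2))            # its magnitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, ses
```

For an amplitude-modulated test tone, the dominant non-DC SES peak falls at the modulation frequency, which is the signature used for fault detection.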
Automatic variance analysis of multistage care pathways.
Li, Xiang; Liu, Haifeng; Zhang, Shilei; Mei, Jing; Xie, Guotong; Yu, Yiqin; Li, Jing; Lakshmanan, Geetika T
2014-01-01
A care pathway (CP) is a standardized process that consists of multiple care stages, clinical activities and their relations, aimed at ensuring and enhancing the quality of care. However, actual care may deviate from the planned CP, and analysis of these deviations can help clinicians refine the CP and reduce medical errors. In this paper, we propose a CP variance analysis method to automatically identify the deviations between actual patient traces in electronic medical records (EMR) and a multistage CP. As the care stage information is usually unavailable in EMR, we first align every trace with the CP using a hidden Markov model. From the aligned traces, we report three types of deviations for every care stage: additional activities, absent activities and violated constraints, which are identified by using the techniques of temporal logic and binomial tests. The method has been applied to a CP for the management of congestive heart failure and real world EMR, providing meaningful evidence for the further improvement of care quality. PMID:25160280
NASA Technical Reports Server (NTRS)
Clauson, J.; Heuser, J.
1981-01-01
The Applications Data Service (ADS) is a system based on an electronic data communications network which will permit scientists to share the data stored in data bases at universities and at government and private installations. It is designed to allow users to readily locate and access high quality, timely data from multiple sources. The ADS Pilot program objectives and the current plans for accomplishing those objectives are described.
Exploring Unique Roles for Psychologists
ERIC Educational Resources Information Center
Ahmed, Mohiuddin; Boisvert, Charles M.
2005-01-01
This paper presents comments on "Psychological Treatments" by D. H. Barlow. Barlow highlighted unique roles that psychologists can play in mental health service delivery by providing psychological treatments--treatments that psychologists would be uniquely qualified to design and deliver. In support of Barlow's position, the authors draw from…
ERIC Educational Resources Information Center
Shipman, Barbara A.
2013-01-01
This article analyzes four questions on the meaning of uniqueness that have contrasting answers in common language versus mathematical language. The investigations stem from a scenario in which students interpreted uniqueness according to a definition from standard English, that is, different from the mathematical meaning, in defining an injective…
Estimating the encounter rate variance in distance sampling
Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.
2009-01-01
The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
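Treating the design as a simple random sample of lines, the encounter rate n/L gets the standard ratio-estimator variance. A minimal sketch of that baseline estimator (not the improved design-based estimators the paper develops); the interface is illustrative:

```python
import numpy as np

def encounter_rate_variance(counts, lengths):
    """Variance of the encounter rate n/L across K transect lines, treating
    lines as a simple random sample (ratio-estimator form):
    var(n/L) = K / (L^2 (K-1)) * sum_k l_k^2 (n_k/l_k - n/L)^2."""
    n_k = np.asarray(counts, dtype=float)   # detections per line
    l_k = np.asarray(lengths, dtype=float)  # line lengths
    K, L, n = l_k.size, l_k.sum(), n_k.sum()
    er = n / L                              # overall encounter rate
    return K / (L ** 2 * (K - 1)) * np.sum(l_k ** 2 * (n_k / l_k - er) ** 2)
```

When every line has the same encounter rate the estimate is zero, and it grows with between-line spread, which is the behaviour the paper's comparisons hinge on.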
Variance analysis. Part II, The use of computers.
Finkler, S A
1991-09-01
This is the second in a two-part series on variance analysis. In the first article (JONA, July/August 1991), the author discussed flexible budgeting, including the calculation of price, quantity, volume, and acuity variances. In this second article, the author focuses on the use of computers by nurse managers to aid in the process of calculating, understanding, and justifying variances. PMID:1919788
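The price and quantity variances referred to here follow conventional flexible-budgeting definitions; a minimal sketch of the arithmetic a spreadsheet or program would perform (names and sign convention are illustrative, not from the article):

```python
def budget_variances(actual_price, budget_price, actual_qty, budget_qty):
    """Decompose total spending variance into price and quantity components.

    Conventional flexible-budget definitions; a positive value means
    spending above budget for that component.
    """
    price_var = (actual_price - budget_price) * actual_qty
    qty_var = (actual_qty - budget_qty) * budget_price
    total = actual_price * actual_qty - budget_price * budget_qty
    # the two components sum exactly to the total variance
    assert abs(price_var + qty_var - total) < 1e-9
    return price_var, qty_var, total
```

For example, paying $11 instead of a budgeted $10 for 90 units when 100 were budgeted yields an unfavorable price variance and a favorable quantity variance that net to the total.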
Functional Analysis of Variance for Association Studies
Vsevolozhskaya, Olga A.; Zaykin, Dmitri V.; Greenwood, Mark C.; Wei, Changshuai; Lu, Qing
2014-01-01
While progress has been made in identifying common genetic variants associated with human diseases, for most common complex diseases the identified genetic variants account for only a small proportion of heritability. Challenges remain in finding additional unknown genetic variants predisposing to complex diseases. With the advance of next-generation sequencing technologies, sequencing studies have become commonplace in genetic research. The ongoing exome-sequencing and whole-genome-sequencing studies generate a massive amount of sequencing variants and allow researchers to comprehensively investigate their role in human diseases. The discovery of new disease-associated variants can be enhanced by utilizing powerful and computationally efficient statistical methods. In this paper, we propose a functional analysis of variance (FANOVA) method for testing an association of sequence variants in a genomic region with a qualitative trait. The FANOVA has a number of advantages: (1) it tests for a joint effect of gene variants, including both common and rare; (2) it fully utilizes linkage disequilibrium and genetic position information; and (3) it allows for either protective or risk-increasing causal variants. Through simulations, we show that FANOVA outperforms two popular methods, SKAT and a previously proposed method based on functional linear models (FLM), especially when the sample size of a study is small and/or the sequence variants have low to moderate effects. We conduct an empirical study by applying the three methods (FANOVA, SKAT, and FLM) to sequencing data from the Dallas Heart Study. While SKAT and FLM detected ANGPTL4 and ANGPTL3, respectively, as associated with obesity, FANOVA was able to identify both genes as associated with obesity. PMID:25244256
Network Structure and Biased Variance Estimation in Respondent Driven Sampling
Verdery, Ashton M.; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J.
2015-01-01
This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927
Estimation of Variance Components of Quantitative Traits in Inbred Populations
Abney, Mark; McPeek, Mary Sara; Ober, Carole
2000-01-01
Use of variance-component estimation for mapping of quantitative-trait loci in humans is a subject of great current interest. When only trait values, not genotypic information, are considered, variance-component estimation can also be used to estimate heritability of a quantitative trait. Inbred pedigrees present special challenges for variance-component estimation. First, there are more variance components to be estimated in the inbred case, even for a relatively simple model including additive, dominance, and environmental effects. Second, more identity coefficients need to be calculated from an inbred pedigree in order to perform the estimation, and these are computationally more difficult to obtain in the inbred than in the outbred case. As a result, inbreeding effects have generally been ignored in practice. We describe here the calculation of identity coefficients and estimation of variance components of quantitative traits in large inbred pedigrees, using the example of HDL in the Hutterites. We use a multivariate normal model for the genetic effects, extending the central-limit theorem of Lange to allow for both inbreeding and dominance under the assumptions of our variance-component model. We use simulated examples to indicate the conditions under which one has the power to detect the additional variance components and to examine their impact on variance-component estimation. We discuss the implications for mapping and heritability estimation by use of variance components in inbred populations. PMID:10677322
Multiperiod Mean-Variance Portfolio Optimization via Market Cloning
Ankirchner, Stefan; Dermoune, Azzouz
2011-08-15
The problem of finding the mean variance optimal portfolio in a multiperiod model can not be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean variance problem.
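The market-cloning construction itself is beyond a short sketch, but the weighted sum of mean and variance that the derived portfolios maximize has a well-known closed form in the single-period, unconstrained case. This illustrative snippet (assumptions: invertible covariance matrix, no constraints; not the paper's algorithm) shows that building block:

```python
import numpy as np

def mean_variance_weights(mu, Sigma, gamma):
    """Maximize w'mu - gamma * w'Sigma w over unconstrained weights w.

    First-order condition mu = 2 gamma Sigma w gives the closed form
    w = Sigma^{-1} mu / (2 gamma), computed here via a linear solve.
    """
    return np.linalg.solve(2.0 * gamma * Sigma, mu)
```

Larger gamma penalizes variance more heavily and shrinks every position proportionally.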
Assured Information Sharing for Ad-Hoc Collaboration
ERIC Educational Resources Information Center
Jin, Jing
2009-01-01
Collaborative information sharing tends to be highly dynamic and often ad hoc among organizations. The dynamic natures and sharing patterns in ad-hoc collaboration impose a need for a comprehensive and flexible approach to reflecting and coping with the unique access control requirements associated with the environment. This dissertation…
Uniqueness of the momentum map
NASA Astrophysics Data System (ADS)
Esposito, Chiara; Nest, Ryszard
2016-08-01
We give a detailed discussion of existence and uniqueness of the momentum map associated to Poisson Lie actions, which was defined by Lu. We introduce a weaker notion of momentum map, called the infinitesimal momentum map, which is defined on one-forms, and we analyze its integrability to Lu's momentum map. Finally, the uniqueness of Lu's momentum map is studied by describing explicitly the tangent space to the space of momentum maps.
The Placenta Harbors a Unique Microbiome
Aagaard, Kjersti; Ma, Jun; Antony, Kathleen M.; Ganu, Radhika; Petrosino, Joseph; Versalovic, James
2016-01-01
Humans and their microbiomes have coevolved as a physiologic community composed of distinct body site niches with metabolic and antigenic diversity. The placental microbiome has not been robustly interrogated, despite recent demonstrations of intracellular bacteria with diverse metabolic and immune regulatory functions. A population-based cohort of placental specimens collected under sterile conditions from 320 subjects with extensive clinical data was established for comparative 16S ribosomal DNA–based and whole-genome shotgun (WGS) metagenomic studies. Identified taxa and their gene carriage patterns were compared to other human body site niches, including the oral, skin, airway (nasal), vaginal, and gut microbiomes from nonpregnant controls. We characterized a unique placental microbiome niche, composed of nonpathogenic commensal microbiota from the Firmicutes, Tenericutes, Proteobacteria, Bacteroidetes, and Fusobacteria phyla. In aggregate, the placental microbiome profiles were most akin (Bray-Curtis dissimilarity <0.3) to the human oral microbiome. 16S-based operational taxonomic unit analyses revealed associations of the placental microbiome with a remote history of antenatal infection (permutational multivariate analysis of variance, P = 0.006), such as urinary tract infection in the first trimester, as well as with preterm birth <37 weeks (P = 0.001). PMID:24848255
ERIC Educational Resources Information Center
Castellanos-Ryan, Natalie; Conrod, Patricia J.
2011-01-01
Externalising behaviours such as substance misuse (SM) and conduct disorder (CD) symptoms frequently co-occur in adolescence. While disinhibited personality traits have been consistently linked to externalising behaviours, there is evidence that these traits may relate differentially to SM and CD. The current study aimed to assess whether this was the…
Uniqueness of place: uniqueness of models. The FLEX modelling approach
NASA Astrophysics Data System (ADS)
Fenicia, F.; Savenije, H. H. G.; Wrede, S.; Schoups, G.; Pfister, L.
2009-04-01
The current practice in hydrological modelling is to make use of model structures that are fixed and defined a priori. However, for a model to reflect uniqueness of place while maintaining parsimony, it must be flexible in its architecture. We have developed a new approach for the development and testing of hydrological models, named the FLEX approach. This approach allows the formulation of alternative model structures that vary in configuration and complexity, and uses an objective method for testing and comparing model performance. We have tested this approach on three headwater catchments in Luxembourg with marked differences in hydrological response, for which we generated 15 alternative model structures. Each of the three catchments is best represented by a different model architecture. Our results clearly show that uniqueness of place necessarily leads to uniqueness of models.
Innovations Without Added Costs
ERIC Educational Resources Information Center
Cereghino, Edward
1974-01-01
There is no question that we are in a tight money market, and schools are among the first institutions to feel the squeeze. Therefore, when a plan is offered that provides for innovations without added costs, it's something worth noting. (Editor)
ERIC Educational Resources Information Center
Richards, Andrew
2015-01-01
Two quantitative measures of school performance are currently used, the average points score (APS) at Key Stage 2 and value-added (VA), which measures the rate of academic improvement between Key Stage 1 and 2. These figures are used by parents and the Office for Standards in Education to make judgements and comparisons. However, simple…
7 CFR 718.105 - Tolerances, variances, and adjustments.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 7 2014-01-01 2014-01-01 false Tolerances, variances, and adjustments. 718.105 Section 718.105 Agriculture Regulations of the Department of Agriculture (Continued) FARM SERVICE AGENCY... APPLICABLE TO MULTIPLE PROGRAMS Determination of Acreage and Compliance § 718.105 Tolerances, variances,...
7 CFR 718.105 - Tolerances, variances, and adjustments.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 7 2012-01-01 2012-01-01 false Tolerances, variances, and adjustments. 718.105 Section 718.105 Agriculture Regulations of the Department of Agriculture (Continued) FARM SERVICE AGENCY... APPLICABLE TO MULTIPLE PROGRAMS Determination of Acreage and Compliance § 718.105 Tolerances, variances,...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 36 Parks, Forests, and Public Property 1 2013-07-01 2013-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 36 Parks, Forests, and Public Property 1 2014-07-01 2014-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
36 CFR 27.4 - Variances and exceptions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 36 Parks, Forests, and Public Property 1 2012-07-01 2012-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...
Variances and Covariances of Kendall's Tau and Their Estimation.
ERIC Educational Resources Information Center
Cliff, Norman; Charlin, Ventura
1991-01-01
Variance formulas of H. E. Daniels and M. G. Kendall (1947) are generalized to allow for the presence of ties and variance of the sample tau correlation. Applications of these generalized formulas are discussed and illustrated using data from a 1965 study of contraceptive use in 15 developing countries. (SLD)
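For untied data, the baseline result that Daniels and Kendall's formulas generalize is the variance of tau under independence; a brute-force sketch (the tie-corrected formulas treated in the article are more involved than this):

```python
def _sign(v):
    return (v > 0) - (v < 0)

def kendall_tau_and_null_variance(x, y):
    """Kendall's tau (tau-a, no ties) and its variance under independence.

    tau = 2 S / (n (n-1)), where S sums sign agreements over all pairs;
    Var(tau) = 2 (2n + 5) / (9 n (n - 1)) under the null (Kendall, 1947).
    """
    n = len(x)
    s = sum(_sign(x[i] - x[j]) * _sign(y[i] - y[j])
            for i in range(n) for j in range(i + 1, n))
    tau = 2 * s / (n * (n - 1))
    var_null = 2 * (2 * n + 5) / (9 * n * (n - 1))
    return tau, var_null
```

This O(n^2) loop is fine for illustration; production code would use an O(n log n) merge-sort counting of discordant pairs.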
Characterizing the evolution of genetic variance using genetic covariance tensors.
Hine, Emma; Chenoweth, Stephen F; Rundle, Howard D; Blows, Mark W
2009-06-12
Determining how genetic variance changes under selection in natural populations has proved to be a very resilient problem in evolutionary genetics. In the same way that understanding the availability of genetic variance within populations requires the simultaneous consideration of genetic variance in sets of functionally related traits, determining how genetic variance changes under selection in natural populations will require ascertaining how genetic variance-covariance (G) matrices evolve. Here, we develop a geometric framework using higher-order tensors, which enables the empirical characterization of how G matrices have diverged among populations. We then show how divergence among populations in genetic covariance structure can be associated with divergence in selection acting on those traits using key equations from evolutionary theory. Using estimates of G matrices of eight male sexually selected traits from nine geographical populations of Drosophila serrata, we show that much of the divergence in genetic variance occurred in a single trait combination, a conclusion that could not have been reached by examining variation among the individual elements of the nine G matrices. Divergence in G was primarily in the direction of the major axes of genetic variance within populations, suggesting that genetic drift may be a major cause of divergence in genetic variance among these populations. PMID:19414471
40 CFR 52.1390 - Missoula variance provision.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 4 2014-07-01 2014-07-01 false Missoula variance provision. 52.1390 Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) Montana § 52.1390 Missoula variance provision. The Missoula City-County...
A Computer Program to Determine Reliability Using Analysis of Variance
ERIC Educational Resources Information Center
Burns, Edward
1976-01-01
A computer program, written in Fortran IV, is described which assesses reliability by using analysis of variance. It produces a complete analysis of variance table in addition to reliability coefficients for unadjusted and adjusted data as well as the intraclass correlation for m subjects and n items. (Author)
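A sketch of the core computation such a program performs: a two-way ANOVA decomposition of an m-subjects by n-items score matrix, with Hoyt's ANOVA-based reliability coefficient. This is an illustration of the technique, not the Fortran program described:

```python
def hoyt_reliability(scores):
    """Hoyt reliability r = (MS_subjects - MS_error) / MS_subjects
    from a two-way ANOVA of an m x n score matrix (list of m rows)."""
    m, n = len(scores), len(scores[0])
    grand = sum(map(sum, scores)) / (m * n)
    row_means = [sum(row) / n for row in scores]
    col_means = [sum(scores[i][j] for i in range(m)) / m for j in range(n)]
    ss_subjects = n * sum((rm - grand) ** 2 for rm in row_means)
    ss_items = m * sum((cm - grand) ** 2 for cm in col_means)
    ss_total = sum((scores[i][j] - grand) ** 2
                   for i in range(m) for j in range(n))
    ss_error = ss_total - ss_subjects - ss_items  # residual sum of squares
    ms_subjects = ss_subjects / (m - 1)
    ms_error = ss_error / ((m - 1) * (n - 1))
    return (ms_subjects - ms_error) / ms_subjects
```

When the items rank subjects identically, the residual is zero and the coefficient is 1; noisier items shrink it toward zero.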
29 CFR 1904.38 - Variances from the recordkeeping rule.
Code of Federal Regulations, 2010 CFR
2010-07-01
... process your variance petition. (i) The Assistant Secretary will offer your employees and their authorized... the facts or conduct that may warrant revocation of your variance; and (ii) Provide you, your employees, and authorized employee representatives with an opportunity to participate in the...
Productive Failure in Learning the Concept of Variance
ERIC Educational Resources Information Center
Kapur, Manu
2012-01-01
In a study with ninth-grade mathematics students on learning the concept of variance, students experienced either direct instruction (DI) or productive failure (PF), wherein they were first asked to generate a quantitative index for variance without any guidance before receiving DI on the concept. Whereas DI students relied only on the canonical…
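The canonical quantitative index that direct instruction ultimately targets is the mean squared deviation from the mean; a minimal sketch (interface is illustrative):

```python
def variance(data, sample=False):
    """Canonical index of variance: average squared deviation from the mean.

    With sample=True, divides by n - 1 (the unbiased sample variance)
    instead of n (the population variance).
    """
    n = len(data)
    mean = sum(data) / n
    ss = sum((x - mean) ** 2 for x in data)
    return ss / (n - 1 if sample else n)
```

Student-generated indices in such studies often differ from this form (e.g., using absolute deviations or the range), which is what makes the comparison with the canonical formula instructive.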
10 CFR 52.93 - Exemptions and variances.
Code of Federal Regulations, 2010 CFR
2010-01-01
... CFR 52.7, and that the special circumstances outweigh any decrease in safety that may result from the... 10 Energy 2 2010-01-01 2010-01-01 false Exemptions and variances. 52.93 Section 52.93 Energy... Combined Licenses § 52.93 Exemptions and variances. (a) Applicants for a combined license under...
Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances
ERIC Educational Resources Information Center
Jan, Show-Li; Shieh, Gwowen
2014-01-01
The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…
A Study of Variance Estimation Methods. Working Paper Series.
ERIC Educational Resources Information Center
Zhang, Fan; Weng, Stanley; Salvucci, Sameena; Hu, Ming-xiu
This working paper contains reports of five studies of variance estimation methods. The first, An Empirical Study of Poststratified Estimator, by Fan Zhang uses data from the National Household Education Survey to illustrate use of poststratified estimation. The second paper, BRR Variance Estimation Using BPLX Hadamard Procedure, by Stanley Weng…
21 CFR 821.2 - Exemptions and variances.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Exemptions and variances. 821.2 Section 821.2 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE TRACKING REQUIREMENTS General Provisions § 821.2 Exemptions and variances. (a) A manufacturer, importer, or distributor...
40 CFR 142.40 - Requirements for a variance.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Section 142.40 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator... one or more variances to any public water system within a State that does not have primary...
Relating the Hadamard Variance to MCS Kalman Filter Clock Estimation
NASA Technical Reports Server (NTRS)
Hutsell, Steven T.
1996-01-01
The Global Positioning System (GPS) Master Control Station (MCS) currently makes significant use of the Allan Variance. This two-sample variance equation has proven excellent as a handy, understandable tool, both for time domain analysis of GPS cesium frequency standards, and for fine tuning the MCS's state estimation of these atomic clocks. The Allan Variance does not explicitly converge for noise types of alpha less than or equal to minus 3 and can be greatly affected by frequency drift. Because GPS rubidium frequency standards exhibit non-trivial aging and aging noise characteristics, the basic Allan Variance analysis must be augmented in order to (a) compensate for a dynamic frequency drift, and (b) characterize two additional noise types, specifically alpha = minus 3 and alpha = minus 4. As the GPS program progresses, we will utilize a larger percentage of rubidium frequency standards than ever before. Hence, GPS rubidium clock characterization will require more attention than ever before. The three-sample variance, commonly referred to as a renormalized Hadamard Variance, is unaffected by linear frequency drift, converges for alpha greater than minus 5, and thus has utility for modeling noise in GPS rubidium frequency standards. This paper demonstrates the potential of Hadamard Variance analysis in GPS operations, and presents an equation that relates the Hadamard Variance to the MCS's Kalman filter process noises.
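The two-sample (Allan) and three-sample (Hadamard) variances have standard definitions in terms of adjacent averaged fractional-frequency values; a sketch illustrating why the Hadamard form is insensitive to linear frequency drift (illustrative only, not the MCS implementation):

```python
import numpy as np

def allan_variance(y):
    """Two-sample (Allan) variance of fractional-frequency averages y:
    sigma_y^2(tau) = 0.5 * <(y[i+1] - y[i])^2>."""
    d = np.diff(y)
    return 0.5 * np.mean(d ** 2)

def hadamard_variance(y):
    """Three-sample (Hadamard) variance:
    sigma_H^2(tau) = (1/6) * <(y[i+2] - 2 y[i+1] + y[i])^2>.
    Second differences cancel any linear drift term in y."""
    d2 = np.diff(y, n=2)
    return np.mean(d2 ** 2) / 6.0
```

Because the Hadamard variance is built on second differences, a linear drift contributes exactly zero to it, while it inflates the Allan variance; this is the property that makes it useful for rubidium standards with non-trivial aging.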
40 CFR 142.43 - Disposition of a variance request.
Code of Federal Regulations, 2010 CFR
2010-07-01
....43 Section 142.43 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the... variance may be terminated at any time upon a finding that the nature of the raw water source is such...
40 CFR 142.43 - Disposition of a variance request.
Code of Federal Regulations, 2011 CFR
2011-07-01
....43 Section 142.43 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the... variance may be terminated at any time upon a finding that the nature of the raw water source is such...
40 CFR 142.43 - Disposition of a variance request.
Code of Federal Regulations, 2012 CFR
2012-07-01
....43 Section 142.43 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the... variance may be terminated at any time upon a finding that the nature of the raw water source is such...
40 CFR 142.43 - Disposition of a variance request.
Code of Federal Regulations, 2013 CFR
2013-07-01
....43 Section 142.43 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the... variance may be terminated at any time upon a finding that the nature of the raw water source is such...
NASA Astrophysics Data System (ADS)
Accomazzi, Alberto; Henneken, E.; Grant, C. S.; Kurtz, M. J.; Di Milia, G.; Luker, J.; Thompson, D. M.; Bohlen, E.; Murray, S. S.
2011-05-01
ADS Labs is a platform that ADS is introducing in order to test and receive feedback from the community on new technologies and prototype services. Currently, ADS Labs features a new interface for abstract searches, faceted filtering of results, visualization of co-authorship networks, article-level recommendations, and a full-text search service. The streamlined abstract search interface provides a simple, one-box search with options for ranking results based on paper relevancy, freshness, number of citations, and downloads. In addition, it provides advanced rankings based on collaborative filtering techniques. The faceted filtering interface allows users to narrow search results based on a particular property or set of properties ("facets"), allowing users to manage large lists and explore the relationship between them. For any set or sub-set of records, the co-authorship network can be visualized in an interactive way, offering a view of the distribution of contributors and their inter-relationships. This provides an immediate way to detect groups and collaborations involved in a particular research field. For a majority of papers in Astronomy, our new interface will provide a list of related articles of potential interest. The recommendations are based on a number of factors, including text similarity, citations, and co-readership information. The new full-text search interface allows users to find all instances of particular words or phrases in the body of the articles in our full-text archive. This includes all of the scanned literature in ADS as well as a select portion of the current astronomical literature, including ApJ, ApJS, AJ, MNRAS, PASP, A&A, and soon additional content from Springer journals. Full-text search results include a list of the matching papers as well as a list of "snippets" of text highlighting the context in which the search terms were found. ADS Labs is available at http://adslabs.org
An efficient method to evaluate energy variances for extrapolation methods
NASA Astrophysics Data System (ADS)
Puddu, G.
2012-08-01
The energy variance extrapolation method consists of relating the approximate energies in many-body calculations to the corresponding energy variances and inferring eigenvalues by extrapolating to zero variance. The method needs a fast evaluation of the energy variances. For many-body methods that expand the nuclear wavefunctions in terms of deformed Slater determinants, the best available method for the evaluation of energy variances scales with the sixth power of the number of single-particle states. We propose a new method which depends on the number of single-particle orbits and the number of particles rather than the number of single-particle states. We discuss as an example the case of 4He using the chiral N3LO interaction in a basis consisting of up to 184 single-particle states.
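The extrapolation step itself is simple once the variances are in hand: fit the approximate energies against their variances and read off the zero-variance intercept. A schematic sketch (the paper's contribution is the fast variance evaluation, not this fit; names are illustrative):

```python
import numpy as np

def extrapolate_to_zero_variance(variances, energies, deg=1):
    """Fit E(sigma^2) with a low-order polynomial in the energy variance
    and return the extrapolated eigenvalue E(sigma^2 = 0)."""
    coeffs = np.polyfit(variances, energies, deg)
    return np.polyval(coeffs, 0.0)
```

Each (variance, energy) pair would come from an increasingly refined many-body calculation; as the wavefunction approaches an exact eigenstate, its energy variance approaches zero.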
Utility functions predict variance and skewness risk preferences in monkeys
Genest, Wilfried; Stauffer, William R.; Schultz, Wolfram
2016-01-01
Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals’ preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively skewed gambles over symmetrical and negatively skewed ones in a strongly transitive fashion. Thus, the utility functions predicted the animals’ preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys’ choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences. PMID:27402743
Variance After-Effects Distort Risk Perception in Humans.
Payzan-LeNestour, Elise; Balleine, Bernard W; Berrada, Tony; Pearson, Joel
2016-06-01
In many contexts, decision-making requires an accurate representation of outcome variance, otherwise known as "risk" in economics. Conventional economic theory assumes this representation to be perfect, thereby focusing on risk preferences rather than risk perception per se [1-3] (but see [4]). However, humans often misrepresent their physical environment. Perhaps the most striking of such misrepresentations are the many well-known sensory after-effects, which most commonly involve visual properties, such as color, contrast, size, and motion. For example, viewing downward motion of a waterfall induces the anomalous biased experience of upward motion during subsequent viewing of static rocks to the side [5]. Given that after-effects are pervasive, occurring across a wide range of time horizons [6] and stimulus dimensions (including properties such as face perception [7, 8], gender [9], and numerosity [10]), and that some evidence exists that neurons show adaptation to variance in the sole visual feature of motion [11], we were interested in assessing whether after-effects distort variance perception in humans. We found that perceived variance is decreased after prolonged exposure to high variance and increased after exposure to low variance within a number of different visual representations of variance. We demonstrate that these after-effects occur across very different visual representations of variance, suggesting that these effects are not sensory, but operate at a high (cognitive) level of information processing. These results suggest, therefore, that variance constitutes an independent cognitive property and that prolonged exposure to extreme variance distorts risk perception, a fundamental challenge for economic theory and practice. PMID:27161500
Code of Federal Regulations, 2014 CFR
2014-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...
Modeling variance structure of body shape traits of Lipizzan horses.
Kaps, M; Curik, I; Baban, M
2010-09-01
Heterogeneity of variance of growth traits over age is a common issue in estimating genetic parameters and is addressed in this study by selecting appropriate variance structure models for additive genetic and environmental variances. Modeling and partitioning of those variances in the analysis of small data sets were demonstrated on Lipizzan horses. The following traits were analyzed: withers height, chest girth, and cannon bone circumference. The measurements were taken at birth, and at approximately 6, 12, 24, and 36 mo of age, of 660 Lipizzan horses born in Croatia between 1948 and 2000. The corresponding pedigree file consisted of 1,458 horses. Sex, age of dam, and stud-year-season interaction were considered fixed effects; additive genetic and permanent environment effects were defined as random. Linear adjustments of age at measuring were done within measuring groups. Maternal effects were included only for measurements taken at birth and at 6 mo. Additive genetic variance structures were modeled by using uniform structures or structures based on polynomial random regression. Environmental variance structures were modeled by using one of the following models: unstructured, exponential, Gaussian, or combinations of identity or diagonal with structures based on polynomial random regression. The parameters were estimated by using REML. Comparison and fits of the models were assessed by using Akaike and Bayesian information criteria, and by checking graphically the adequacy of the shape of the overall (phenotypic) and component (additive genetic and environmental) variance functions. The best overall fit was obtained from models with unstructured error variance. Compared with the model with uniform additive genetic variance, models with structures based on random regression only slightly improved overall fit. Exponential and Gaussian models were generally not suitable because they do not adequately accommodate heterogeneity of variance. Using the unstructured
The liberal illusion of uniqueness.
Stern, Chadly; West, Tessa V; Schmitt, Peter G
2014-01-01
In two studies, we demonstrated that liberals underestimate their similarity to other liberals (i.e., display truly false uniqueness), whereas moderates and conservatives overestimate their similarity to other moderates and conservatives (i.e., display truly false consensus; Studies 1 and 2). We further demonstrated that a fundamental difference between liberals and conservatives in the motivation to feel unique explains this ideological distinction in the accuracy of estimating similarity (Study 2). Implications of the accuracy of consensus estimates for mobilizing liberal and conservative political movements are discussed. PMID:24247730
Meta-analysis of ratios of sample variances.
Prendergast, Luke A; Staudte, Robert G
2016-05-20
When conducting a meta-analysis of standardized mean differences (SMDs), it is common to use Cohen's d, or its variants, that require equal variances in the two arms of each study. While interpretation of these SMDs is simple, this alone should not be used as a justification for assuming equal variances. Until now, researchers have either used an F-test for each individual study or perhaps even conveniently ignored such tools altogether. In this paper, we propose a meta-analysis of ratios of sample variances to assess whether the equality-of-variances assumption is justified prior to a meta-analysis of SMDs. Quantile-quantile plots, an omnibus test for equal variances, or an overall meta-estimate of the ratio of variances can all be used to formally justify the use of less common methods when evidence of unequal variances is found. The methods in this paper are simple to implement, and the validity of the approaches is reinforced by simulation studies and an application to a real data set. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27062644
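The pooling idea in the abstract can be sketched as follows. This is a minimal illustration, not necessarily the authors' exact estimator: it assumes normal data, uses the standard approximation Var(log s^2) ~ 2/(n - 1), and pools log variance ratios with fixed-effect inverse-variance weights.

```python
import math

def log_variance_ratio(s2_1, n1, s2_2, n2):
    """Log ratio of two sample variances and its approximate sampling variance.

    For normal data, Var(log s^2) ~ 2/(n - 1), so the log-ratio has
    variance ~ 2/(n1 - 1) + 2/(n2 - 1).
    """
    return math.log(s2_1 / s2_2), 2.0 / (n1 - 1) + 2.0 / (n2 - 1)

def pooled_variance_ratio(studies):
    """Inverse-variance weighted meta-estimate of the variance ratio.

    `studies` is a list of (s2_1, n1, s2_2, n2) tuples, one per study.
    Returns the pooled ratio and its approximate standard error on the
    log scale.
    """
    num = den = 0.0
    for s2_1, n1, s2_2, n2 in studies:
        y, v = log_variance_ratio(s2_1, n1, s2_2, n2)
        w = 1.0 / v
        num += w * y
        den += w
    return math.exp(num / den), math.sqrt(1.0 / den)
```

A confidence interval for the pooled ratio that covers 1 would support proceeding with an equal-variances SMD; otherwise, separate-variances alternatives are warranted.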
A note on preliminary tests of equality of variances.
Zimmerman, Donald W
2004-05-01
Preliminary tests of equality of variances used before a test of location are no longer widely recommended by statisticians, although they persist in some textbooks and software packages. The present study extends the findings of previous studies and provides further reasons for discontinuing the use of preliminary tests. The study found Type I error rates of a two-stage procedure, consisting of a preliminary Levene test on samples of different sizes with unequal variances, followed by either a Student pooled-variances t test or a Welch separate-variances t test. Simulations disclosed that the two-stage procedure fails to protect the significance level and usually makes the situation worse. Earlier studies have shown that preliminary tests often adversely affect the size of the test, and also that the Welch test is superior to the t test when variances are unequal. The present simulations reveal that changes in Type I error rates are greater when sample sizes are smaller, when the difference in variances is slight rather than extreme, and when the significance level is more stringent. Furthermore, the validity of the Welch test deteriorates if it is used only on those occasions where a preliminary test indicates it is needed. Optimum protection is assured by using a separate-variances test unconditionally whenever sample sizes are unequal. PMID:15171807
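The separate-variances test the abstract recommends using unconditionally is straightforward to compute. A minimal sketch of the Welch statistic with the Welch-Satterthwaite degrees of freedom (the p-value lookup via a t distribution is omitted):

```python
import math

def welch_t(x, y):
    """Welch separate-variances t statistic and Welch-Satterthwaite df.

    Unlike the pooled t test, this does not assume equal variances, so it
    can be applied unconditionally, with no preliminary Levene test.
    """
    n1, n2 = len(x), len(y)
    m1 = sum(x) / n1
    m2 = sum(y) / n2
    v1 = sum((xi - m1) ** 2 for xi in x) / (n1 - 1)
    v2 = sum((yi - m2) ** 2 for yi in y) / (n2 - 1)
    se2 = v1 / n1 + v2 / n2
    t = (m1 - m2) / math.sqrt(se2)
    # Welch-Satterthwaite approximation to the degrees of freedom
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df
```

With equal sample variances and equal n, the df reduces to the pooled value 2(n - 1); with unequal variances it shrinks toward min(n1, n2) - 1, which is what protects the significance level.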
NASA Astrophysics Data System (ADS)
Goodman, Alyssa
We will create the first interactive sky map of astronomers' understanding of the Universe over time. We will accomplish this goal by turning the NASA Astrophysics Data System (ADS), widely known for its unrivaled value as a literature resource, into a data resource. GIS and GPS systems have made it commonplace to see and explore information about goings-on on Earth in the context of maps and timelines. Our proposal shows an example of a program that lets a user explore which countries have been mentioned in the New York Times, on what dates, and in what kinds of articles. By analogy, the goal of our project is to enable this kind of exploration-on the sky-for the full corpus of astrophysical literature available through ADS. Our group's expertise and collaborations uniquely position us to create this interactive sky map of the literature, which we call the "ADS All-Sky Survey." To create this survey, here are the principal steps we need to follow. First, by analogy to "geotagging," we will "astrotag" the ADS literature. Many "astrotags" effectively already exist, thanks to curation efforts at both CDS and NED. These efforts have created links to "source" positions on the sky associated with each of the millions of articles in the ADS. Our collaboration with ADS and CDS will let us automatically extract astrotags for all existing and future ADS holdings. The new ADS Labs, which our group helps to develop, includes the ability for researchers to filter article search results using a variety of "facets" (e.g. sources, keywords, authors, observatories, etc.). Using only extracted astrotags and facets, we can create functionality like what is described in the Times example above: we can offer a map of the density of positions' "mentions" on the sky, filterable by the properties of those mentions. Using this map, researchers will be able to interactively and visually discover what regions have been studied for what reasons, at what times, and by whom. Second, where
Variance Estimation for Myocardial Blood Flow by Dynamic PET.
Moody, Jonathan B; Murthy, Venkatesh L; Lee, Benjamin C; Corbett, James R; Ficaro, Edward P
2015-11-01
The estimation of myocardial blood flow (MBF) by (13)N-ammonia or (82)Rb dynamic PET typically relies on an empirically determined generalized Renkin-Crone equation to relate the kinetic parameter K1 to MBF. Because the Renkin-Crone equation defines MBF as an implicit function of K1, the MBF variance cannot be determined using standard error propagation techniques. To overcome this limitation, we derived novel analytical approximations that provide first- and second-order estimates of MBF variance in terms of the mean and variance of K1 and the Renkin-Crone parameters. The accuracy of the analytical expressions was validated by comparison with Monte Carlo simulations, and MBF variance was evaluated in clinical (82)Rb dynamic PET scans. For both (82)Rb and (13)N-ammonia, good agreement was observed between both (first- and second-order) analytical variance expressions and Monte Carlo simulations, with moderately better agreement for the second-order estimates. The contribution of the Renkin-Crone relation to overall MBF uncertainty was found to be as high as 68% for (82)Rb and 35% for (13)N-ammonia. For clinical (82)Rb PET data, the conventional practice of neglecting the statistical uncertainty in the Renkin-Crone parameters resulted in underestimation of the coefficient of variation of global MBF and coronary flow reserve by 14-49%. Knowledge of MBF variance is essential for assessing the precision and reliability of MBF estimates. The form and statistical uncertainty of the empirical Renkin-Crone relation can make substantial contributions to the variance of MBF. The novel analytical variance expressions derived in this work enable direct estimation of MBF variance, including this previously neglected contribution. PMID:25974932
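The first-order idea can be sketched via implicit differentiation of the generalized Renkin-Crone form K1 = F * (1 - a*exp(-b/F)). The parameter values A and B below are illustrative placeholders, not the published tracer-specific fits, and this sketch neglects the uncertainty in a and b that the paper's full expressions account for.

```python
import math

# Hypothetical Renkin-Crone parameters (illustrative only, not the
# published tracer-specific values): extraction E(F) = 1 - a*exp(-b/F).
A, B = 0.77, 0.63

def k1_of_flow(f, a=A, b=B):
    """Forward Renkin-Crone relation K1 = F * (1 - a*exp(-b/F))."""
    return f * (1.0 - a * math.exp(-b / f))

def flow_of_k1(k1, lo=1e-6, hi=20.0):
    """Invert K1 -> MBF by bisection (the relation is monotone for a < 1)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if k1_of_flow(mid) < k1:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def mbf_variance(k1_mean, k1_var):
    """First-order (delta-method) variance of MBF given mean/variance of K1.

    Var(F) ~ (dF/dK1)^2 * Var(K1), where dF/dK1 = 1 / (dK1/dF) comes from
    differentiating the implicit Renkin-Crone relation.
    """
    f = flow_of_k1(k1_mean)
    dk1_df = 1.0 - A * math.exp(-B / f) * (1.0 + B / f)
    return (1.0 / dk1_df) ** 2 * k1_var
```

Because dK1/dF < 1 over the physiologic range (the extraction fraction falls with flow), the K1 variance is amplified when propagated to MBF.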
Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation
NASA Technical Reports Server (NTRS)
Wu, Dong L.; Eckermann, Stephen D.
2008-01-01
The gravity wave (GW)-resolving capabilities of 118-GHz saturated thermal radiances acquired throughout the stratosphere by the Microwave Limb Sounder (MLS) on the Aura satellite are investigated and initial results presented. Because the saturated (optically thick) radiances resolve GW perturbations from a given altitude at different horizontal locations, variances are evaluated at 12 pressure altitudes between 21 and 51 km using the 40 saturated radiances found at the bottom of each limb scan. Forward modeling simulations show that these variances are controlled mostly by GWs with vertical wavelengths λz ≳ 5 km and horizontal along-track wavelengths λy ~ 100-200 km. The tilted cigar-shaped three-dimensional weighting functions yield highly selective responses to GWs of high intrinsic frequency that propagate toward the instrument. The latter property is used to infer the net meridional component of GW propagation by differencing the variances acquired from ascending (A) and descending (D) orbits. Because of improved vertical resolution and sensitivity, Aura MLS GW variances are 5-8 times larger than those from the Upper Atmosphere Research Satellite (UARS) MLS. Like UARS MLS variances, monthly-mean Aura MLS variances in January and July 2005 are enhanced when local background wind speeds are large, due largely to GW visibility effects. Zonal asymmetries in variance maps reveal enhanced GW activity at high latitudes due to forcing by flow over major mountain ranges and at tropical and subtropical latitudes due to enhanced deep convective generation as inferred from contemporaneous MLS cloud-ice data. At 21-28-km altitude (heights not measured by the UARS MLS), GW variance in the tropics is systematically enhanced and shows clear variations with the phase of the quasi-biennial oscillation, in general agreement with GW temperature variances derived from radiosonde, rocketsonde, and limb-scan vertical profiles.
Speckle-scale focusing in the diffusive regime with time reversal of variance-encoded light (TROVE)
NASA Astrophysics Data System (ADS)
Judkewitz, Benjamin; Wang, Ying Min; Horstmeyer, Roarke; Mathy, Alexandre; Yang, Changhuei
2013-04-01
Focusing of light in the diffusive regime inside scattering media has long been considered impossible. Recently, this limitation has been overcome with time reversal of ultrasound-encoded light (TRUE), but the resolution of this approach is fundamentally limited by the large number of optical modes within the ultrasound focus. Here, we introduce a new approach, time reversal of variance-encoded light (TROVE), which demixes these spatial modes by variance encoding to break the resolution barrier imposed by the ultrasound. By encoding individual spatial modes inside the scattering sample with unique variances, we effectively uncouple the system resolution from the size of the ultrasound focus. This enables us to demonstrate optical focusing and imaging with diffuse light at an unprecedented, speckle-scale lateral resolution of ~5 µm.
Two Virasoro symmetries in stringy warped AdS3
NASA Astrophysics Data System (ADS)
Compère, Geoffrey; Guica, Monica; Rodriguez, Maria J.
2014-12-01
We study three-dimensional consistent truncations of type IIB supergravity which admit warped AdS3 solutions. These theories contain subsectors that have no bulk dynamics. We show that the symplectic form for these theories, when restricted to the non-dynamical subsectors, equals the symplectic form for pure Einstein gravity in AdS3. Consequently, for each consistent choice of boundary conditions in AdS3, we can define a consistent phase space in warped AdS3 with identical conserved charges. This way, we easily obtain a Virasoro × Virasoro asymptotic symmetry algebra in warped AdS3; two different types of Virasoro × Kač-Moody symmetries are also consistent alternatives.
Uniquely identifying wheat plant structures
Technology Transfer Automated Retrieval System (TEKTRAN)
Uniquely naming wheat (Triticum aestivum L. em Thell) plant parts is useful for communicating plant development research and the effects of environmental stresses on normal wheat development. Over the past 30+ years, several naming systems have been proposed for wheat shoot, leaf, spike, spikelet, ...
Identity Foreclosure: A Unique Challenge
ERIC Educational Resources Information Center
Petitpas, Al
1978-01-01
Foreclosure occurs when individuals prematurely make a firm commitment to an occupation or an ideology. If the pressure of having an occupational identity can be eased, then it may be possible to establish an environment in which foreclosed students could move toward the consolidation of their unique identities. (Author)
Mesoscale Gravity Wave Variances from AMSU-A Radiances
NASA Technical Reports Server (NTRS)
Wu, Dong L.
2004-01-01
A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with the minimum detectable value as small as 0.1 K². Preliminary analyses with AMSU-A data show GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.
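The core of such a noise-removal step can be sketched as follows. This is a simplified illustration of the idea (detrend, then subtract a known noise variance), not the published AMSU-A algorithm; the linear detrend stands in for the background removal.

```python
def gw_variance(radiances, noise_var):
    """Estimate gravity-wave-induced variance from one radiance scan.

    Removes a linear background along the track by least squares, then
    subtracts the known instrument noise variance; the remainder is
    attributed to GW perturbations (clipped at zero).
    """
    n = len(radiances)
    xbar = (n - 1) / 2.0
    ybar = sum(radiances) / n
    sxy = sum((i - xbar) * (y - ybar) for i, y in enumerate(radiances))
    sxx = sum((i - xbar) ** 2 for i in range(n))
    slope = sxy / sxx
    resid = [y - ybar - slope * (i - xbar) for i, y in enumerate(radiances)]
    total_var = sum(r * r for r in resid) / (n - 1)
    return max(total_var - noise_var, 0.0)
```

The clipping at zero is what makes a minimum detectable variance meaningful: fluctuations below the noise floor are reported as zero rather than as negative variances.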
Evans, Nick
2016-09-12
Essential facts Leading Change, Adding Value is NHS England's new nursing and midwifery framework. It is designed to build on Compassion in Practice (CiP), which was published 3 years ago and set out the 6Cs: compassion, care, commitment, courage, competence and communication. CiP established the values at the heart of nursing and midwifery, while the new framework sets out how staff can help transform the health and care sectors to meet the aims of the NHS England's Five Year Forward View. PMID:27615573
Some variance reduction methods for numerical stochastic homogenization.
Blanc, X; Le Bris, C; Legoll, F
2016-04-28
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. PMID:27002065
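One classical member of the family of variance reduction techniques the abstract surveys is antithetic variates. The sketch below illustrates the principle on a plain one-dimensional integral; it is not the authors' corrector-problem machinery, just the generic device of pairing each sample with its mirror image to induce negative correlation.

```python
import random

def mc_plain(f, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(rng.random()) for _ in range(n)) / n

def mc_antithetic(f, n_pairs, rng):
    """Antithetic-variates estimate: average f(U) and f(1 - U) in pairs.

    For monotone f the two pair members are negatively correlated, so
    their average has lower variance than two independent draws, at the
    same sampling cost.
    """
    total = 0.0
    for _ in range(n_pairs):
        u = rng.random()
        total += 0.5 * (f(u) + f(1.0 - u))
    return total / n_pairs
```

In the homogenization setting the role of `f` is played by the (expensive) map from a medium configuration to the approximate effective coefficient, which is why even modest variance reductions translate into large savings.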
RISK ANALYSIS, ANALYSIS OF VARIANCE: GETTING MORE FROM OUR DATA
Technology Transfer Automated Retrieval System (TEKTRAN)
Analysis of variance (ANOVA) and regression are common statistical techniques used to analyze agronomic experimental data and determine significant differences among yields due to treatments or other experimental factors. Risk analysis provides an alternate and complementary examination of the same...
40 CFR 142.42 - Consideration of a variance request.
Code of Federal Regulations, 2010 CFR
2010-07-01
... contaminant level required by the national primary drinking water regulations because of the nature of the raw... effectiveness of treatment methods for the contaminant for which the variance is requested. (2) Cost and...
40 CFR 141.4 - Variances and exemptions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... subparts H, P, S, T, W, and Y of this part. ... total coliforms and E. coli and variances from any of the treatment technique requirements of subpart H... Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER...
A multicomb variance reduction scheme for Monte Carlo semiconductor simulators
Gray, M.G.; Booth, T.E.; Kwan, T.J.T.; Snell, C.M.
1998-04-01
The authors adapt a multicomb variance reduction technique used in neutral particle transport to Monte Carlo microelectronic device modeling. They implement the method in a two-dimensional (2-D) MOSFET device simulator and demonstrate its effectiveness in the study of hot electron effects. The simulations show that the statistical variance of hot electrons is significantly reduced with minimal computational cost. The method is efficient, versatile, and easy to implement in existing device simulators.
Not Available
1985-06-01
Consafe is now using a computer-aided design and drafting system to adapt its multipurpose support vessels (MSVs) to specific user requirements. The vessels are based on the concept of standard container modules adapted into living quarters, workshops, service units, and offices, with each application for a specific project demanding a unique mix. There is also a need for a constant refurbishment program, as service conditions take their toll on the modules. The computer-aided design system is described.
On variance estimate for covariate adjustment by propensity score analysis.
Zou, Baiming; Zou, Fei; Shuster, Jonathan J; Tighe, Patrick J; Koch, Gary G; Zhou, Haibo
2016-09-10
Propensity score (PS) methods have been used extensively to adjust for confounding factors in the statistical analysis of observational data in comparative effectiveness research. There are four major PS-based adjustment approaches: PS matching, PS stratification, covariate adjustment by PS, and PS-based inverse probability weighting. Though covariate adjustment by PS is one of the most frequently used PS-based methods in clinical research, the conventional variance estimation of the treatment effects estimate under covariate adjustment by PS is biased. As Stampf et al. have shown, this bias in variance estimation is likely to lead to invalid statistical inference and could result in erroneous public health conclusions (e.g., food and drug safety and adverse events surveillance). To address this issue, we propose a two-stage analytic procedure to develop a valid variance estimator for the covariate adjustment by PS analysis strategy. We also carry out a simple empirical bootstrap resampling scheme. Both proposed procedures are implemented in an R function for public use. Extensive simulation results demonstrate the bias in the conventional variance estimator and show that both proposed variance estimators offer valid estimates for the true variance, and they are robust to complex confounding structures. The proposed methods are illustrated for a post-surgery pain study. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26999553
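The empirical bootstrap scheme mentioned in the abstract is generic: resample subjects, refit, and take the variance of the replicates. A minimal sketch follows; the `estimator` argument is a stand-in, and in the paper's setting it would be the full covariate-adjustment-by-PS fit rather than the simple summary used in the test below.

```python
import random

def bootstrap_variance(data, estimator, n_boot=1000, seed=0):
    """Empirical bootstrap variance of an arbitrary estimator.

    Resamples the observations with replacement, re-applies the
    estimator, and returns the sample variance of the replicates. The
    same scheme applies whether `estimator` is a simple mean or a full
    covariate-adjusted treatment-effect analysis.
    """
    rng = random.Random(seed)
    n = len(data)
    reps = []
    for _ in range(n_boot):
        sample = [data[rng.randrange(n)] for _ in range(n)]
        reps.append(estimator(sample))
    m = sum(reps) / n_boot
    return sum((r - m) ** 2 for r in reps) / (n_boot - 1)
```

The key property, as the abstract notes, is validity: unlike the conventional model-based variance under covariate adjustment by PS, the bootstrap reflects the estimation of the PS itself because each replicate refits the whole procedure.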
Analytic variance estimates of Swank and Fano factors
Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank
2014-07-15
Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
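The two metrics themselves are simple functions of the accumulated moments. A minimal sketch (the Swank factor is written as M1^2/(M0*M2) with the M0 = 1 normalization; the paper's variance estimators for these metrics are not reproduced here):

```python
def swank_and_fano(outputs):
    """Swank factor M1^2/(M0*M2) and Fano factor Var/Mean from samples.

    The moments are plain accumulations over detector outputs, so both
    metrics can be tracked during a Monte Carlo run without storing the
    full output history.
    """
    n = len(outputs)
    m1 = sum(outputs) / n
    m2 = sum(x * x for x in outputs) / n
    var = m2 - m1 * m1
    swank = m1 * m1 / m2  # M0 = 1 for a normalized distribution
    fano = var / m1
    return swank, fano
```

The Swank factor lies in (0, 1], equaling 1 only for a deterministic detector output, which is why it is used as an energy-resolution figure of merit.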
Variance estimation for systematic designs in spatial surveys.
Fewster, R M
2011-12-01
In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation. PMID:21534940
NASA Astrophysics Data System (ADS)
Fakhri, Hossein; Imaanpur, Ali
2003-03-01
In this article we construct the chirality and Dirac operators on noncommutative AdS2. We also derive the discrete spectrum of the Dirac operator which is important in the study of the spectral triple associated to AdS2. It is shown that the degeneracy of the spectrum present in the commutative AdS2 is lifted in the noncommutative case. The way we construct the chirality operator is suggestive of how to introduce the projector operators of the corresponding projective modules on this space.
NASA Astrophysics Data System (ADS)
Molina-Vilaplana, Javier; Sierra, Germán
2013-12-01
In this paper we formulate the xp model on the AdS2 spacetime. We find that the spectrum of the Hamiltonian has positive and negative eigenvalues, whose absolute values are given by a harmonic oscillator spectrum, which in turn coincides with that of a massive Dirac fermion in AdS2. We extend this result to generic xp models which are shown to be equivalent to a massive Dirac fermion on spacetimes whose metric depend of the xp Hamiltonian. Finally, we construct the generators of the isometry group SO(2,1) of the AdS2 spacetime, and discuss the relation with conformal quantum mechanics.
Increasing Genetic Variance of Body Mass Index during the Swedish Obesity Epidemic
Rokholm, Benjamin; Silventoinen, Karri; Tynelius, Per; Gamborg, Michael; Sørensen, Thorkild I. A.; Rasmussen, Finn
2011-01-01
Background and Objectives There is no doubt that the dramatic worldwide increase in obesity prevalence is due to changes in environmental factors. However, twin and family studies suggest that genetic differences are responsible for the major part of the variation in adiposity within populations. Recent studies show that the genetic effects on body mass index (BMI) may be stronger when combined with presumed risk factors for obesity. We tested the hypothesis that the genetic variance of BMI has increased during the obesity epidemic. Methods The data comprised height and weight measurements of 1,474,065 Swedish conscripts at age 18–19 y born between 1951 and 1983. The data were linked to the Swedish Multi-Generation Register and the Swedish Twin Register from which 264,796 full-brother pairs, 1,736 monozygotic (MZ) and 1,961 dizygotic (DZ) twin pairs were identified. The twin pairs were analysed to identify the most parsimonious model for the genetic and environmental contribution to BMI variance. The full-brother pairs were subsequently divided into subgroups by year of birth to investigate trends in the genetic variance of BMI. Results The twin analysis showed that BMI variation could be explained by additive genetic and environmental factors not shared by co-twins. On the basis of the analyses of the full-siblings, the additive genetic variance of BMI increased from 4.3 [95% CI 4.04–4.53] to 7.9 [95% CI 7.28–8.54] within the study period, as did the unique environmental variance, which increased from 1.4 [95% CI 1.32–1.48] to 2.0 [95% CI 1.89–2.22]. The BMI heritability increased from 75% to 78.8%. Conclusion The results confirm the hypothesis that the additive genetic variance of BMI has increased strongly during the obesity epidemic. This suggests that the obesogenic environment has enhanced the influence of adiposity related genes. PMID:22087252
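The decomposition into additive genetic and unique environmental variance can be approximated from twin correlations alone. The sketch below uses Falconer's formulas as a back-of-envelope alternative to the structural model fitting actually performed in the study; it assumes the classical ACE model with no dominance or assortative mating.

```python
def falconer_ace(r_mz, r_dz):
    """Falconer decomposition of phenotypic variance from twin correlations.

    Returns the proportions of variance attributed to additive genetics
    (A, i.e. heritability), shared environment (C), and unique
    environment (E), from monozygotic and dizygotic twin correlations.
    """
    a2 = 2.0 * (r_mz - r_dz)   # MZ twins share all, DZ half, of additive effects
    c2 = 2.0 * r_dz - r_mz     # shared environment contributes equally to both
    e2 = 1.0 - r_mz            # whatever MZ twins do not share
    return a2, c2, e2
```

Note these are proportions of the total; the study's finding is about the absolute additive genetic variance increasing over birth cohorts, which requires the raw variance components, not just heritability.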
NASA Astrophysics Data System (ADS)
Kikuchi, Kenji
2010-06-01
Accelerator-driven nuclear transmutation systems have been pursued as a route to solving high-level radioactive waste management. The concept consists of a superconducting linac, a sub-critical reactor, and the beam window. A reference model is set up for 800 MW thermal power using 1.5 GeV proton beams, with consideration of multiple factors such as core criticality. Materials damage is simulated by high-energy particle transport codes. Recent achievements in irradiated-materials experiments are described, and the differences that arise depending on whether core burn-up is considered are pointed out. The heat balance in a tank-type ADS indicates the temperature conditions of the steam generator, the beam window, and the cladding materials. A lead-bismuth eutectic demonstration has been conducted, and the corrosion depth rate was determined by experiment.
Supersymmetric warped AdS in extended topologically massive supergravity
NASA Astrophysics Data System (ADS)
Deger, N. S.; Kaya, A.; Samtleben, H.; Sezgin, E.
2014-07-01
We determine the most general form of off-shell N=(1,1) supergravity field configurations in three dimensions by requiring that at least one off-shell Killing spinor exists. We then impose the field equations of the topologically massive off-shell supergravity and find a class of solutions whose properties crucially depend on the norm of the auxiliary vector field. These are spacelike-squashed and timelike-stretched AdS3 for the spacelike and timelike norms, respectively. At the transition point where the norm vanishes, the solution is null warped AdS3. This occurs when the coefficient of the Lorentz-Chern-Simons term is related to the AdS radius by μℓ=2. We find that the spacelike-squashed AdS3 can be modded out by a suitable discrete subgroup of the isometry group, yielding an extremal black hole solution which avoids closed timelike curves.
Detecting Pulsars with Interstellar Scintillation in Variance Images
NASA Astrophysics Data System (ADS)
Dai, S.; Johnston, S.; Bell, M. E.; Coles, W. A.; Hobbs, G.; Ekers, R. D.; Lenc, E.
2016-08-01
Pulsars are the only cosmic radio sources known to be sufficiently compact to show diffractive interstellar scintillations. Images of the variance of radio signals in both time and frequency can be used to detect pulsars in large-scale continuum surveys using the next generation of synthesis radio telescopes. This technique allows a search over the full field of view while avoiding the need for expensive pixel-by-pixel high time resolution searches. We investigate the sensitivity of detecting pulsars in variance images. We show that variance images are most sensitive to pulsars whose scintillation time-scales and bandwidths are close to the subintegration time and channel bandwidth. Therefore, in order to maximise the detection of pulsars for a given radio continuum survey, it is essential to retain a high time and frequency resolution, allowing us to make variance images sensitive to pulsars with different scintillation properties. We demonstrate the technique with Murchison Widefield Array data and show that variance images can indeed lead to the detection of pulsars by distinguishing them from other radio sources.
Application of variance components estimation to calibrate geoid error models.
Guo, Dong-Mei; Xu, Hou-Ze
2015-01-01
The method of using Global Positioning System-leveling data to obtain orthometric heights has been well studied. A simple formulation of the weighted least squares problem was presented in an earlier work. This formulation allows one to directly employ errors-in-variables models that completely describe the covariance matrices of the observables. However, the important question of what accuracy level can be achieved has yet to be satisfactorily answered by this traditional formulation. One of the main reasons for this is the incorrectness of the stochastic models used in the adjustment, which motivates improving the stochastic models of the measurement noise. Therefore, determining the stochastic models of the observables in a combined adjustment with heterogeneous height types is the main focus of this paper. Firstly, the well-known method of variance component estimation is employed to calibrate the errors of heterogeneous height data in a combined least squares adjustment of ellipsoidal, orthometric, and gravimetric geoid heights. Specifically, the iterative algorithms of minimum norm quadratic unbiased estimation are used to estimate the variance components for each of the heterogeneous observations. Secondly, two different statistical models are presented to illustrate the theory. The first method directly uses the errors-in-variables as a priori covariance matrices, and the second method analyzes the biases of the variance components and then proposes bias-corrected variance component estimators. Several numerical test results show the capability and effectiveness of the variance component estimation procedure in a combined adjustment for calibrating geoid error models. PMID:26306296
Increased spatial variance accompanies reorganization of two continental shelf ecosystems.
Litzow, Michael A; Urban, J Daniel; Laurel, Benjamin J
2008-09-01
Phase transitions between alternate stable states in marine ecosystems lead to disruptive changes in ecosystem services, especially fisheries productivity. We used trawl survey data spanning phase transitions in the North Pacific (Gulf of Alaska) and the North Atlantic (Scotian Shelf) to test for increases in ecosystem variability that might provide early warning of such transitions. In both time series, elevated spatial variability in a measure of community composition (ratio of cod [Gadus sp.] abundance to prey abundance) accompanied transitions between ecosystem states, and variability was negatively correlated with distance from the ecosystem transition point. In the Gulf of Alaska, where the phase transition was apparently the result of a sudden perturbation (climate regime shift), variance increased one year before the transition in mean state occurred. On the Scotian Shelf, where ecosystem reorganization was the result of persistent overfishing, a significant increase in variance occurred three years before the transition in mean state was detected. However, we could not reject the alternate explanation that increased variance may also have simply been inherent to the final stable state in that ecosystem. Increased variance has been previously observed around transition points in models, but rarely in real ecosystems, and our results demonstrate the possible management value in tracking the variance of key parameters in exploited ecosystems. PMID:18767612
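The early-warning signal described above can be illustrated with simulated survey data (purely synthetic numbers, not the Gulf of Alaska or Scotian Shelf series):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated annual trawl survey: cod/prey ratio at 50 stations over 30 years.
# The ecosystem mean shifts at year 20; spatial variance rises beforehand,
# mimicking the pattern the abstract reports.
years, stations = 30, 50
mean_state = np.where(np.arange(years) < 20, 1.0, 0.3)
spread = np.where(np.arange(years) < 17, 0.1, 0.4)  # variance rises at year 17
ratio = rng.normal(mean_state[:, None], spread[:, None], size=(years, stations))

spatial_variance = ratio.var(axis=1)

# A simple early-warning flag: spatial variance exceeding 3x its baseline.
baseline = spatial_variance[:10].mean()
first_alarm = int(np.argmax(spatial_variance > 3 * baseline))
print(first_alarm)  # the alarm fires before the year-20 mean shift
```

As the abstract cautions, a variance rise can also be intrinsic to the final state, so a flag like this is a prompt for closer inspection rather than proof of an impending transition.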
Analysis of Variance Components for Genetic Markers with Unphased Genotypes
Wang, Tao
2016-01-01
An ANOVA type general multi-allele (GMA) model was proposed in Wang (2014) on analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In the one-locus and two-locus cases, we first derive the least squares estimates (LSE) of model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition on the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions at least for independent alleles. As a result, the GMA model could be more beneficial than GLM for detecting allelic interactions. PMID:27468297
Planar AdS black holes in Lovelock gravity with a nonminimal scalar field
NASA Astrophysics Data System (ADS)
Gaete, Moisés Bravo; Hassaïne, Mokhtar
2013-11-01
In arbitrary dimension D, we consider a self-interacting scalar field nonminimally coupled to a gravity theory given by a particular Lovelock action indexed by an integer k. To be more precise, the coefficients appearing in the Lovelock expansion are fixed by requiring the theory to have a unique AdS vacuum with a fixed value of the cosmological constant. This yields k = 1, 2, ⋯ inequivalent possible gravity theories; the case k = 1 corresponds to the standard Einstein-Hilbert Lagrangian. For each pair (D, k), we derive two classes of AdS black hole solutions with planar event horizon topology for particular values of the nonminimal coupling parameter. The first family of solutions depends on a unique constant and is valid only for k ≥ 2; its GR counterpart k = 1 reduces to the pure AdS metric with a vanishing scalar field. The second family of solutions involves two independent constants and corresponds to a stealth black hole configuration, that is, a nontrivial scalar field together with a black hole metric such that both sides of the Einstein equations (gravity and matter parts) vanish identically. In this case, the standard GR case k = 1 reduces to the Schwarzschild-AdS-Tangherlini black hole metric with a trivial scalar field. We show that the two-parameter stealth solution defined in D dimensions can be promoted to the one-parameter black hole solution in (D + 1) dimensions by fixing one of the two constants in terms of the other and by adding a transversal coordinate. In both cases, the existence of these solutions is strongly tied to the presence of the higher-order curvature terms k ≥ 2 of the Lovelock gravity. We also establish that these solutions emerge from a stealth configuration defined on the pure AdS metric through a Kerr-Schild transformation. Finally, in the last part, we include multiple exact (D - 1)-forms homogeneously distributed and coupled to the scalar field. For a specific coupling, we obtain black hole
Value Added in English Schools
ERIC Educational Resources Information Center
Ray, Andrew; McCormack, Tanya; Evans, Helen
2009-01-01
Value-added indicators are now a central part of school accountability in England, and value-added information is routinely used in school improvement at both the national and the local levels. This article describes the value-added models that are being used in the academic year 2007-8 by schools, parents, school inspectors, and other…
Hartung, Thomas
2009-12-01
Taking the 110th anniversary of the marketing of aspirin as its starting point, the almost scary toxicological profile of aspirin is contrasted with its actual use experience. The author concludes that we are lucky that, in 1899, there was no regulatory toxicology. Adding, for the purpose of this article, a fourth R to the Three Rs, i.e. Realism, three reality-checks are carried out. The first comes to the conclusion that the tools of toxicology are hardly adequate for the challenges ahead. The second concludes that, specifically, the implementation of the EU REACH system is not feasible with these tools, mainly with regard to throughput. The third challenges the belief that classical alternative methods, i.e. replacing animal test-based tools one by one, actually lead to a new toxicology - they appear to change only patches of the patchwork, not to overcome any inherent limitations other than ethical ones. The perspective lies in the Toxicology for the 21st Century initiatives, which aim to create a new approach from scratch, by an evidence-based toxicology and a global "Human Toxicology Programme". PMID:20105011
Technology Transfer Automated Retrieval System (TEKTRAN)
Breeders select superior genotypes despite the environment affecting phenotypic variance. Minimal variance of genotype means facilitates the statistical identification of superior genotypes. The variance components calculated from three datasets describing tuber composition and fried chip color were...
The probabilities of unique events.
Khemlani, Sangeet S; Lotstein, Max; Johnson-Laird, Phil
2012-01-01
Many theorists argue that the probabilities of unique events, even real possibilities such as President Obama's re-election, are meaningless. As a consequence, psychologists have seldom investigated them. We propose a new theory (implemented in a computer program) in which such estimates depend on an intuitive non-numerical system capable only of simple procedures, and a deliberative system that maps intuitions into numbers. The theory predicts that estimates of the probabilities of conjunctions should often tend to split the difference between the probabilities of the two conjuncts. We report two experiments showing that individuals commit such violations of the probability calculus, and corroborating other predictions of the theory, e.g., individuals err in the same way even when they make non-numerical verbal estimates, such as that an event is highly improbable. PMID:23056224
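The probability-calculus violation the theory predicts is easy to state in code (a toy checker; the numeric estimates are invented for illustration):

```python
# The theory predicts conjunction estimates drifting toward the mean of the
# two conjunct estimates, which can violate P(A and B) <= min(P(A), P(B)).

def violates_conjunction_rule(p_a: float, p_b: float, p_ab: float) -> bool:
    """True if the conjunction estimate exceeds either conjunct's estimate."""
    return p_ab > min(p_a, p_b)

# A participant who "splits the difference" between the conjuncts:
p_a, p_b = 0.9, 0.4
p_ab = (p_a + p_b) / 2  # 0.65

print(violates_conjunction_rule(p_a, p_b, p_ab))  # True: 0.65 > 0.4
```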
Saturation of number variance in embedded random-matrix ensembles
NASA Astrophysics Data System (ADS)
Prakash, Ravi; Pandey, Akhilesh
2016-05-01
We study fluctuation properties of embedded random matrix ensembles of noninteracting particles. For ensemble of two noninteracting particle systems, we find that unlike the spectra of classical random matrices, correlation functions are nonstationary. In the locally stationary region of spectra, we study the number variance and the spacing distributions. The spacing distributions follow the Poisson statistics, which is a key behavior of uncorrelated spectra. The number variance varies linearly as in the Poisson case for short correlation lengths but a kind of regularization occurs for large correlation lengths, and the number variance approaches saturation values. These results are known in the study of integrable systems but are being demonstrated for the first time in random matrix theory. We conjecture that the interacting particle cases, which exhibit the characteristics of classical random matrices for short correlation lengths, will also show saturation effects for large correlation lengths.
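The short-correlation-length behavior, where the number variance grows linearly as in the Poisson case, can be checked with a simulated uncorrelated spectrum (a generic illustration, not the embedded-ensemble computation):

```python
import numpy as np

rng = np.random.default_rng(2)

# Uncorrelated (Poisson) spectrum on [0, 1): the count of levels in a window
# holding L levels on average has variance close to L, the linear regime
# the abstract describes before saturation sets in.
N = 200_000
levels = np.sort(rng.uniform(0.0, 1.0, N))

def number_variance(levels: np.ndarray, L: float, windows: int = 2000) -> float:
    """Variance of the level count in randomly placed windows of mean count L."""
    n = len(levels)
    width = L / n  # the unfolded mean spacing is 1/n
    starts = rng.uniform(0.0, 1.0 - width, windows)
    counts = np.searchsorted(levels, starts + width) - np.searchsorted(levels, starts)
    return float(counts.var())

for L in (1.0, 2.0, 4.0):
    print(L, round(number_variance(levels, L), 2))  # each close to L
```

The saturation effect reported in the abstract is a long-range property of the correlated embedded-ensemble spectra and would not appear for this memoryless sequence.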
Monte Carlo variance reduction approaches for non-Boltzmann tallies
Booth, T.E.
1992-12-01
Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed.
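The core idea behind unbiased variance reduction, reweighting scores so the expectation is preserved, can be sketched with plain importance sampling on a tail probability (a generic textbook example, not one of the three non-Boltzmann approaches of the report):

```python
import math
import random

random.seed(0)

# Estimate P(X > 4) for X ~ Exp(1). Analog sampling rarely scores;
# importance sampling from Exp(rate=0.25) visits the tail often and
# reweights each score by the likelihood ratio, keeping the estimate unbiased.
TRUE = math.exp(-4)  # ~0.0183
N = 100_000

def analog() -> float:
    hits = sum(1 for _ in range(N) if random.expovariate(1.0) > 4.0)
    return hits / N

def importance() -> float:
    total = 0.0
    for _ in range(N):
        x = random.expovariate(0.25)  # biased sampling density q(x)
        if x > 4.0:
            total += math.exp(-x) / (0.25 * math.exp(-0.25 * x))  # weight p(x)/q(x)
    return total / N

print(abs(analog() - TRUE) / TRUE)      # noisy
print(abs(importance() - TRUE) / TRUE)  # typically much closer
```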
Impact of Damping Uncertainty on SEA Model Response Variance
NASA Technical Reports Server (NTRS)
Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand
2010-01-01
Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.
Minimum variance lower bound estimation and realization for desired structures.
Alipouri, Yousef; Poshtan, Javad
2014-05-01
The Minimum Variance Lower Bound (MVLB) represents the best achievable controller capability in a variance sense. Estimation and realization of the MVLB for nonlinear systems confront some difficulties. Hence, almost all methods introduced so far estimate the MVLB for a certain structure (e.g., NARMAX) or controller (e.g., PID). In this paper, the MVLB for desired structures (not restricted to a certain type) is studied. The situations in which the model is not available, not accurate, or not invertible are considered. Moreover, in order to realize minimum variance controllers for nonlinear structures, a recursive model-free MVC design is utilized. Finally, a simulation study is used to clarify the effectiveness of the proposed control scheme. PMID:24642244
The mean and variance of phylogenetic diversity under rarefaction.
Nipperess, David A; Matsen, Frederick A
2013-06-01
Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact solution for the mean and variance to that calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating the mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required. PMID:23833701
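The exact-versus-Monte-Carlo comparison the abstract describes can be illustrated for the simpler species-richness case, whose exact rarefaction mean (Hurlbert's formula) has long been known (toy abundances; the paper's PD formulae are more involved):

```python
from math import comb

import numpy as np

rng = np.random.default_rng(3)

# Abundances of each species in the full sample (invented for illustration).
abundance = np.array([50, 20, 10, 5, 2, 1])
N, n = int(abundance.sum()), 20  # rarefy from N = 88 individuals down to n = 20

# Exact expected richness under rarefaction (Hurlbert's formula):
# a species is present in the subsample unless all n draws miss it.
exact = sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in abundance)

# Monte Carlo check: repeated random subsampling without replacement.
pool = np.repeat(np.arange(len(abundance)), abundance)
draws = [len(np.unique(rng.choice(pool, size=n, replace=False)))
         for _ in range(5000)]
print(round(exact, 3), round(float(np.mean(draws)), 3))  # close agreement
```

As in the paper, the exact formula is both cheaper and noise-free, while the Monte Carlo estimate only converges with many random draws.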
ERIC Educational Resources Information Center
Starns, Jeffrey J.; Rotello, Caren M.; Hautus, Michael J.
2014-01-01
We tested the dual process and unequal variance signal detection models by jointly modeling recognition and source confidence ratings. The 2 approaches make unique predictions for the slope of the recognition memory zROC function for items with correct versus incorrect source decisions. The standard bivariate Gaussian version of the unequal…
Supergravity at the boundary of AdS supergravity
NASA Astrophysics Data System (ADS)
Amsel, Aaron J.; Compère, Geoffrey
2009-04-01
We give a general analysis of AdS boundary conditions for spin-3/2 Rarita-Schwinger fields and investigate boundary conditions preserving supersymmetry for a graviton multiplet in AdS4. Linear Rarita-Schwinger fields in AdSd are shown to admit mixed Dirichlet-Neumann boundary conditions when their mass is in the range 0≤|m|<1/2lAdS. We also demonstrate that mixed boundary conditions are allowed for larger masses when the inner product is “renormalized” in accordance with the action. We then use the results obtained for |m|=1/lAdS to explore supersymmetric boundary conditions for N=1 AdS4 supergravity in which the metric and Rarita-Schwinger fields are fluctuating at the boundary. We classify boundary conditions that preserve boundary supersymmetry or superconformal symmetry. Under the AdS/CFT dictionary, Neumann boundary conditions in d=4 supergravity correspond to gauging the superconformal group of the three-dimensional CFT describing M2-branes, while N=1 supersymmetric mixed boundary conditions couple the CFT to N=1 superconformal topologically massive gravity.
The dynamic Allan Variance IV: characterization of atomic clock anomalies.
Galleani, Lorenzo; Tavella, Patrizia
2015-05-01
The number of applications where precise clocks play a key role is steadily increasing, satellite navigation being the main example. Precise clock anomalies are hence critical events, and their characterization is a fundamental problem. When an anomaly occurs, the clock stability changes with time, and this variation can be characterized with the dynamic Allan variance (DAVAR). We obtain the DAVAR for a series of common clock anomalies, namely, a sinusoidal term, a phase jump, a frequency jump, and a sudden change in the clock noise variance. These anomalies are particularly common in space clocks. Our analytic results clarify how the clock stability changes during these anomalies. PMID:25965674
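The DAVAR idea, a stability estimate recomputed on a sliding window, can be sketched for one of the listed anomalies, a frequency jump (a minimal illustration with invented noise levels, not the paper's analytic results):

```python
import numpy as np

rng = np.random.default_rng(4)

# Fractional-frequency series with white noise and a frequency jump at
# sample 500, one of the anomalies whose DAVAR signature the paper derives.
y = rng.normal(0.0, 1e-11, 1000)
y[500:] += 1e-9

def allan_variance(y: np.ndarray, m: int = 1) -> float:
    """Overlapping Allan variance of fractional-frequency data y at averaging factor m."""
    avg = np.convolve(y, np.ones(m) / m, mode="valid")
    d = avg[m:] - avg[:-m]
    return float(0.5 * np.mean(d * d))

# Dynamic Allan variance: the Allan variance on a sliding 200-sample window.
window = 200
davar = np.array([allan_variance(y[k:k + window])
                  for k in range(0, len(y) - window, 50)])
print(davar.argmax())  # windows straddling the jump show the largest instability
```

Localizing where the stability estimate spikes is exactly what makes the DAVAR useful for characterizing clock anomalies in time.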
Sensor/Actuator Selection for the Constrained Variance Control Problem
NASA Technical Reports Server (NTRS)
Delorenzo, M. L.; Skelton, R. E.
1985-01-01
The problem of designing a linear controller for systems subject to inequality variance constraints is considered. A quadratic penalty function approach is used to yield a linear controller. Both the weights in the quadratic penalty function and the locations of sensors and actuators are selected by successive approximations to obtain an optimal design which satisfies the input/output variance constraints. The method is applied to NASA's 64 meter Hoop-Column Space Antenna for satellite communications. In addition to the solution for the control law, the main feature of these results is the systematic determination of actuator design requirements which allow the given input/output performance constraints to be satisfied.
Variance in trace constituents following the final stratospheric warming
NASA Technical Reports Server (NTRS)
Hess, Peter
1990-01-01
Concentration variations with time in trace stratospheric constituents N2O, CF2Cl2, CFCl3, and CH4 were investigated using samples collected aboard balloons flown over southern France during the summer months of 1977-1979. Data are analyzed using a tracer transport model, and the mechanisms behind the modeled tracer variance are examined. An analysis of the N2O profiles for the month of June showed that a large fraction of the variance reported by Ehhalt et al. (1983) is on an interannual time scale.
A multi-variance analysis in the time domain
NASA Technical Reports Server (NTRS)
Walter, Todd
1993-01-01
Recently a new technique for characterizing the noise processes affecting oscillators was introduced. This technique minimizes the difference between the estimates of several different variances and their values as predicted by the standard power law model of noise. The method outlined makes two significant advancements: it uses exclusively time domain variances so that deterministic parameters such as linear frequency drift may be estimated, and it correctly fits the estimates using the chi-square distribution. These changes permit a more accurate fitting at long time intervals where there is the least information. This technique was applied to both simulated and real data with excellent results.
Signal Variance in Gamma Ray Detectors - A Review
Devanathan, Ram; Corrales, Louis R.; Gao, Fei; Weber, William J.
2006-09-06
Signal variance in gamma ray detector materials is reviewed with an emphasis on intrinsic variance. Phenomenological models of electron cascades are examined and the Fano factor (F) is discussed in detail. In semiconductors F is much smaller than unity and charge carrier production is nearly proportional to energy. Based on a fit to a number of semiconductors and insulators, a new relationship between the average energy for electron-hole pair production and band-gap energy is proposed. In scintillators, the resolution is governed mainly by photoelectron statistics and proportionality of light yield with respect to energy.
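The Fano factor F = Var(N)/E[N] can be illustrated with a toy sub-Poissonian cascade (the binomial model and all numbers here are invented for illustration, not a physical detector model):

```python
import numpy as np

rng = np.random.default_rng(5)

# Carrier generation per absorbed gamma: a binomial cascade is a toy model
# giving sub-Poissonian counts, qualitatively like the F << 1 of semiconductors.
E, w = 662_000.0, 3.6   # deposited energy (eV) and assumed pair-creation energy (eV)
trials = 5000
p = 0.5
counts = rng.binomial(int(2 * E / w), p, size=trials)  # mean ~ E / w

fano = float(counts.var() / counts.mean())
print(round(fano, 2))  # ~0.5 for this toy; real semiconductors reach ~0.1
```

For a binomial cascade F = 1 - p exactly, which is why sub-unity Fano factors translate into energy resolution better than Poisson counting statistics would suggest.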
Heterogeneity of variances for carcass traits by percentage Brahman inheritance.
Crews, D H; Franke, D E
1998-07-01
Heterogeneity of carcass trait variances due to level of Brahman inheritance was investigated using records from straightbred and crossbred steers produced from 1970 to 1988 (n = 1,530). Angus, Brahman, Charolais, and Hereford sires were mated to straightbred and crossbred cows to produce straightbred, F1, back-cross, three-breed cross, and two-, three-, and four-breed rotational crossbred steers in four non-overlapping generations. At weaning (mean age = 220 d), steers were randomly assigned within breed group directly to the feedlot for 200 d, or to a backgrounding and stocker phase before feeding. Stocker steers were fed from 70 to 100 d in generations 1 and 2 and from 60 to 120 d in generations 3 and 4. Carcass traits included hot carcass weight, subcutaneous fat thickness and longissimus muscle area at the 12-13th rib interface, carcass weight-adjusted longissimus muscle area, USDA yield grade, estimated total lean yield, marbling score, and Warner-Bratzler shear force. Steers were classified as either high Brahman (50 to 100% Brahman), moderate Brahman (25 to 49% Brahman), or low Brahman (0 to 24% Brahman) inheritance. Two types of animal models were fit with regard to level of Brahman inheritance. One model assumed similar variances between pairs of Brahman inheritance groups, and the second model assumed different variances between pairs of Brahman inheritance groups. Fixed sources of variation in both models included direct and maternal additive and nonadditive breed effects, year of birth, and slaughter age. Variances were estimated using derivative free REML procedures. Likelihood ratio tests were used to compare models. The model accounting for heterogeneous variances had a greater likelihood (P < .001) than the model assuming homogeneous variances for hot carcass weight, longissimus muscle area, weight-adjusted longissimus muscle area, total lean yield, and Warner-Bratzler shear force, indicating improved fit with percentage Brahman inheritance
Some Uniqueness Results for PARAFAC2.
ERIC Educational Resources Information Center
ten Berge, Jos M. F.; Kiers, Henk A. L.
1996-01-01
Some uniqueness properties are presented for the PARAFAC2 model for covariance matrices, focusing on uniqueness in the rank two case of PARAFAC2. PARAFAC2 is shown to be usually unique with four matrices, but not unique with three unless a certain additional assumption is introduced. (SLD)
Asymptotically AdS spacetimes with a timelike Kasner singularity
NASA Astrophysics Data System (ADS)
Ren, Jie
2016-07-01
Exact solutions to Einstein's equations for holographic models are presented and studied. The IR geometry has a timelike cousin of the Kasner singularity, which is the less generic case of the BKL (Belinski-Khalatnikov-Lifshitz) singularity, and the UV is asymptotically AdS. This solution describes a holographic RG flow between them. The solution's appearance is an interpolation between the planar AdS black hole and the AdS soliton. The causality constraint is always satisfied. The entanglement entropy and Wilson loops are discussed. The boundary condition for the current-current correlation function and the Laplacian in the IR is examined. There is no infalling wave in the IR, but instead, there is a normalizable solution in the IR. In a special case, a hyperscaling-violating geometry is obtained after a dimensional reduction.
Worldsheet scattering in AdS3/CFT2
NASA Astrophysics Data System (ADS)
Sundin, Per; Wulff, Linus
2013-07-01
We confront the recently proposed exact S-matrices for AdS 3/ CFT 2 with direct worldsheet calculations. Utilizing the BMN and Near Flat Space (NFS) expansions for strings on AdS 3 × S 3 × S 3 × S 1 and AdS 3 × S 3 × T 4 we compute both tree-level and one-loop scattering amplitudes. Up to some minor issues we find nice agreement in the tree-level sector. At the one-loop level however we find that certain non-zero tree-level processes, which are not visible in the exact solution, contribute, via the optical theorem, and give an apparent mismatch for certain amplitudes. Furthermore we find that a proposed one-loop modification of the dressing phase correctly reproduces the worldsheet calculation while the standard Hernandez-Lopez phase does not. We also compute several massless to massless processes.
Detailed ultraviolet asymptotics for AdS scalar field perturbations
NASA Astrophysics Data System (ADS)
Evnin, Oleg; Jai-akson, Puttarak
2016-04-01
We present a range of methods suitable for accurate evaluation of the leading asymptotics for integrals of products of Jacobi polynomials in limits when the degrees of some or all polynomials inside the integral become large. The structures in question have recently emerged in the context of effective descriptions of small amplitude perturbations in anti-de Sitter (AdS) spacetime. The limit of high degree polynomials corresponds in this situation to effective interactions involving extreme short-wavelength modes, whose dynamics is crucial for the turbulent instabilities that determine the ultimate fate of small AdS perturbations. We explicitly apply the relevant asymptotic techniques to the case of a self-interacting probe scalar field in AdS and extract a detailed form of the leading large degree behavior, including closed form analytic expressions for the numerical coefficients appearing in the asymptotics.
New massive gravity and AdS(4) counterterms.
Jatkar, Dileep P; Sinha, Aninda
2011-04-29
We show that the recently proposed Dirac-Born-Infeld extension of new massive gravity emerges naturally as a counterterm in four-dimensional anti-de Sitter space (AdS(4)). The resulting on-shell Euclidean action is independent of the cutoff at zero temperature. We also find that the same choice of counterterm gives the usual area law for the AdS(4) Schwarzschild black hole entropy in a cutoff-independent manner. The parameter values of the resulting counterterm action correspond to a c=0 theory in the context of the duality between AdS(3) gravity and two-dimensional conformal field theory. We rewrite this theory in terms of the gauge field that is used to recast 3D gravity as a Chern-Simons theory. PMID:21635026
CYP1B1: a unique gene with unique characteristics.
Faiq, Muneeb A; Dada, Rima; Sharma, Reetika; Saluja, Daman; Dada, Tanuj
2014-01-01
CYP1B1, a recently described dioxin-inducible oxidoreductase, is a member of the cytochrome P450 superfamily involved in the metabolism of estradiol, retinol, benzo[a]pyrene, tamoxifen, melatonin, sterols, etc. It plays important roles in numerous physiological processes and is expressed at the mRNA level in many tissues and anatomical compartments. CYP1B1 has been implicated in scores of disorders. Analyses of recent studies suggest that CYP1B1 can serve as a universal/ideal cancer marker and a candidate gene for predictive diagnosis. There is a plethora of literature available about certain aspects of CYP1B1 that have not been interpreted, discussed and philosophized upon. The present analysis examines CYP1B1 as a peculiar gene with certain distinctive characteristics: the uniqueness of its chromosomal location, gene structure and organization; involvement in developmentally important disorders; tissue-specific not only expression but also splicing; potential as a universal cancer marker due to its involvement in key aspects of cellular metabolism; use in diagnosis and predictive diagnosis of various diseases; and the importance and function of CYP1B1 mRNA in addition to the regular translation. Also, CYP1B1 is very difficult to express in heterologous expression systems, thereby halting its functional studies. Here we review and analyze these exceptional and startling characteristics of CYP1B1, with inputs from our own experiences, in order to get a better insight into its molecular biology in health and disease. This may help to further understand the etiopathomechanistic aspects of CYP1B1-mediated diseases, paving the way for better research strategies and improved clinical management. PMID:25658124
Phases of global AdS black holes
NASA Astrophysics Data System (ADS)
Basu, Pallab; Krishnan, Chethan; Subramanian, P. N. Bala
2016-06-01
We study the phases of gravity coupled to a charged scalar and gauge field in an asymptotically Anti-de Sitter spacetime ( AdS 4) in the grand canonical ensemble. For the conformally coupled scalar, an intricate phase diagram is charted out between the four relevant solutions: global AdS, boson star, Reissner-Nordstrom black hole and the hairy black hole. The nature of the phase diagram undergoes qualitative changes as the charge of the scalar is changed, which we discuss. We also discuss the new features that arise in the extremal limit.
Respiratory infections unique to Asia.
Tsang, Kenneth W; File, Thomas M
2008-11-01
Asia is a highly heterogeneous region with vastly different cultures, social constitutions and populations affected by a wide spectrum of respiratory diseases caused by tropical pathogens. Asian patients with community-acquired pneumonia differ from their Western counterparts in microbiological aetiology, in particular the prominence of Gram-negative organisms, Mycobacterium tuberculosis, Burkholderia pseudomallei and Staphylococcus aureus. In addition, the differences in socioeconomic and health-care infrastructures limit the usefulness of Western management guidelines for pneumonia in Asia. The importance of emerging infectious diseases such as severe acute respiratory syndrome and avian influenza infection remain as close concerns for practising respirologists in Asia. Specific infections such as melioidosis, dengue haemorrhagic fever, scrub typhus, leptospirosis, salmonellosis, penicilliosis marneffei, malaria, amoebiasis, paragonimiasis, strongyloidiasis, gnathostomiasis, trichinellosis, schistosomiasis and echinococcosis occur commonly in Asia and manifest with a prominent respiratory component. Pulmonary eosinophilia, endemic in parts of Asia, could occur with a wide range of tropical infections. Tropical eosinophilia is believed to be a hypersensitivity reaction to degenerating microfilariae trapped in the lungs. This article attempts to address the key respiratory issues in these respiratory infections unique to Asia and highlight the important diagnostic and management issues faced by practising respirologists. PMID:18945321
Genetic Variance in the SES-IQ Correlation.
ERIC Educational Resources Information Center
Eckland, Bruce K.
1979-01-01
Discusses questions dealing with genetic aspects of the correlation between IQ and socioeconomic status (SES). Questions include: How does assortative mating affect the genetic variance of IQ? Is the relationship between an individual's IQ and adult SES a causal one? And how can IQ research improve schools and schooling? (Author/DB)
Comparison of Turbulent Thermal Diffusivity and Scalar Variance Models
NASA Technical Reports Server (NTRS)
Yoder, Dennis A.
2016-01-01
In this study, several variable turbulent Prandtl number formulations are examined for boundary layers, pipe flow, and axisymmetric jets. The model formulations include simple algebraic relations between the thermal diffusivity and turbulent viscosity as well as more complex models that solve transport equations for the thermal variance and its dissipation rate. Results are compared with available data for wall heat transfer and profile measurements of mean temperature, the root-mean-square (RMS) fluctuating temperature, turbulent heat flux and turbulent Prandtl number. For wall-bounded problems, the algebraic models are found to best predict the rise in turbulent Prandtl number near the wall as well as the log-layer temperature profile, while the thermal variance models provide a good representation of the RMS temperature fluctuations. In jet flows, the algebraic models provide no benefit over a constant turbulent Prandtl number approach. Application of the thermal variance models finds that some significantly overpredict the temperature variance in the plume and most underpredict the thermal growth rate of the jet. The models yield very similar fluctuating temperature intensities in jets from straight pipes and smooth contraction nozzles, in contrast to data that indicate the latter should have noticeably higher values. For the particular low subsonic heated jet cases examined, changes in the turbulent Prandtl number had no effect on the centerline velocity decay.
Module organization and variance in protein-protein interaction networks
Lin, Chun-Yu; Lee, Tsai-Ling; Chiu, Yi-Yuan; Lin, Yi-Wei; Lo, Yu-Shu; Lin, Chih-Ta; Yang, Jinn-Moon
2015-01-01
A module is a group of closely related proteins that act in concert to perform specific biological functions through protein–protein interactions (PPIs) that occur in time and space. However, the underlying module organization and variance remain unclear. In this study, we collected module templates to infer respective module families, including 58,041 homologous modules in 1,678 species, and PPI families using searches of complete genomic databases. We then derived PPI evolution scores and interface evolution scores to describe the module elements, including core and ring components. Functions of core components were highly correlated with those of essential genes. In comparison with ring components, core proteins/PPIs were conserved across multiple species. Subsequently, protein/module variance of PPI networks confirmed that core components form dynamic network hubs and play key roles in various biological functions. Based on the analyses of gene essentiality, module variance, and gene co-expression, we summarize the observations of module organization and variance as follows: 1) a module consists of core and ring components; 2) core components perform major biological functions and, in some cases, collaborate with ring components to execute certain functions; 3) core components are more conserved and essential during organizational changes in different biological states or conditions. PMID:25797237
Explaining Common Variance Shared by Early Numeracy and Literacy
ERIC Educational Resources Information Center
Davidse, N. J.; De Jong, M. T.; Bus, A. G.
2014-01-01
How can it be explained that early literacy and numeracy share variance? We specifically tested whether the correlation between four early literacy skills (rhyming, letter knowledge, emergent writing, and orthographic knowledge) and simple sums (non-symbolic and story condition) reduced after taking into account preschool attention control,…
Intuitive Analysis of Variance-- A Formative Assessment Approach
ERIC Educational Resources Information Center
Trumpower, David
2013-01-01
This article describes an assessment activity that can not only show students how much they intuitively understand about statistics but also alert them to common misunderstandings. How the activity can be used formatively to help improve students' conceptual understanding of analysis of variance is discussed. (Contains 1 figure and 1 table.)
Automated variance reduction for Monte Carlo shielding analyses with MCNP
NASA Astrophysics Data System (ADS)
Radulescu, Georgeta
Variance reduction techniques are employed in Monte Carlo analyses to increase the number of particles in the region of phase space of interest and thereby lower the variance of statistical estimation. Variance reduction parameters are required to perform Monte Carlo calculations. It is well known that adjoint solutions, even approximate ones, are excellent biasing functions that can significantly increase the efficiency of a Monte Carlo calculation. In this study, an automated method of generating Monte Carlo variance reduction parameters, and of implementing the source energy biasing and the weight window technique in MCNP shielding calculations, has been developed. The method is based on the approach used in the SAS4 module of the SCALE code system, which derives the biasing parameters from an adjoint one-dimensional Discrete Ordinates calculation. Unlike SAS4, which determines the radial and axial dose rates of a spent fuel cask in separate calculations, the present method provides energy and spatial biasing parameters for the entire system that optimize the simulation of particle transport towards all external surfaces of a spent fuel cask. The energy and spatial biasing parameters are synthesized from the adjoint fluxes of three one-dimensional Discrete Ordinates adjoint calculations. Additionally, the present method accommodates multiple source regions, such as the photon sources in light-water reactor spent nuclear fuel assemblies, in one calculation. With this automated method, detailed and accurate dose rate maps for photons, neutrons, and secondary photons outside spent fuel casks or other containers can be efficiently determined with minimal effort.
Unbiased Estimates of Variance Components with Bootstrap Procedures
ERIC Educational Resources Information Center
Brennan, Robert L.
2007-01-01
This article provides general procedures for obtaining unbiased estimates of variance components for any random-model balanced design under any bootstrap sampling plan, with the focus on designs of the type typically used in generalizability theory. The results reported here are particularly helpful when the bootstrap is used to estimate standard…
Caution on the Use of Variance Ratios: A Comment.
ERIC Educational Resources Information Center
Shaffer, Juliet Popper
1992-01-01
Several meta-analytic studies of group variability use variance ratios as measures of effect size. Problems with this approach are discussed, including limitations of using means and medians of ratios. Mean logarithms and the geometric mean are not adversely affected by the arbitrary choice of numerator. (SLD)
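The numerator-dependence the comment warns about is easy to demonstrate: the arithmetic mean of a set of variance ratios changes incoherently when each ratio is inverted, while the geometric mean simply inverts. A minimal sketch (the ratio values are hypothetical):

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # Mean of the logarithms, exponentiated back.
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical variance ratios (group A variance / group B variance) from three studies.
ratios = [4.0, 2.0, 0.5]
inverted = [1.0 / r for r in ratios]  # same studies, numerator choice swapped

# Arithmetic means are not reciprocals of each other, so the summary
# depends on the arbitrary choice of numerator.
am, am_inv = arithmetic_mean(ratios), arithmetic_mean(inverted)

# Geometric means invert exactly: GM(1/r) = 1 / GM(r).
gm, gm_inv = geometric_mean(ratios), geometric_mean(inverted)
```

Here `am * am_inv` is about 1.99 rather than 1, whereas `gm * gm_inv` equals 1 to machine precision, which is the sense in which mean logarithms and the geometric mean escape the numerator problem.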
Variance-based uncertainty relations for incompatible observables
NASA Astrophysics Data System (ADS)
Chen, Bin; Cao, Ning-Ping; Fei, Shao-Ming; Long, Gui-Lu
2016-06-01
We formulate uncertainty relations for arbitrary finite number of incompatible observables. Based on the sum of variances of the observables, both Heisenberg-type and Schrödinger-type uncertainty relations are provided. These new lower bounds are stronger in most of the cases than the ones derived from some existing inequalities. Detailed examples are presented.
40 CFR 190.11 - Variances for unusual operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Variances for unusual operations. 190.11 Section 190.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS...
Strength of Relationship in Multivariate Analysis of Variance.
ERIC Educational Resources Information Center
Smith, I. Leon
Methods for the calculation of eta coefficient, or correlation ratio, squared have recently been presented for examining the strength of relationship in univariate analysis of variance. This paper extends them to the multivariate case in which the effects of independent variables may be examined in relation to two or more dependent variables, and…
29 CFR 1904.38 - Variances from the recordkeeping rule.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 5 2011-07-01 2011-07-01 false Variances from the recordkeeping rule. 1904.38 Section 1904.38 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and...
29 CFR 1904.38 - Variances from the recordkeeping rule.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 29 Labor 5 2013-07-01 2013-07-01 false Variances from the recordkeeping rule. 1904.38 Section 1904.38 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and...
29 CFR 1904.38 - Variances from the recordkeeping rule.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 29 Labor 5 2014-07-01 2014-07-01 false Variances from the recordkeeping rule. 1904.38 Section 1904.38 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and...
29 CFR 1904.38 - Variances from the recordkeeping rule.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 29 Labor 5 2012-07-01 2012-07-01 false Variances from the recordkeeping rule. 1904.38 Section 1904.38 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and...
The Variance of Intraclass Correlations in Three and Four Level
ERIC Educational Resources Information Center
Hedges, Larry V.; Hedberg, Eric C.; Kuyper, Arend M.
2012-01-01
Intraclass correlations are used to summarize the variance decomposition in populations with multilevel hierarchical structure. There has recently been considerable interest in estimating intraclass correlations from surveys or designed experiments to provide design parameters for planning future large-scale randomized experiments. The large…
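The intraclass correlations in question are simple functions of the variance components. A sketch for a three-level design (students within classrooms within schools); the component values are made up for illustration, not taken from the article:

```python
# Hypothetical variance components for a three-level design
# (students within classrooms within schools).
var_school, var_classroom, var_student = 0.10, 0.15, 0.75
total = var_school + var_classroom + var_student

# Intraclass correlation at each level: that level's share of total variance.
icc_school = var_school / total        # expected correlation between two students
                                       # who share a school but not a classroom
icc_classroom = var_classroom / total  # additional share for sharing a classroom
```

Planning a cluster-randomized experiment requires plausible values of exactly these quantities, which is why estimating them (and their sampling variance) from existing surveys matters.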
Analysis of Variance: What Is Your Statistical Software Actually Doing?
ERIC Educational Resources Information Center
Li, Jian; Lomax, Richard G.
2011-01-01
Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…
44 CFR 60.6 - Variances and exceptions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... environmental document will be prepared, will be made in accordance with the procedures set out in 44 CFR part... HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program CRITERIA FOR LAND MANAGEMENT AND USE Requirements for Flood Plain Management Regulations § 60.6 Variances and exceptions....
76 FR 78698 - Proposed Revocation of Permanent Variances
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-19
... several conditions that served as an alternative means of compliance to the falling-object-protection and... specified by these variances. Therefore, OSHA believes the alternative means of compliance granted by the.... 651, 655) in 1971 (see 36 FR 7340). Paragraphs (a)(4) and (a)(5) of Sec. 1926.451 required...
Numbers Of Degrees Of Freedom Of Allan-Variance Estimators
NASA Technical Reports Server (NTRS)
Greenhall, Charles A.
1992-01-01
Report discusses formulas for estimation of Allan variances. Presents algorithms for closed-form approximations of numbers of degrees of freedom characterizing results obtained when various estimators applied to five power-law components of classical mathematical model of clock noise.
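For readers unfamiliar with the quantity the report's formulas apply to, a minimal non-overlapping Allan-variance estimator looks like the following (a sketch only; the report's closed-form degrees-of-freedom approximations are not reproduced here):

```python
def allan_variance(y, m=1):
    """Non-overlapping Allan variance of fractional-frequency data y
    at averaging factor m (averaging time tau = m * tau0)."""
    # Average the data over non-overlapping blocks of length m.
    k = len(y) // m
    avg = [sum(y[i * m:(i + 1) * m]) / m for i in range(k)]
    # Half the mean squared difference of adjacent block averages.
    return sum((avg[i + 1] - avg[i]) ** 2 for i in range(k - 1)) / (2 * (k - 1))
```

Each such estimate is itself a random quantity; the degrees-of-freedom formulas the report discusses quantify how reliable the estimate is for each of the five power-law noise types in the clock model.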
Partitioning the Variance in Scores on Classroom Environment Instruments
ERIC Educational Resources Information Center
Dorman, Jeffrey P.
2009-01-01
This paper reports the partitioning of variance in scale scores from the use of three classroom environment instruments. Data sets from the administration of the What Is Happening In this Class (WIHIC) to 4,146 students, the Questionnaire on Teacher Interaction (QTI) to 2,167 students and the Catholic School Classroom Environment Questionnaire…
40 CFR 142.43 - Disposition of a variance request.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) Compliance (including increments of progress) by the public water system with each contaminant level... control measures as the Administrator may require for each contaminant covered by the variance. (d) The... the Administrator. (f) The proposed schedule for implementation of additional interim control...
21 CFR 821.2 - Exemptions and variances.
Code of Federal Regulations, 2014 CFR
2014-04-01
... and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE TRACKING REQUIREMENTS General Provisions § 821.2 Exemptions and variances. (a) A...(s) of the device; (2) The reasons that compliance with the tracking requirements of this part...
21 CFR 821.2 - Exemptions and variances.
Code of Federal Regulations, 2013 CFR
2013-04-01
... and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE TRACKING REQUIREMENTS General Provisions § 821.2 Exemptions and variances. (a) A...(s) of the device; (2) The reasons that compliance with the tracking requirements of this part...
21 CFR 821.2 - Exemptions and variances.
Code of Federal Regulations, 2010 CFR
2010-04-01
... and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE TRACKING REQUIREMENTS General Provisions § 821.2 Exemptions and variances. (a) A...(s) of the device; (2) The reasons that compliance with the tracking requirements of this part...
21 CFR 821.2 - Exemptions and variances.
Code of Federal Regulations, 2012 CFR
2012-04-01
... and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE TRACKING REQUIREMENTS General Provisions § 821.2 Exemptions and variances. (a) A...(s) of the device; (2) The reasons that compliance with the tracking requirements of this part...
40 CFR 142.42 - Consideration of a variance request.
Code of Federal Regulations, 2011 CFR
2011-07-01
... water source, the Administrator shall consider such factors as the following: (1) The availability and... economic considerations such as implementing treatment, improving the quality of the source water or using an alternate source. (c) A variance may be issued to a public water system on the condition that...
How does variance in fertility change over the demographic transition?
Hruschka, Daniel J; Burger, Oskar
2016-04-19
Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45-49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. PMID:27022082
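The logic of the Poisson upper bound can be illustrated with a simple dispersion calculation. If a Poisson process with the observed mean already accounts for variance equal to that mean, then at most the excess variance can reflect stable individual differences. This is a simplified sketch of that idea, not the paper's estimator:

```python
def individual_difference_bound(counts):
    """Upper bound on the share of variance in completed-fertility counts
    attributable to individual differences, treating Poisson noise
    (variance == mean) as the floor. Illustrative only."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    if var == 0:
        return 0.0
    return max(0.0, 1.0 - mean / var)
```

A sample whose variance is close to its mean leaves essentially no room for individual-level predictors, which is the sense in which the bound is often severe.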
Dominance, Information, and Hierarchical Scaling of Variance Space.
ERIC Educational Resources Information Center
Ceurvorst, Robert W.; Krus, David J.
1979-01-01
A method for computation of dominance relations and for construction of their corresponding hierarchical structures is presented. The link between dominance and variance allows integration of the mathematical theory of information with least squares statistical procedures without recourse to logarithmic transformations of the data. (Author/CTM)
Temporal Relation Extraction in Outcome Variances of Clinical Pathways.
Yamashita, Takanori; Wakata, Yoshifumi; Hamai, Satoshi; Nakashima, Yasuharu; Iwamoto, Yukihide; Franagan, Brendan; Nakashima, Naoki; Hirokawa, Sachio
2015-01-01
Recently, clinical pathways have progressed through digitalization and the analysis of activity. There are many previous studies of clinical pathways, but few feed directly into medical practice. We constructed a mind-map system that applies a spanning tree. This system can visualize temporal relations in outcome variances and indicate outcomes that affect long-term hospitalization. PMID:26262376
[ECoG classification based on wavelet variance].
Yan, Shiyu; Liu, Chong; Wang, Hong; Zhao, Haibin
2013-06-01
For a typical electrocorticogram (ECoG)-based brain-computer interface (BCI) system in which the subject's task is to imagine movements of either the left small finger or the tongue, we proposed a feature extraction algorithm using wavelet variance. First, the definition and significance of wavelet variance were introduced and adopted as the feature, based on a discussion of the wavelet transform. Six channels with the most distinctive features were selected from 64 channels for analysis. The EEG data were then decomposed using the db4 wavelet. The wavelet coefficient variances containing the Mu rhythm and Beta rhythm were taken as features based on the ERD/ERS phenomenon. The features were classified linearly with a cross-validation algorithm. The results of off-line analysis showed that high classification accuracies of 90.24% and 93.77% were achieved for the training and test data sets; the wavelet variance has the advantages of simplicity and effectiveness and is suitable for feature extraction in BCI research. PMID:23865300
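The feature itself is straightforward to sketch. The fragment below computes per-level detail-coefficient variances, using a Haar wavelet for brevity where the paper used db4; the pure-Python decomposition and the level count are illustrative assumptions, not the paper's toolchain:

```python
def haar_step(signal):
    """One level of Haar decomposition: approximation and detail coefficients."""
    s = 2 ** -0.5
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal) - 1, 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def wavelet_variance_features(signal, levels=3):
    """Per-channel feature vector: variance of the detail coefficients at
    each decomposition level (Haar here; the paper used db4)."""
    feats, approx = [], list(signal)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        feats.append(variance(detail))
    return feats
```

In the paper's setting, the levels kept would be those whose coefficients cover the Mu and Beta bands, and the resulting per-channel vectors feed a linear classifier.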
Challenges and opportunities in variance component estimation for animal breeding
Technology Transfer Automated Retrieval System (TEKTRAN)
There have been many advances in variance component estimation (VCE), both in theory and in software, since Dr. Henderson introduced Henderson’s Methods 1, 2, and 3 in 1953. However, many challenges in modern animal breeding are not addressed adequately by current algorithms and software. Examples i...
Variance in Math Achievement Attributable to Visual Cognitive Constructs
ERIC Educational Resources Information Center
Oehlert, Jeremy J.
2012-01-01
Previous research has reported positive correlations between math achievement and the cognitive constructs of spatial visualization, working memory, and general intelligence; however, no single study has assessed variance in math achievement attributable to all three constructs, examined in combination. The current study fills this gap in the…
Estimation of variance components including competitive effects of Large White growing gilts.
Arango, J; Misztal, I; Tsuruta, S; Culbertson, M; Herring, W
2005-06-01
Records of on-test ADG of Large White gilts were analyzed to estimate variance components of direct and associative genetic effects. Models included the effects of contemporary group (farm-barn-batch), birth litter, pen group, and direct and associative additive genetic effects. The area of each pen was 14 m^2. The additive genetic variance was a function of the number of competitors in a group, the additive relationships between the animal performing the record and its pen mates, and the additive relationships between pen mates. To partially account for differences in the number of pen mates, a covariable (qi = 1, 1/n, or 1/n^(1/2)) was added to the associative genetic effect. There were 4,946 records from 2,409 litters and 362 pen groups. Pen group size ranged from 12 to 16 gilts. Analyses by REML converged very slowly. A grid search showed that the likelihood function was almost flat when the additive genetic associative effect was fitted. Estimates of direct and associative heritability were 0.15 and 0.03, respectively. Within the BLUPF90 family of programs, the mixed-model equations can be set up directly. For variance component estimation, simple programs (REMLF90 and GIBBSF90) worked without modifications, but more optimized programs did not. Estimates obtained using the three values of qi were similar. With the data structure available for this study and under an environment with relatively low competition among animals, accurate estimation of associative genetic effects was not possible. Estimation of competitive effects with large pen size is difficult. The magnitude of competition effects may be larger in commercial populations, where housing is denser and food is limited. PMID:15890801
NASA Astrophysics Data System (ADS)
Turco, M.; Milelli, M.
2009-09-01
skill scores of two competitive forecasts. It is important to underline that the conclusions refer to the analysis of the Piemonte operational alert system, so they cannot be directly taken as universally true. But we think that some of the main lessons that can be derived from this study could be useful for the meteorological community. In detail, the main conclusions are the following: - despite the overall improvement at the global scale and the fact that the resolution of the limited-area models has increased considerably over recent years, the QPF produced by the meteorological models involved in this study has not improved enough to allow its direct use; that is, the subjective HQPF continues to offer the best performance; - in the forecast process, the step where humans add the largest value with respect to mathematical models is communication. In fact, the human characterisation and communication of forecast uncertainty to end users cannot be replaced by any computer code; - finally, although there is no novelty in this study, we would like to show that the correct application of appropriate statistical techniques permits a better definition and quantification of the errors and, most importantly, allows correct (unbiased) communication between forecasters and decision makers.
Variance in the reproductive success of dominant male mountain gorillas.
Robbins, Andrew M; Gray, Maryke; Uwingeli, Prosper; Mburanumwe, Innocent; Kagoda, Edwin; Robbins, Martha M
2014-10-01
Using 30 years of demographic data from 15 groups, this study estimates how harem size, female fertility, and offspring survival may contribute to variance in the siring rates of dominant male mountain gorillas throughout the Virunga Volcano Region. As predicted for polygynous species, differences in harem size were the greatest source of variance in the siring rate, whereas differences in female fertility and offspring survival were relatively minor. Harem size was positively correlated with offspring survival, even after removing all known and suspected cases of infanticide, so the correlation does not seem to reflect differences in the ability of males to protect their offspring. Harem size was not significantly correlated with female fertility, which is consistent with the hypothesis that mountain gorillas have minimal feeding competition. Harem size, offspring survival, and siring rates were not significantly correlated with the proportion of dominant tenures that occurred in multimale groups versus one-male groups, even though infanticide is less likely when those tenures end in multimale groups than in one-male groups. In contrast with the relatively small contribution of offspring survival to variance in the siring rates of this study, offspring survival is a major source of variance in the male reproductive success of western gorillas, which have greater predation risks and significantly higher rates of infanticide. If differences in offspring protection are less important among male mountain gorillas than western gorillas, then the relative importance of other factors may be greater for mountain gorillas. Thus, our study illustrates how variance in male reproductive success and its components can differ between closely related species. PMID:24818867
Gravity Wave Variances and Propagation Derived from AIRS Radiances
NASA Technical Reports Server (NTRS)
Gong, Jie; Wu, Dong L.; Eckermann, S. D.
2012-01-01
As the first gravity wave (GW) climatology study using nadir-viewing infrared sounders, 50 Atmospheric Infrared Sounder (AIRS) radiance channels are selected to estimate GW variances at pressure levels between 2-100 hPa. The GW variance for each scan in the cross-track direction is derived from radiance perturbations in the scan, independently of adjacent scans along the orbit. Since the scanning swaths are perpendicular to the satellite orbits, which are inclined meridionally at most latitudes, the zonal component of GW propagation can be inferred by differencing the variances derived between the westernmost and easternmost viewing angles. Consistent with previous GW studies using various satellite instruments, monthly mean AIRS variance shows large enhancements over meridionally oriented mountain ranges as well as some islands at winter-hemisphere high latitudes. Enhanced wave activities are also found above tropical deep convective regions. GWs prefer to propagate westward above mountain ranges, and eastward above deep convection. AIRS's 90 fields of view (FOVs), ranging from +48 deg to -48 deg off nadir, can detect large-amplitude GWs with a phase velocity propagating preferentially at steep angles (e.g., those from orographic and convective sources). The annual cycle dominates the GW variances and the preferred propagation directions for all latitudes. Indication of a weak two-year variation in the tropics is found, which is presumably related to the Quasi-biennial oscillation (QBO). AIRS geometry makes its out-tracks capable of detecting GWs with vertical wavelengths substantially shorter than the thickness of instrument weighting functions. The novel discovery of AIRS's capability of observing shallow inertia GWs will expand the potential of satellite GW remote sensing and provide further constraints on the GW drag parameterization schemes in general circulation models (GCMs).
White matter morphometric changes uniquely predict children's reading acquisition.
Myers, Chelsea A; Vandermosten, Maaike; Farris, Emily A; Hancock, Roeland; Gimenez, Paul; Black, Jessica M; Casto, Brandi; Drahos, Miroslav; Tumber, Mandeep; Hendren, Robert L; Hulme, Charles; Hoeft, Fumiko
2014-10-01
This study examined whether variations in brain development between kindergarten and Grade 3 predicted individual differences in reading ability at Grade 3. Structural MRI measurements indicated that increases in the volume of two left temporo-parietal white matter clusters are unique predictors of reading outcomes above and beyond family history, socioeconomic status, and cognitive and preliteracy measures at baseline. Using diffusion MRI, we identified the left arcuate fasciculus and superior corona radiata as key fibers within the two clusters. Bias-free regression analyses using regions of interest from prior literature revealed that volume changes in temporo-parietal white matter, together with preliteracy measures, predicted 56% of the variance in reading outcomes. Our findings demonstrate the important contribution of developmental differences in areas of left dorsal white matter, often implicated in phonological processing, as a sensitive early biomarker for later reading abilities, and by extension, reading difficulties. PMID:25212581
NASA Technical Reports Server (NTRS)
Stothers, R. B.
1984-01-01
The possible cause of the densest and most persistent dry fog on record, which was observed in Europe and the Middle East during AD 536 and 537, is discussed. The fog's long duration toward the south and the high sulfuric acid signal detected in Greenland in ice cores dated around AD 540 support the theory that the fog was due to the explosion of the Rabaul volcano, the occurrence of which has been dated at about AD 540 by the radiocarbon method.
Are restrictive guidelines for added sugars science based?
Erickson, Jennifer; Slavin, Joanne
2015-01-01
Added sugar regulations and recommendations have been proposed by policy makers around the world. With no universal definition, limited access to added sugar values in food products and no analytical difference from intrinsic sugars, added sugar recommendations present a unique challenge. Average added sugar intake by American adults is approximately 13% of total energy intake, and recommendations have been made as low as 5% of total energy intake. In addition to public health recommendations, the Food and Drug Administration has proposed the inclusion of added sugar data on the Nutrition and Supplemental Facts Panel. The adoption of such regulations would have implications for both consumers and the food industry. There are certainly advantages to adding added sugar data to the Nutrition Facts Panel; however, consumer research does not consistently show that the addition of this information improves consumer knowledge. With excess calorie consumption resulting in weight gain and increased risk of obesity and obesity-related comorbidities, added sugar consumption should be minimized. However, there is currently no evidence stating that added sugar is more harmful than excess calories from any other food source. The addition of restrictive added sugar recommendations may not be the most effective intervention in the treatment and prevention of obesity and other health concerns. PMID:26652250
AdS Branes from Partial Breaking of Superconformal Symmetries
Ivanov, E.A.
2005-10-01
It is shown how the static-gauge world-volume superfield actions of diverse superbranes on AdS(d+1) superbackgrounds can be systematically derived from nonlinear realizations of the appropriate AdS supersymmetries. The latter are treated as superconformal symmetries of flat Minkowski superspaces of bosonic dimension d. Examples include the N = 1 AdS4 supermembrane, which is associated with the 1/2 partial breaking of the OSp(1|4) supersymmetry down to the N = 1, d = 3 Poincare supersymmetry, and the T-duality-related L3-brane on AdS5 and scalar 3-brane on AdS5 x S1, which are associated with two different patterns of 1/2 breaking of the SU(2,2|1) supersymmetry. Another (closely related) topic is the AdS/CFT equivalence transformation. It maps the world-volume actions of the codimension-one AdS(d+1) (super)branes onto the actions of the appropriate Minkowski (super)conformal field theories in dimension d.
The Effect of Summer on Value-Added Assessments of Teacher and School Performance
ERIC Educational Resources Information Center
Palardy, Gregory J.; Peng, Luyao
2015-01-01
This study examines the effects of including the summer period on value-added assessments (VAA) of teacher and school performance at the early grades. The results indicate that 40-62% of the variance in VAA estimates originates from the summer period, depending on the outcome (i.e., reading or math achievement gains). Furthermore, when summer is…
AdS5 backgrounds with 24 supersymmetries
NASA Astrophysics Data System (ADS)
Beck, S.; Gutowski, J.; Papadopoulos, G.
2016-06-01
We prove a non-existence theorem for smooth AdS5 solutions with connected, compact without boundary internal space that preserve strictly 24 supersymmetries. In particular, we show that D = 11 supergravity does not admit such solutions, and that all such solutions of IIB supergravity are locally isometric to the AdS5 × S5 maximally supersymmetric background. Furthermore, we prove that (massive) IIA supergravity also does not admit such solutions, provided that the homogeneity conjecture for massive IIA supergravity is valid. In the context of AdS/CFT these results imply that if gravitational duals for strictly N = 3 superconformal theories in four dimensions exist, they are either singular or their internal spaces are not compact.
Entanglement temperature and perturbed AdS3 geometry
NASA Astrophysics Data System (ADS)
Levine, G. C.; Caravan, B.
2016-06-01
Generalizing the first law of thermodynamics, the increase in entropy density δS(x) of a conformal field theory (CFT) is proportional to the increase in energy density, δE(x), of a subsystem divided by a spatially dependent entanglement temperature, T_E(x), a fixed parameter determined by the geometry of the subsystem, crossing over to thermodynamic temperature at high temperatures. In this paper we derive a generalization of the thermodynamic Clausius relation, showing that deformations of the CFT by marginal operators are associated with spatial temperature variations, δT_E(x), and spatial energy correlations play the role of specific heat. Using AdS/CFT duality we develop a relationship between a perturbation in the local entanglement temperature of the CFT and the perturbation of the bulk AdS metric. In two dimensions, we demonstrate a method through which direct diagonalizations of the boundary quantum theory may be used to construct geometric perturbations of AdS3.
Teacher Effects, Value-Added Models, and Accountability
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2014-01-01
Background: In the last decade, the effects of teachers on student performance (typically manifested as state-wide standardized tests) have been re-examined using statistical models that are known as value-added models. These statistical models aim to compute the unique contribution of the teachers in promoting student achievement gains from grade…
The Misattribution of Summers in Teacher Value-Added
ERIC Educational Resources Information Center
Atteberry, Allison
2012-01-01
This paper investigates the extent to which spring-to-spring testing timelines bias teacher value-added as a result of conflating summer and school-year learning. Using a unique dataset that contains both fall and spring standardized test scores, the author examines the patterns in school-year versus summer learning. She estimates value-added…
The Demographic Transition Influences Variance in Fitness and Selection on Height and BMI in Rural Gambia
Courtiol, Alexandre; Rickard, Ian J.; Lummaa, Virpi; Prentice, Andrew M.; Fulford, Anthony J.C.; Stearns, Stephen C.
2013-01-01
Recent human history is marked by demographic transitions characterized by declines in mortality and fertility [1]. By influencing the variance in those fitness components, demographic transitions can affect selection on other traits [2]. Parallel to changes in selection triggered by demography per se, relationships between fitness and anthropometric traits are also expected to change due to modification of the environment. Here we explore for the first time these two main evolutionary consequences of demographic transitions using a unique data set containing survival, fertility, and anthropometric data for thousands of women in rural Gambia from 1956–2010 [3]. We show how the demographic transition influenced directional selection on height and body mass index (BMI). We observed a change in selection for both traits mediated by variation in fertility: selection initially favored short females with high BMI values but shifted across the demographic transition to favor tall females with low BMI values. We demonstrate that these differences resulted both from changes in fitness variance that shape the strength of selection and from shifts in selective pressures triggered by environmental changes. These results suggest that demographic and environmental trends encountered by current human populations worldwide are likely to modify, but not stop, natural selection in humans. PMID:23623548
The Milieu Intérieur study - an integrative approach for study of human immunological variance.
Thomas, Stéphanie; Rouilly, Vincent; Patin, Etienne; Alanio, Cécile; Dubois, Annick; Delval, Cécile; Marquier, Louis-Guillaume; Fauchoux, Nicolas; Sayegrih, Seloua; Vray, Muriel; Duffy, Darragh; Quintana-Murci, Lluis; Albert, Matthew L
2015-04-01
The Milieu Intérieur Consortium has established a 1000-person healthy population-based study (stratified according to sex and age), creating an unparalleled opportunity for assessing the determinants of human immunologic variance. Herein, we define the criteria utilized for participant enrollment, and highlight the key data that were collected for correlative studies. In this report, we analyzed biological correlates of sex, age, smoking habits, metabolic score and CMV infection. We characterized and identified unique risk factors among healthy donors, as compared to studies that have focused on the general population or disease cohorts. Finally, we highlight sex bias in the thresholds used for metabolic score determination and recommend a deeper examination of current guidelines. In sum, our clinical design, standardized sample collection strategies, and epidemiological data analyses have established the foundation for defining variability within human immune responses. PMID:25562703
49 CFR 350.345 - How does a State apply for additional variances from the FMCSRs?
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 5 2010-10-01 2010-10-01 false How does a State apply for additional variances... apply for additional variances from the FMCSRs? Any State may apply to the Administrator for a variance from the FMCSRs for intrastate commerce. The variance will be granted only if the State...
40 CFR 142.22 - Review of State variances, exemptions and schedules.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Review of State variances, exemptions... State-Issued Variances and Exemptions § 142.22 Review of State variances, exemptions and schedules. (a... regulations the Administrator shall complete a comprehensive review of the variances and exemptions...
29 CFR 4204.21 - Requests to PBGC for variances and exemptions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 9 2010-07-01 2010-07-01 false Requests to PBGC for variances and exemptions. 4204.21... WITHDRAWAL LIABILITY FOR MULTIEMPLOYER PLANS VARIANCES FOR SALE OF ASSETS Procedures for Individual and Class Variances or Exemptions § 4204.21 Requests to PBGC for variances and exemptions. (a) Filing of...
40 CFR 142.21 - State consideration of a variance or exemption request.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false State consideration of a variance or... State-Issued Variances and Exemptions § 142.21 State consideration of a variance or exemption request. A State with primary enforcement responsibility shall act on any variance or exemption request...
29 CFR 4204.11 - Variance of the bond/escrow and sale-contract requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 9 2010-07-01 2010-07-01 false Variance of the bond/escrow and sale-contract requirements... CORPORATION WITHDRAWAL LIABILITY FOR MULTIEMPLOYER PLANS VARIANCES FOR SALE OF ASSETS Variance of the Statutory Requirements § 4204.11 Variance of the bond/escrow and sale-contract requirements. (a)...
ADS on WWW: Doubling Yearly for Five Years
NASA Astrophysics Data System (ADS)
Kurtz, M. J.; Eichhorn, G.; Accomazzi, A.; Grant, C. S.; Murray, S. S.
1998-12-01
It is now five years since the NASA ADS Abstract Service became available on the World Wide Web, in late winter of 1994. Following the explosive growth of the service (when compared with the old proprietary network access system) in the early months of WWW service, ADS growth has settled to doubling yearly. Currently ADS users make 440,000 queries per month, and receive 8,000,000 bibliographic references and 70,000 full-text articles, as well as abstracts, citation histories, links to data, and links to other data centers. Of the 70,000 full-text articles accessed through ADS each month, already 30% are via pointers to the electronic journals. This number is certain to increase. It is difficult to determine the exact number of ADS users. We track usage by the number of unique "cookies" which access ADS, and by the number of unique IP addresses. There are difficulties with each technique. In addition, many non-astronomers find ADS through portal sites like Yahoo, which skews the statistics. 10,000 unique cookies access the full-text articles each month, 17,000 make queries, and 30,000 visit the site. 91% of full-text users have cookies, but only 65% of site visitors do. From another perspective, the number of IP addresses from a single typical research site (STScI) which access the full-text data is within 5% of the number of unique cookies associated with full-text use from stsci.edu, and also within 5% of the number of AAS members listing an STScI address. The number of unique IP addresses from STScI which make any sort of query to ADS is 40% higher than this. Those who access the full text average one article per day; those who make queries average two per day. We believe nearly all active astronomy researchers, as well as students and affiliated professionals, use ADS on a regular basis.
Clarifying the Contribution of Assessee-, Dimension-, Exercise-, and Assessor-Related Effects to Reliable and Unreliable Variance in Assessment Center Ratings
Putka, Dan J; Hoffman, Brian J
2013-01-01
Though considerable research has evaluated the functioning of assessment center (AC) ratings, surprisingly little research has articulated and uniquely estimated the components of reliable and unreliable variance that underlie such ratings. The current study highlights limitations of existing research for estimating components of reliable and unreliable variance in AC ratings. It provides a comprehensive empirical decomposition of variance in AC ratings that: (a) explicitly accounts for assessee-, dimension-, exercise-, and assessor-related effects, (b) does so with 3 large sets of operational data from a multiyear AC program, and (c) avoids many analytic limitations and confounds that have plagued the AC literature to date. In doing so, results show that (a) the extant AC literature has masked the contribution of sizable, substantively meaningful sources of variance in AC ratings, (b) various forms of assessor bias largely appear trivial, and (c) there is far more systematic, nuanced variance present in AC ratings than previous research indicates. Furthermore, this study also illustrates how the composition of reliable and unreliable variance heavily depends on the level to which assessor ratings are aggregated (e.g., overall AC-level, dimension-level, exercise-level) and the generalizations one desires to make based on those ratings. The implications of this study for future AC research and practice are discussed. PMID:23244226
Lorentzian AdS geometries, wormholes, and holography
Arias, Raul E.; Silva, Guillermo A.; Botta Cantcheff, Marcelo
2011-03-15
We investigate the structure of two-point functions for the quantum field theory dual to an asymptotically Lorentzian anti-de Sitter (AdS) wormhole. The bulk geometry is a solution of five-dimensional second-order Einstein-Gauss-Bonnet gravity and causally connects two asymptotically AdS spacetimes. We revisit the Gubser-Klebanov-Polyakov-Witten prescription for computing two-point correlation functions for dual quantum field theory operators O in Lorentzian signature, and we propose to express the bulk fields in terms of the independent boundary values phi_0^± at each of the two asymptotic AdS regions; along the way we exhibit how the ambiguity of normalizable modes in the bulk, related to initial and final states, shows up in the computations. The independent boundary values are interpreted as sources for dual operators O^±, and we argue that, apart from the possibility of entanglement, there exists a coupling between the degrees of freedom living at each boundary. The AdS_{1+1} geometry is also discussed in view of its similar boundary structure. Based on the analysis, we propose a very simple geometric criterion to distinguish coupling from entanglement effects among two sets of degrees of freedom associated with each of the disconnected parts of the boundary.
Self-dual warped AdS3 black holes
NASA Astrophysics Data System (ADS)
Chen, Bin; Ning, Bo
2010-12-01
We study a new class of solutions of three-dimensional topological massive gravity. These solutions can be taken as nonextremal black holes, with their extremal counterparts being discrete quotients of spacelike warped AdS3 along the U(1)_L isometry. We study the thermodynamics of these black holes and show that the first law is satisfied. We also show that for consistent boundary conditions, the asymptotic symmetry generators form only one copy of the Virasoro algebra with central charge c_L = 4νℓ/(G(ν² + 3)), with which the Cardy formula reproduces the black hole entropy. We compute the real-time correlators of scalar perturbations and find a perfect match with the dual conformal field theory (CFT) predictions. Our study provides a novel example of warped AdS/CFT correspondence: the self-dual warped AdS3 black hole is dual to a CFT with nonvanishing left central charge. Moreover, our investigation suggests that the quantum topological massive gravity asymptotic to the same spacelike warped AdS3 in different consistent ways may be dual to different two-dimensional CFTs.
Variance reduction methods applied to deep-penetration problems
Cramer, S.N.
1984-01-01
All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course.
Identifiability, stratification and minimum variance estimation of causal effects.
Tong, Xingwei; Zheng, Zhongguo; Geng, Zhi
2005-10-15
The weakest sufficient condition for the identifiability of causal effects is the weakly ignorable treatment assignment, which implies that potential responses are independent of treatment assignment in each fine subpopulation stratified by a covariate. In this paper, we expand the independence that holds in fine subpopulations to the case where the independence may also hold in several coarse subpopulations, each of which consists of several fine subpopulations and may overlap with other coarse subpopulations. We first show that the identifiability of causal effects occurs if and only if the coarse subpopulations partition the whole population. We then propose a principle, called the minimum variance principle, which says that the estimator possessing the minimum variance is preferred, in dealing with the stratification and the estimation of the causal effects. Simulation results, together with detailed programming, and a practical example demonstrate that this is a feasible and reasonable way to achieve our goals. PMID:16149123
Compounding approach for univariate time series with nonstationary variances.
Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich
2015-12-01
A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances. PMID:26764768
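The windowing step described above can be sketched in a few lines. This is an illustrative toy, not the authors' code: the gamma distribution for the local standard deviations and the window length are assumptions made here purely to show how compounding a local Gaussian over a parameter distribution produces heavy-tailed long-horizon statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nonstationary series: locally Gaussian, but the variance drifts over time
# (hypothetical gamma-distributed local sigmas, chosen for illustration).
n_windows, window = 400, 250
local_sigmas = rng.gamma(shape=2.0, scale=0.5, size=n_windows)
series = np.concatenate([rng.normal(0.0, s, window) for s in local_sigmas])

# Step 1: decompose the signal into windows and estimate local variances.
local_vars = series.reshape(n_windows, window).var(axis=1)

# Step 2: the long-horizon sample statistics average over these local
# variances, so the aggregate has heavier-than-Gaussian tails.
def excess_kurtosis(x):
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean() ** 2 - 3.0

print(excess_kurtosis(series) > 1.0)   # compounded series: heavy tails
print(abs(excess_kurtosis(rng.normal(size=series.size))) < 0.3)  # stationary Gaussian: ~0
```

The empirical distribution of `local_vars` is what the compounding approach fits with a parameter distribution; here the heavy tail of the aggregate is visible directly in its excess kurtosis.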
Female copying increases the variance in male mating success.
Wade, M J; Pruett-Jones, S G
1990-08-01
Theoretical models of sexual selection assume that females choose males independently of the actions and choice of other individual females. Variance in male mating success in promiscuous species is thus interpreted as a result of phenotypic differences among males which females perceive and to which they respond. Here we show that, if some females copy the behavior of other females in choosing mates, the variance in male mating success and therefore the opportunity for sexual selection is greatly increased. Copying behavior is most likely in non-resource-based harem and lek mating systems but may occur in polygynous, territorial systems as well. It can be shown that copying behavior by females is an adaptive alternative to random choice whenever there is a cost to mate choice. We develop a statistical means of estimating the degree of female copying in natural populations where it occurs. PMID:2377613
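The central claim, that copying inflates the variance in male mating success relative to independent choice, can be checked with a minimal simulation. This is a hypothetical toy model, not the authors' statistical estimator: population sizes and the copying probability are arbitrary choices made here.

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_mating_variance(n_females=1000, n_males=50, p_copy=0.5, n_reps=100):
    """Average variance in male mating success when a fraction p_copy of
    females copy the choice of a randomly selected earlier female."""
    variances = []
    for _ in range(n_reps):
        counts = np.zeros(n_males)
        choices = []
        for _ in range(n_females):
            if choices and rng.random() < p_copy:
                choice = choices[rng.integers(len(choices))]  # copy an earlier female
            else:
                choice = int(rng.integers(n_males))           # independent choice
            choices.append(choice)
            counts[choice] += 1
        variances.append(counts.var())
    return float(np.mean(variances))

independent = mean_mating_variance(p_copy=0.0)
copying = mean_mating_variance(p_copy=0.5)
print(copying > independent)  # copying greatly inflates the variance
```

With copying the process resembles preferential attachment: early random successes are reinforced, so a few males accumulate most matings even when all males are phenotypically identical, which is the point of the abstract.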
Response variance in functional maps: neural darwinism revisited.
Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei
2013-01-01
The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population. PMID:23874733
Fidelity between Gaussian mixed states with quantum state quadrature variances
NASA Astrophysics Data System (ADS)
Hai-Long, Zhang; Chun, Zhou; Jian-Hong, Shi; Wan-Su, Bao
2016-04-01
In this paper, starting from the original definition of fidelity for pure states, we first give a well-defined fidelity between two Gaussian mixed states. It is related to the variances of the output and input states in quantum information processing. It is convenient for quantifying quantum teleportation (quantum cloning) experiments, since the variances of the input (output) state are measurable. Furthermore, we also conclude that the fidelity of a pure input state is smaller than the fidelity of a mixed input state in the same quantum information processing. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002) and the Foundation of Science and Technology on Information Assurance Laboratory (Grant No. KJ-14-001).
A surface layer variance heat budget for ENSO
NASA Astrophysics Data System (ADS)
Boucharel, Julien; Timmermann, Axel; Santoso, Agus; England, Matthew H.; Jin, Fei-Fei; Balmaseda, Magdalena A.
2015-05-01
Characteristics of the El Niño-Southern Oscillation (ENSO), such as frequency, propagation, spatial extent, and amplitude, strongly depend on the climatological background state of the tropical Pacific. Multidecadal changes in the ocean mean state are hence likely to modulate ENSO properties. To better link background state variations with low-frequency amplitude changes of ENSO, we develop a diagnostic framework that determines locally the contributions of different physical feedback terms to the ocean surface temperature variance. Our analysis shows that multidecadal changes of ENSO variance originate from the delicate balance between the background-state-dependent positive thermocline feedback and the atmospheric damping of sea surface temperature anomalies. The role of higher-order processes and atmospheric and oceanic nonlinearities is also discussed. The diagnostic tool developed here can be easily applied to other tropical ocean areas and climate phenomena.
A Variance Based Active Learning Approach for Named Entity Recognition
NASA Astrophysics Data System (ADS)
Hassanzadeh, Hamed; Keyvanpour, Mohammadreza
The cost of manually annotating corpora is one of the significant issues in many text-based tasks such as text mining, semantic annotation and, more generally, information extraction. Active learning is an approach that deals with the reduction of labeling costs. In this paper we propose an effective active learning approach based on minimal variance that reduces manual annotation cost by using a small number of manually labeled examples. In our approach we use a confidence measure based on the model's variance that achieves considerable accuracy in annotating entities. Conditional Random Fields (CRFs) are chosen as the underlying learning model due to their promising performance in many sequence labeling tasks. The experiments show that the proposed method needs considerably fewer manually labeled samples to produce a desirable result.
No evidence for anomalously low variance circles on the sky
Moss, Adam; Scott, Douglas; Zibin, James P.
2011-04-01
In a recent paper, Gurzadyan and Penrose claim to have found directions on the sky centred on which are circles of anomalously low variance in the cosmic microwave background (CMB). These features are presented as evidence for a particular picture of the very early Universe. We attempted to repeat the analysis of these authors, and we can indeed confirm that such variations do exist in the temperature variance for annuli around points in the data. However, we find that this variation is entirely expected in a sky which contains the usual CMB anisotropies. In other words, properly simulated Gaussian CMB data contain just the sorts of variations claimed. Gurzadyan and Penrose have not found evidence for pre-Big Bang phenomena, but have simply re-discovered that the CMB contains structure.
Variance estimation for radiation analysis and multi-sensor fusion.
Mitchell, Dean James
2010-09-01
Variance estimates that are used in the analysis of radiation measurements must represent all of the measurement and computational uncertainties in order to obtain accurate parameter and uncertainty estimates. This report describes an approach for estimating components of the variance associated with both statistical and computational uncertainties. A multi-sensor fusion method is presented that renders parameter estimates for one-dimensional source models based on input from different types of sensors. Data obtained with multiple types of sensors improve the accuracy of the parameter estimates, and inconsistencies in measurements are also reflected in the uncertainties for the estimated parameter. Specific analysis examples are presented that incorporate a single gross neutron measurement with gamma-ray spectra that contain thousands of channels. The parameter estimation approach is tolerant of computational errors associated with detector response functions and source model approximations.
Methods for variance reduction in Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Bixler, Joel N.; Hokr, Brett H.; Winblad, Aidan; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, due to the probabilistic nature of these simulations, large numbers of photons are often required in order to generate relevant results. Here, we present methods for reducing the variance of the dose distribution in a computational volume. The dose distribution is computed by tracing a large number of rays and tracking the absorption and scattering of the rays within the discrete voxels that comprise the volume. Variance reduction is demonstrated using quasi-random sampling, interaction forcing for weakly scattering media, and dose smoothing via bilateral filtering. These methods, along with the corresponding performance enhancements, are detailed here.
ERIC Educational Resources Information Center
Liu, Duo; Chen, Xi; Chung, Kevin K. H.
2015-01-01
This study examined the relation between the performance in a visual search task and reading ability in 92 third-grade Hong Kong Chinese children. The visual search task, which is considered a measure of visual-spatial attention, accounted for unique variance in Chinese character reading after controlling for age, nonverbal intelligence,…
Nurse Value-Added and Patient Outcomes in Acute Care
Yakusheva, Olga; Lindrooth, Richard; Weiss, Marianne
2014-01-01
Objective The aims of the study were to (1) estimate the relative nurse effectiveness, or individual nurse value-added (NVA), to patients’ clinical condition change during hospitalization; (2) examine nurse characteristics contributing to NVA; and (3) estimate the contribution of value-added nursing care to patient outcomes. Data Sources/Study Setting Electronic data on 1,203 staff nurses matched with 7,318 adult medical–surgical patients discharged between July 1, 2011 and December 31, 2011 from an urban Magnet-designated, 854-bed teaching hospital. Study Design Retrospective observational longitudinal analysis using a covariate-adjustment value-added model with nurse fixed effects. Data Collection/Extraction Methods Data were extracted from the study hospital's electronic patient records and human resources databases. Principal Findings Nurse effects were jointly significant and explained 7.9 percent of variance in patient clinical condition change during hospitalization. NVA was positively associated with having a baccalaureate degree or higher (0.55, p = .04) and expertise level (0.66, p = .03). NVA contributed to patient outcomes of shorter length of stay and lower costs. Conclusions Nurses differ in their value-added to patient outcomes. The ability to measure individual nurse relative value-added opens the possibility for development of performance metrics, performance-based rankings, and merit-based salary schemes to improve patient outcomes and reduce costs. PMID:25256089
Analysis and application of minimum variance discrete time system identification
NASA Technical Reports Server (NTRS)
Kaufman, H.; Kotob, S.
1975-01-01
An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
Constraining the local variance of H0 from directional analyses
NASA Astrophysics Data System (ADS)
Bengaly, C. A. P., Jr.
2016-04-01
We evaluate the local variance of the Hubble constant H0 with low-z Type Ia supernovae (SNe). Our analyses are performed using a hemispherical comparison method in order to test whether taking the bulk flow motion into account can reconcile the measurement of the Hubble constant H0 from standard candles (H0 = 73.8 ± 2.4 km s-1 Mpc-1) with that of the Planck Cosmic Microwave Background data (H0 = 67.8 ± 0.9 km s-1 Mpc-1). We obtain that H0 ranges from 68.9 ± 0.5 km s-1 Mpc-1 to 71.2 ± 0.7 km s-1 Mpc-1 across the celestial sphere (1σ uncertainty), implying a maximal Hubble constant variance of δH0 = (2.30 ± 0.86) km s-1 Mpc-1 towards the (l,b) = (315°,27°) direction. Interestingly, this result agrees with the bulk flow direction estimates found in the literature, as well as with previous evaluations of the H0 variance due to the presence of nearby inhomogeneities. We assess the statistical significance of this result with different prescriptions of Monte Carlo simulations, obtaining moderate statistical significance, i.e., 68.7% confidence level (CL), for such variance. Furthermore, we test the hypothesis of a higher H0 value in the presence of a bulk flow velocity dipole, finding some evidence for this result which, however, cannot be claimed to be significant due to the current large uncertainty in the SNe distance modulus. We conclude that the tension between different H0 determinations can plausibly be caused by the bulk flow motion of the local Universe, even though the current incompleteness of the SNe data set, both in terms of celestial coverage and distance uncertainties, does not allow a high statistical significance for these results or a definitive conclusion about this issue.
End-state comfort and joint configuration variance during reaching
Solnik, Stanislaw; Pazin, Nemanja; Coelho, Chase J.; Rosenbaum, David A.; Scholz, John P.; Zatsiorsky, Vladimir M.; Latash, Mark L.
2013-01-01
This study joined two approaches to motor control. The first approach comes from cognitive psychology and is based on the idea that goal postures and movements are chosen to satisfy task-specific constraints. The second approach comes from the principle of motor abundance and is based on the idea that control of apparently redundant systems is associated with the creation of multi-element synergies stabilizing important performance variables. The first approach has been tested by relying on psychophysical ratings of comfort. The second approach has been tested by estimating variance along different directions in the space of elemental variables such as joint postures. The two approaches were joined here. Standing subjects performed series of movements in which they brought a hand-held pointer to each of four targets oriented within a frontal plane, close to or far from the body. The subjects were asked to rate the comfort of the final postures, and the variance of their joint configurations during the steady state following pointing was quantified with respect to pointer endpoint position and pointer orientation. The subjects showed consistent patterns of comfort ratings among the targets, and all movements were characterized by multi-joint synergies stabilizing both pointer endpoint position and orientation. Contrary to what was expected, less comfortable postures had higher joint configuration variance than did more comfortable postures without major changes in the synergy indices. Multi-joint synergies stabilized the pointer position and orientation similarly across a range of comfortable/uncomfortable postures. The results are interpreted in terms conducive to the two theoretical frameworks underlying this work, one focusing on comfort ratings reflecting mean postures adopted for different targets and the other focusing on indices of joint configuration variance. PMID:23288326
The Third-Difference Approach to Modified Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1995-01-01
This study gives strategies for estimating the modified Allan variance (mvar) and formulas for computing the equivalent degrees of freedom (edf) of the estimators. A third-difference formulation of mvar leads to a tractable formula for edf in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. First-degree rational-function approximations for edf are derived.
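The third-difference structure of mvar can be made concrete with a small sketch. Below is the textbook estimator of the modified Allan variance from phase samples, in which each term is a second difference of m-averaged phase (equivalently, a third difference of cumulative phase); it illustrates the quantity Greenhall analyzes, not his edf formulas, and the function name and sampling interval `tau0` are our own.

```python
def modified_allan_variance(x, m, tau0=1.0):
    """Estimate Mod sigma_y^2(tau) at tau = m * tau0 from phase samples x.

    Each term is the second difference of m-averaged phase,
    x_bar[j+2m] - 2*x_bar[j+m] + x_bar[j], written here as an average
    of raw second differences over m starting points.
    """
    n = len(x)
    if n < 3 * m:
        raise ValueError("need at least 3*m phase samples")
    tau = m * tau0
    terms = []
    for j in range(n - 3 * m + 1):
        s = sum(x[i + 2 * m] - 2 * x[i + m] + x[i]
                for i in range(j, j + m)) / m
        terms.append(s * s)
    return sum(terms) / (2.0 * tau * tau * len(terms))

# deterministic sanity data (tau0 = 1): linear phase (constant
# frequency) gives zero mvar at every stride; quadratic phase
# x_i = i^2 (linear frequency drift) gives mvar = 2 * m^2 at stride m.
lin = [0.5 * i for i in range(64)]
quad = [float(i * i) for i in range(64)]
```

On real data one would sweep `m` over many strides and plot mvar against tau; the deterministic checks above simply confirm that the third-difference form annihilates linear phase.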
Analysis of Variance in the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
Deloach, Richard
2010-01-01
This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.
A new variance-based global sensitivity analysis technique
NASA Astrophysics Data System (ADS)
Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen
2013-11-01
A new set of variance-based sensitivity indices, called W-indices, is proposed. As with Sobol's indices, both main and total effect indices are defined. The W-main effect indices measure the average reduction of model output variance when the ranges of a set of inputs are reduced, and the total effect indices quantify the average residual variance when the ranges of the remaining inputs are reduced. Geometrical interpretations show that the W-indices gather the full information of the variance ratio function, whereas Sobol's indices reflect only the marginal information. Then the double-loop-repeated-set Monte Carlo (MC) procedure (denoted DLRS MC), the double-loop-single-set MC procedure (denoted DLSS MC) and the model emulation procedure are introduced for estimating the W-indices. It is shown that the DLRS MC procedure is suitable for computing all the W-indices despite its high computational cost. The DLSS MC procedure is computationally efficient; however, it is applicable only for computing low-order indices. Model emulation is able to estimate all the W-indices with low computational cost as long as the model behavior is correctly captured by the emulator. The Ishigami function, a modified Sobol's function and two engineering models are utilized for comparing the W- and Sobol's indices and for verifying the efficiency and convergence of the three numerical methods. Results show that, even for an additive model, the W-total effect index of one input may be significantly larger than its W-main effect index. This indicates that there may exist interaction effects among the inputs of an additive model when their distribution ranges are reduced.
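The double-loop Monte Carlo idea behind these procedures can be sketched for the classic main-effect (Sobol-style) index, which the W-indices generalize: fix one input in an outer loop, average the model over the others in an inner loop, and take the variance of those conditional means. This is an illustrative stand-in, not the W-index estimator itself; the function name, test model and sample sizes are our own choices.

```python
import random

def main_effect_index(model, dim, i, n_outer=1000, n_inner=1000, seed=0):
    """Double-loop MC estimate of S_i = Var(E[Y|X_i]) / Var(Y)
    for independent U(0,1) inputs."""
    rng = random.Random(seed)
    cond_means = []
    all_y = []
    for _ in range(n_outer):
        xi = rng.random()                      # fix X_i in the outer loop
        ys = []
        for _ in range(n_inner):               # average over the rest
            x = [rng.random() for _ in range(dim)]
            x[i] = xi
            ys.append(model(x))
        cond_means.append(sum(ys) / n_inner)
        all_y.extend(ys)
    def var(v):
        m = sum(v) / len(v)
        return sum((t - m) ** 2 for t in v) / len(v)
    return var(cond_means) / var(all_y)

# additive test model Y = X1 + 2*X2: exact indices S_1 = 1/5, S_2 = 4/5
model = lambda x: x[0] + 2.0 * x[1]
```

The nested loops make the cost n_outer × n_inner model runs, which is exactly why the abstract's emulation-based alternative matters for expensive models.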
Cosmic variance of the galaxy cluster weak lensing signal
NASA Astrophysics Data System (ADS)
Gruen, D.; Seitz, S.; Becker, M. R.; Friedrich, O.; Mana, A.
2015-06-01
Intrinsic variations of the projected density profiles of clusters of galaxies at fixed mass are a source of uncertainty for cluster weak lensing. We present a semi-analytical model to account for this effect, based on a combination of variations in halo concentration, ellipticity and orientation, and the presence of correlated haloes. We calibrate the parameters of our model at the 10 per cent level to match the empirical cosmic variance of cluster profiles at M_200m ≈ 10^14-10^15 h^-1 M_⊙, z = 0.25-0.5 in a cosmological simulation. We show that weak lensing measurements of clusters significantly underestimate mass uncertainties if intrinsic profile variations are ignored, and that our model can be used to provide correct mass likelihoods. Effects on the achievable accuracy of weak lensing cluster mass measurements are particularly strong for the most massive clusters and deep observations (with ≈20 per cent uncertainty from cosmic variance alone at M_200m ≈ 10^15 h^-1 M_⊙ and z = 0.25), but significant also under typical ground-based conditions. We show that neglecting intrinsic profile variations leads to biases in the mass-observable relation constrained with weak lensing, both for intrinsic scatter and overall scale (the latter at the 15 per cent level). These biases are in excess of the statistical errors of upcoming surveys and can be avoided if the cosmic variance of cluster profiles is accounted for.
The Column Density Variance-M_s Relationship
NASA Astrophysics Data System (ADS)
Burkhart, Blakesley; Lazarian, A.
2012-08-01
Although there is a wealth of column density tracers for both the molecular and diffuse interstellar medium, there are few observational studies investigating the relationship between the density variance (σ²) and the sonic Mach number (M_s). This is in part because the σ²-M_s relationship is derived, via MHD simulations, for the three-dimensional (3D) density variance only, which is not a direct observable. We investigate the utility of a 2D column density σ²_{Σ/Σ₀}-M_s relationship using solenoidally driven isothermal MHD simulations and find that the best fit follows closely the form of the 3D density σ²_{ρ/ρ₀}-M_s trend but includes a scaling parameter A such that σ²_{ln(Σ/Σ₀)} = A × ln(1 + b²M_s²), where A = 0.11 and b = 1/3. This relation is consistent with the observational data reported for the Taurus and IC 5146 molecular clouds with b = 0.5 and A = 0.16, and b = 0.5 and A = 0.12, respectively. These results open up the possibility of using the 2D column density values of σ² for investigations of the relation between the sonic Mach number and the probability distribution function (PDF) variance, in addition to existing PDF-sonic Mach number relations.
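As a worked example, the fitted relation σ²_{ln(Σ/Σ₀)} = A ln(1 + b²M_s²) can be inverted to read a sonic Mach number off an observed logarithmic column density variance. The A and b values below are the simulation best-fit numbers quoted in the abstract; the function names are ours.

```python
import math

A_FIT, B_FIT = 0.11, 1.0 / 3.0   # best-fit simulation values from the abstract

def log_column_density_variance(mach_s, A=A_FIT, b=B_FIT):
    """sigma^2_{ln(Sigma/Sigma_0)} = A * ln(1 + b^2 * M_s^2)."""
    return A * math.log(1.0 + b * b * mach_s ** 2)

def sonic_mach_from_variance(sigma2, A=A_FIT, b=B_FIT):
    """Invert the relation: M_s = sqrt(exp(sigma2 / A) - 1) / b."""
    return math.sqrt(math.exp(sigma2 / A) - 1.0) / b

# round trip: a cloud with M_s = 6, and back from its variance
sigma2 = log_column_density_variance(6.0)
```

With the observational (A, b) pairs quoted for Taurus or IC 5146, the same inversion applies with those parameters substituted.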
Asymptotically robust variance estimation for person-time incidence rates.
Scosyrev, Emil
2016-05-01
Person-time incidence rates are frequently used in medical research. However, standard estimation theory for this measure of event occurrence is based on the assumption of independent and identically distributed (iid) exponential event times, which implies that the hazard function remains constant over time. Under this assumption and assuming independent censoring, the observed person-time incidence rate is the maximum-likelihood estimator of the constant hazard, and the asymptotic variance of the log rate can be estimated consistently by the inverse of the number of events. However, in many practical applications, the assumption of constant hazard is not very plausible. In the present paper, an average rate parameter is defined as the ratio of the expected event count to the expected total time at risk. This rate parameter is equal to the hazard function under constant hazard. For inference about the average rate parameter, an asymptotically robust variance estimator of the log rate is proposed. Given some very general conditions, the robust variance estimator is consistent under arbitrary iid event times, and is also consistent or asymptotically conservative when event times are independent but nonidentically distributed. In contrast, the standard maximum-likelihood estimator may become anticonservative under nonconstant hazard, producing confidence intervals with less-than-nominal asymptotic coverage. These results are derived analytically and illustrated with simulations. The two estimators are also compared in five datasets from oncology studies. PMID:26439107
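A minimal sketch of the contrast between the two variances for the log rate: the rate is total events over total person-time, and the model-based (constant-hazard) variance of the log rate is 1/D for D events. The robust form below is a common sandwich-type estimator built from per-subject residuals, Σ(d_i - λ̂·t_i)²/D²; that specific form is our assumption for illustration, since the abstract does not spell out the paper's exact estimator.

```python
def person_time_rate_variances(events, times):
    """Given per-subject event counts d_i and times at risk t_i, return
    (rate, model-based var of log rate, sandwich-type var of log rate)."""
    D = sum(events)
    T = sum(times)
    rate = D / T                 # lambda-hat = total events / total person-time
    var_std = 1.0 / D            # constant-hazard (model-based) variance of log rate
    # sandwich form: squared residuals of observed vs expected events
    resid_sq = sum((d - rate * t) ** 2 for d, t in zip(events, times))
    var_robust = resid_sq / D ** 2
    return rate, var_std, var_robust

# toy cohort of three subjects with (events, person-years) =
# (1, 2.0), (0, 3.0), (2, 5.0): rate = 3/10 = 0.3
rate, v_std, v_rob = person_time_rate_variances([1, 0, 2], [2.0, 3.0, 5.0])
```

When the two variances diverge, the sandwich form is the one that remains honest under a nonconstant hazard, at the price of some extra variability in small samples.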
Relationship between Allan variances and Kalman Filter parameters
NASA Technical Reports Server (NTRS)
Vandierendonck, A. J.; Mcgraw, J. B.; Brown, R. G.
1984-01-01
A relationship was constructed between the Allan variance parameters (H_2, H_1, H_0, H_-1, and H_-2) and a Kalman filter model that would be used to estimate and predict clock phase, frequency, and frequency drift. To start, the meaning of these Allan variance parameters, and how they are arrived at for a given frequency source, is reviewed. Although a subset of these parameters is arrived at by measuring phase as a function of time rather than as a spectral density, they all represent phase noise spectral density coefficients, though not necessarily those of a rational spectral density. The phase noise spectral density is then transformed into a time-domain covariance model which can be used to derive the Kalman filter model parameters. Simulation results of that covariance model are presented and compared to clock uncertainties predicted by the Allan variance parameters. A two-state Kalman filter model is then derived and the significance of each state is explained.
Estimating Predictive Variance for Statistical Gas Distribution Modelling
Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo
2009-05-23
Recent publications in statistical gas distribution modelling have proposed algorithms that model both the mean and the variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but rather a significant step forward for the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance makes it possible to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
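The point about evaluating model quality via the data likelihood can be made concrete under a Gaussian assumption on the predictive distribution (our assumption; the paper's actual models are not reproduced in the abstract): score held-out measurements by their average log-likelihood under the model's predictive mean and variance.

```python
import math

def mean_log_likelihood(mu_pred, var_pred, observed):
    """Average Gaussian log-likelihood of held-out measurements under a
    model's per-location predictive mean and variance (higher is better).
    A model that predicts variance well is rewarded; one that is
    overconfident (variance too small) is penalized by the (y-mu)^2/var term."""
    total = 0.0
    for mu, var, y in zip(mu_pred, var_pred, observed):
        total += -0.5 * (math.log(2.0 * math.pi * var) + (y - mu) ** 2 / var)
    return total / len(observed)
```

A mean-only model has to assume some fixed variance to be scored this way, which is exactly why the abstract treats predictive variance estimation as the enabler of likelihood-based comparison.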
Quantifying variances in comparative RNA secondary structure prediction
2013-01-01
Background With the advancement of next-generation sequencing and transcriptomics technologies, regulatory effects involving RNA, in particular RNA structural changes, are being detected. These results often rely on RNA secondary structure predictions. However, current approaches to RNA secondary structure modelling produce predictions with a high variance in predictive accuracy, and we have little quantifiable knowledge about the reasons for these variances. Results In this paper we explore a number of factors which can contribute to poor RNA secondary structure prediction quality. We establish a quantified relationship between alignment quality and loss of accuracy. Furthermore, we define two new measures to quantify uncertainty in alignment-based structure predictions. One of the measures improves on the “reliability score” reported by PPfold, and considers alignment uncertainty as well as base-pair probabilities. The other measure considers the information entropy for SCFGs over a space of input alignments. Conclusions Our measure improves on the PPfold reliability score. We can successfully characterize many of the underlying reasons for, and the variance in, poor prediction quality. However, there is still variability unaccounted for, which we therefore suggest comes from the RNA secondary structure predictive model itself. PMID:23634662
Reduced Variance for Material Sources in Implicit Monte Carlo
Urbatsch, Todd J.
2012-06-25
Implicit Monte Carlo (IMC), a time-implicit method due to Fleck and Cummings, is used for simulating supernovae and inertial confinement fusion (ICF) systems where x-rays tightly and nonlinearly interact with hot material. The IMC algorithm represents absorption and emission within a timestep as an effective scatter. Similarly, the IMC time-implicitness splits off a portion of a material source directly into the radiation field. We have found that some of our variance reduction and particle management schemes will allow large variances in the presence of small, but important, material sources, as in the case of ICF hot electron preheat sources. We propose a modification of our implementation of the IMC method in the Jayenne IMC Project. Instead of battling the sampling issues associated with a small source, we bypass the IMC implicitness altogether and simply deterministically update the material state with the material source if the temperature of the spatial cell is below a user-specified cutoff. We describe the modified method and present results on a test problem that show the elimination of variance for small sources.
VAPOR: variance-aware per-pixel optimal resource allocation.
Eisenberg, Yiftach; Zhai, Fan; Pappas, Thrasyvoulos N; Berry, Randall; Katsaggelos, Aggelos K
2006-02-01
Characterizing the video quality seen by an end-user is a critical component of any video transmission system. In packet-based communication systems, such as wireless channels or the Internet, packet delivery is not guaranteed. Therefore, from the point-of-view of the transmitter, the distortion at the receiver is a random variable. Traditional approaches have primarily focused on minimizing the expected value of the end-to-end distortion. This paper explores the benefits of accounting for not only the mean, but also the variance of the end-to-end distortion when allocating limited source and channel resources. By accounting for the variance of the distortion, the proposed approach increases the reliability of the system by making it more likely that what the end-user sees, closely resembles the mean end-to-end distortion calculated at the transmitter. Experimental results demonstrate that variance-aware resource allocation can help limit error propagation and is more robust to channel-mismatch than approaches whose goal is to strictly minimize the expected distortion. PMID:16479799
Dominance Genetic Variance for Traits Under Directional Selection in Drosophila serrata
Sztepanacz, Jacqueline L.; Blows, Mark W.
2015-01-01
In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait–fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. PMID:25783700
Warped AdS3/dipole-CFT duality
NASA Astrophysics Data System (ADS)
Song, Wei; Strominger, Andrew
2012-05-01
String theory contains solutions with SL(2,R)_R × U(1)_L-invariant warped AdS3 (WAdS3) factors arising as continuous deformations of ordinary AdS3 factors. We propose that some of these are holographically dual to the IR limits of nonlocal dipole-deformed 2D D-brane gauge theories, referred to as "dipole CFTs". Neither the bulk nor the boundary theories are currently well understood, and consequences of the proposed duality for both sides are investigated. The bulk entropy-area law suggests that dipole CFTs have (at large N) a high-energy density of states which does not depend on the deformation parameter. Putting the boundary theory on a spatial circle leads to closed timelike curves in the bulk, suggesting a relation of the latter to dipole-type nonlocality.
New boundary conditions for AdS3
NASA Astrophysics Data System (ADS)
Compère, Geoffrey; Song, Wei; Strominger, Andrew
2013-05-01
New chiral boundary conditions are found for quantum gravity with matter on AdS3. The associated asymptotic symmetry group is generated by a single right-moving U(1) Kac-Moody-Virasoro algebra with c_R = 3ℓ/(2G). The Kac-Moody zero mode generates global left-moving translations and equals, for a BTZ black hole, the sum of the total mass and spin. The level is positive about the global vacuum and negative in the black hole sector, corresponding to ergosphere formation. Realizations arising in Chern-Simons gravity and string theory are analyzed. The new boundary conditions are shown to naturally arise for warped AdS3 in the limit that the warp parameter is taken to zero.
Observing quantum gravity in asymptotically AdS space
NASA Astrophysics Data System (ADS)
Emelyanov, Slava
2015-12-01
The question is studied of whether an observer can discover quantum gravity in the semiclassical regime. It is shown that it is indeed possible to probe a certain quantum gravity effect by employing an appropriately designed detector. The effect is related to the possibility of having topologically inequivalent geometries in the path-integral approach at the same time. A conformal field theory (CFT) state which is expected to describe the eternal anti-de Sitter (AdS) black hole in the large-N limit is discussed. It is argued under certain assumptions that the black hole boundary should be merely a patch of the entire AdS boundary. This then leads to the conclusion that this CFT state is the ordinary CFT vacuum restricted to that patch. If they exist, the bulk CFT operators can behave as in ordinary semiclassical quantum field theory in the large-N limit in the weak sense.
Semiclassical Virasoro blocks from AdS3 gravity
NASA Astrophysics Data System (ADS)
Hijano, Eliot; Kraus, Per; Perlmutter, Eric; Snively, River
2015-12-01
We present a unified framework for the holographic computation of Virasoro conformal blocks at large central charge. In particular, we provide bulk constructions that correctly reproduce all semiclassical Virasoro blocks that are known explicitly from conformal field theory computations. The results revolve around the use of geodesic Witten diagrams, recently introduced in [1], evaluated in locally AdS3 geometries generated by backreaction of heavy operators. We also provide an alternative computation of the heavy-light semiclassical block — in which two external operators become parametrically heavy — as a certain scattering process involving higher spin gauge fields in AdS3; this approach highlights the chiral nature of Virasoro blocks. These techniques may be systematically extended to compute corrections to these blocks and to interpolate amongst the different semiclassical regimes.
Alday-Maldacena Duality and AdS Plateau Problem
NASA Astrophysics Data System (ADS)
Morozov, A.
A short summary of an approximate approach to the study of minimal surfaces in AdS, based on solving the Nambu-Goto equations iteratively. Today, after the partial denunciation of the BDS conjecture, this looks like the only constructive approach to understanding the ways it might be modified and thus to saving the Alday-Maldacena duality. Numerous open technical problems are explicitly formulated throughout the text.
On information loss in AdS3/CFT2
NASA Astrophysics Data System (ADS)
Fitzpatrick, A. Liam; Kaplan, Jared; Li, Daliang; Wang, Junpu
2016-05-01
We discuss information loss from black hole physics in AdS3, focusing on two sharp signatures infecting CFT2 correlators at large central charge c: `forbidden singularities' arising from Euclidean-time periodicity due to the effective Hawking temperature, and late-time exponential decay in the Lorentzian region. We study an infinite class of examples where forbidden singularities can be resolved by non-perturbative effects at finite c, and we show that the resolution has certain universal features that also apply in the general case. Analytically continuing to the Lorentzian regime, we find that the non-perturbative effects that resolve forbidden singularities qualitatively change the behavior of correlators at times t ∼ S_BH, the black hole entropy. This may resolve the exponential decay of correlators at late times in black hole backgrounds. By Borel resumming the 1/c expansion of exact examples, we explicitly identify `information-restoring' effects from heavy states that should correspond to classical solutions in AdS3. Our results suggest a line of inquiry towards a more precise formulation of the gravitational path integral in AdS3.
Supersymmetric giant graviton solutions in AdS3
NASA Astrophysics Data System (ADS)
Mandal, Gautam; Raju, Suvrat; Smedbäck, Mikael
2008-02-01
We parametrize all classical probe brane configurations that preserve four supersymmetries in (a) the extremal D1-D5 geometry, (b) the extremal D1-D5-P geometry, (c) the smooth D1-D5 solutions proposed by Lunin and Mathur, and (d) global AdS3×S3×T4/K3. These configurations consist of D1 branes, D5 branes, and bound states of D5 and D1 branes with the property that a particular Killing vector is tangent to the brane world volume at each point. We show that the supersymmetric sector of the D5-brane world volume theory may be analyzed in an effective 1+1 dimensional framework that places it on the same footing as D1 branes. In global AdS and the corresponding Lunin-Mathur solution, the solutions we describe are “bound” to the center of AdS for generic parameters and cannot escape to infinity. We show that these probes only exist on the submanifold of moduli space where the background BNS field and theta angle vanish. We quantize these probes in the near-horizon region of the extremal D1-D5 geometry and obtain the theory of long strings discussed by Seiberg and Witten.
Estimation of Noise-Free Variance to Measure Heterogeneity
Winkler, Tilo; Melo, Marcos F. Vidal; Degani-Costa, Luiza H.; Harris, R. Scott; Correia, John A.; Musch, Guido; Venegas, Jose G.
2015-01-01
Variance is a statistical parameter used to characterize heterogeneity or variability in data sets. However, measurements commonly include noise, as random errors superimposed on the actual value, which may substantially increase the variance compared to a noise-free data set. Our aim was to develop and validate a method to estimate noise-free spatial heterogeneity of pulmonary perfusion using dynamic positron emission tomography (PET) scans. On theoretical grounds, we demonstrate a linear relationship between the total variance of a data set derived from averages of n multiple measurements and the reciprocal of n. Using multiple measurements with varying n yields estimates of the linear relationship, including the noise-free variance as the constant parameter. In PET images, n is proportional to the number of registered decay events, and the variance of the image is typically normalized by the square of its mean value, yielding a coefficient of variation squared (CV²). The method was evaluated with a Jaszczak phantom as reference spatial heterogeneity (CV_r²) for comparison with our estimate of noise-free or 'true' heterogeneity (CV_t²). We found that CV_t² was only 5.4% higher than CV_r². Additional evaluations were conducted on 38 PET scans of pulmonary perfusion using 13NN-saline injection. The mean CV_t² was 0.10 (range: 0.03–0.30), while the mean CV² including noise was 0.24 (range: 0.10–0.59). CV_t² was on average 41.5% of the CV² measured including noise (range: 17.8–71.2%). The reproducibility of CV_t² was evaluated using three repeated PET scans from five subjects. Individual CV_t² were within 16% of each subject's mean, and paired t-tests revealed no difference among the results from the three consecutive PET scans. In conclusion, our method provides reliable noise-free estimates of CV_t² in PET scans, and may be useful for similar statistical problems in experimental data. PMID:25906374
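The linear relationship at the heart of the method, total variance versus 1/n with the noise-free variance as the intercept, can be sketched as an ordinary least-squares fit followed by extrapolation to n → ∞. The function name and the synthetic numbers below are ours, for illustration only.

```python
def noise_free_variance(n_values, total_variances):
    """Fit total_variance = v_true + k / n by least squares on x = 1/n
    and return the intercept v_true, the noise-free variance estimate
    (i.e., the extrapolation to infinitely many averaged measurements)."""
    xs = [1.0 / n for n in n_values]
    ys = list(total_variances)
    m = len(xs)
    x_mean = sum(xs) / m
    y_mean = sum(ys) / m
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
             / sum((x - x_mean) ** 2 for x in xs))
    return y_mean - slope * x_mean   # intercept of the fitted line

# synthetic check: true heterogeneity CV_t^2 = 0.10 with a noise
# contribution of 0.5 / n, measured at several averaging depths n
ns = [1, 2, 4, 8, 16]
cv2 = [0.10 + 0.5 / n for n in ns]
```

With real PET data the (n, CV²) pairs are noisy, so the intercept carries a fitting uncertainty, but the estimator is the same straight-line extrapolation.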
Automated Variance Reduction Applied to Nuclear Well-Logging Problems
Wagner, John C; Peplow, Douglas E.; Evans, Thomas M
2009-01-01
The Monte Carlo method enables detailed, explicit geometric, energy and angular representations, and hence is considered to be the most accurate method available for solving complex radiation transport problems. Because of its associated accuracy, the Monte Carlo method is widely used in the petroleum exploration industry to design, benchmark, and simulate nuclear well-logging tools. Nuclear well-logging tools, which contain neutron and/or gamma sources and two or more detectors, are placed in boreholes that contain water (and possibly other fluids) and that are typically surrounded by a formation (e.g., limestone, sandstone, calcites, or a combination). The response of the detectors to radiation returning from the surrounding formation is used to infer information about the material porosity, density, composition, and associated characteristics. Accurate computer simulation is a key aspect of this exploratory technique. However, because this technique involves calculating highly precise responses (at two or more detectors) based on radiation that has interacted with the surrounding formation, the transport simulations are computationally intensive, requiring significant use of variance reduction techniques, parallel computing, or both. Because of the challenging nature of these problems, nuclear well-logging problems have frequently been used to evaluate the effectiveness of variance reduction techniques (e.g., Refs. 1-4). The primary focus of these works has been on improving the computational efficiency associated with calculating the response at the most challenging detector location, which is typically the detector furthest from the source. Although the objective of nuclear well-logging simulations is to calculate the response at multiple detector locations, until recently none of the numerous variance reduction methods/techniques has been well-suited to simultaneous optimization of multiple detector (tally) regions. Therefore, a separate calculation is typically required for each detector region.
Sanz, E.; Voss, C.I.
2006-01-01
Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only concentration observations.
Bending AdS waves with new massive gravity
NASA Astrophysics Data System (ADS)
Ayón-Beato, Eloy; Giribet, Gaston; Hassaïne, Mokhtar
2009-05-01
We study AdS-waves in the three-dimensional new theory of massive gravity recently proposed by Bergshoeff, Hohm, and Townsend. The general configuration of this type is derived and shown to exhibit different branches with different asymptotic behaviors. In particular, for the special fine tuning m2 = ±1/(2l2), solutions with logarithmic fall-off arise, while in the range m2 > -1/(2l2), spacetimes with Schrödinger isometry group are admitted as solutions. Spacetimes that are asymptotically AdS3, both for the Brown-Henneaux and for the weakened boundary conditions, are also identified. The metric function that characterizes the profile of the AdS-wave behaves as a massive excitation on the spacetime, with an effective mass given by meff2 = m2 - 1/(2l2). For the critical value m2 = -1/(2l2), the effective mass precisely saturates the Breitenlohner-Freedman bound for the AdS3 space on which the wave propagates. Analogies with the AdS-wave solutions of topologically massive gravity are also discussed. In addition, we consider the coupling of both massive deformations to Einstein gravity and find the exact configurations for the complete theory, discussing all the different branches exhaustively. One effect of introducing the gravitational Chern-Simons term is to break the degeneracy in the effective mass of the generic modes of pure new massive gravity, producing a fine structure due to parity violation. Another effect is that the zoo of exact logarithmic specimens becomes considerably enlarged.
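The saturation claim can be verified in one line (a reader's check, not part of the record): in AdS3 the Breitenlohner-Freedman bound m² ≥ -d²/(4l²) with d = 2 gives m²_BF = -1/l², and substituting the critical value into the effective-mass relation lands exactly on it,

```latex
m_{\mathrm{eff}}^2 \;=\; m^2 - \frac{1}{2\ell^2}
\;\;\xrightarrow{\;m^2 \,=\, -1/(2\ell^2)\;}\;\;
m_{\mathrm{eff}}^2 \;=\; -\frac{1}{\ell^2} \;=\; m_{\mathrm{BF}}^2\big|_{\mathrm{AdS}_3}.
```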
Brodsky, Stanley J.; de Teramond, Guy F.; /Costa Rica U. /SLAC
2007-02-21
The AdS/CFT correspondence between string theory in AdS space and conformal field theories in physical spacetime leads to an analytic, semi-classical model for strongly-coupled QCD which has scale invariance and dimensional counting at short distances and color confinement at large distances. Although QCD is not conformally invariant, one can nevertheless use the mathematical representation of the conformal group in five-dimensional anti-de Sitter space to construct a first approximation to the theory. The AdS/CFT correspondence also provides insights into the inherently non-perturbative aspects of QCD, such as the orbital and radial spectra of hadrons and the form of hadronic wavefunctions. In particular, we show that there is an exact correspondence between the fifth-dimensional coordinate of AdS space z and a specific impact variable {zeta} which measures the separation of the quark and gluonic constituents within the hadron in ordinary space-time. This connection allows one to compute the analytic form of the frame-independent light-front wavefunctions, the fundamental entities which encode hadron properties and allow the computation of decay constants, form factors, and other exclusive scattering amplitudes. New relativistic light-front equations in ordinary space-time are found which reproduce the results obtained using the 5-dimensional theory. The effective light-front equations possess remarkable algebraic structures and integrability properties. Since they are complete and orthonormal, the AdS/CFT model wavefunctions can also be used as a basis for the diagonalization of the full light-front QCD Hamiltonian, thus systematically improving the AdS/CFT approximation.
Ultraviolet asymptotics and singular dynamics of AdS perturbations
NASA Astrophysics Data System (ADS)
Craps, Ben; Evnin, Oleg; Vanhoof, Joris
2015-10-01
Important insights into the dynamics of spherically symmetric AdS-scalar field perturbations can be obtained by considering a simplified time-averaged theory accurately describing perturbations of amplitude ɛ on time-scales of order 1/ ɛ 2. The coefficients of the time-averaged equations are complicated expressions in terms of the AdS scalar field mode functions, which are in turn related to the Jacobi polynomials. We analyze the behavior of these coefficients for high frequency modes. The resulting asymptotics can be useful for understanding the properties of the finite-time singularity in solutions of the time-averaged theory recently reported in the literature. We highlight, in particular, the gauge dependence of these asymptotics with respect to the two most commonly used gauges. The harsher growth of the coefficients at large frequencies in higher-dimensional AdS suggests strengthening of turbulent instabilities in higher dimensions. In the course of our derivations, we arrive at recursive relations for the coefficients of the time-averaged theory that are likely to be useful for evaluating them more efficiently in numerical simulations.
Unique Challenges Testing SDRs for Space
NASA Technical Reports Server (NTRS)
Johnson, Sandra; Chelmins, David; Downey, Joseph; Nappier, Jennifer
2013-01-01
This paper describes the approach used by the Space Communication and Navigation (SCaN) Testbed team to qualify three Software Defined Radios (SDR) for operation in space and the characterization of the platform to enable upgrades on-orbit. The three SDRs represent a significant portion of the new technologies being studied on board the SCAN Testbed, which is operating on an external truss on the International Space Station (ISS). The SCaN Testbed provides experimenters an opportunity to develop and demonstrate experimental waveforms and applications for communication, networking, and navigation concepts and advance the understanding of developing and operating SDRs in space. Qualifying a Software Defined Radio for the space environment requires additional consideration versus a hardware radio. Tests that incorporate characterization of the platform to provide information necessary for future waveforms, which might exercise extended capabilities of the hardware, are needed. The development life cycle for the radio follows the software development life cycle, where changes can be incorporated at various stages of development and test. It also enables flexibility to be added with minor additional effort. Although this provides tremendous advantages, managing the complexity inherent in a software implementation requires testing beyond the traditional hardware radio test plan. Due to schedule and resource limitations and parallel development activities, the subsystem testing of the SDRs at the vendor sites was primarily limited to typical fixed transceiver type of testing. NASA's Glenn Research Center (GRC) was responsible for the integration and testing of the SDRs into the SCaN Testbed system and conducting the investigation of the SDR to advance the technology to be accepted by missions. This paper will describe the unique tests that were conducted at both the subsystem and system level, including environmental testing, and present results. For example, test
An Empirical Temperature Variance Source Model in Heated Jets
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Bridges, James
2012-01-01
An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.
FMRI group analysis combining effect estimates and their variances.
Chen, Gang; Saad, Ziad S; Nath, Audrey R; Beauchamp, Michael S; Cox, Robert W
2012-03-01
Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an approach that does not make such strong assumptions, and present a computationally efficient frequentist approach to FMRI group analysis, which we term mixed-effects multilevel analysis (MEMA), that incorporates both the variability across subjects and the precision estimate of each effect of interest from individual subject analyses. On average, the more accurate tests result in higher statistical power, especially when conventional variance assumptions do not hold, or in the presence of outliers. In addition, various heterogeneity measures are available with MEMA that may assist the investigator in further improving the modeling. Our method allows group effect t-tests and comparisons among conditions and among groups. In addition, it has the capability to incorporate subject-specific covariates such as age, IQ, or behavioral data. Simulations were performed to illustrate power comparisons and the capability of controlling type I errors among various significance testing methods, and the results indicated that the testing statistic we adopted struck a good balance between power gain and type I error control. Our approach is instantiated in an open-source, freely distributed program that may be used on any dataset stored in the Neuroimaging Informatics Technology Initiative (NIfTI) format. To date, the main impediment for more accurate testing that incorporates both within- and cross-subject variability has been the high computational cost. Our efficient implementation makes this approach
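The core of the MEMA idea, in its simplest fixed-effects form (an assumed simplification for illustration, not the authors' implementation), is to combine per-subject effect estimates with their within-subject variances by inverse-variance weighting rather than averaging the estimates alone:

```python
import numpy as np

# Inverse-variance weighting: each subject's effect estimate beta_i enters
# with weight 1/v_i, where v_i is its within-subject variance, instead of
# the equal weights a conventional group average would use.
def weighted_group_effect(beta, v):
    beta = np.asarray(beta, dtype=float)
    w = 1.0 / np.asarray(v, dtype=float)
    estimate = np.sum(w * beta) / np.sum(w)
    variance = 1.0 / np.sum(w)       # variance of the combined estimate
    return estimate, variance

beta = [0.8, 1.2, 1.0, 2.0]          # per-subject effect estimates (toy values)
v = [0.1, 0.1, 0.1, 10.0]            # the last subject is very noisy
est, var = weighted_group_effect(beta, v)
# The noisy subject is down-weighted: est stays near 1.0, whereas the
# unweighted mean of the four estimates would be 1.25.
```

The full MEMA model additionally estimates cross-subject variance and handles outliers, which this fixed-effects sketch omits.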
Monte Carlo calculation of specific absorbed fractions: variance reduction techniques
NASA Astrophysics Data System (ADS)
Díaz-Londoño, G.; García-Pareja, S.; Salvat, F.; Lallena, A. M.
2015-04-01
The purpose of the present work is to calculate specific absorbed fractions using variance reduction techniques and assess the effectiveness of these techniques in improving the efficiency (i.e. reducing the statistical uncertainties) of simulation results in cases where the distance between the source and the target organs is large and/or the target organ is small. The variance reduction techniques of interaction forcing and an ant colony algorithm, which drives the application of splitting and Russian roulette, were applied in Monte Carlo calculations performed with the code penelope for photons with energies from 30 keV to 2 MeV. In the simulations we used a mathematical phantom derived from the well-known MIRD-type adult phantom. The thyroid gland was assumed to be the source organ and urinary bladder, testicles, uterus and ovaries were considered as target organs. Simulations were performed, for each target organ and for photons with different energies, using these variance reduction techniques, all run on the same processor and during a CPU time of 1.5 · 10^5 s. For energies above 100 keV both interaction forcing and the ant colony method allowed reaching relative uncertainties of the average absorbed dose in the target organs below 4% in all studied cases. When these two techniques were used together, the uncertainty was further reduced, by a factor of 0.5 or less. For photons with energies below 100 keV, an adapted initialization of the ant colony algorithm was required. By using interaction forcing and the ant colony algorithm, realistic values of the specific absorbed fractions can be obtained with relative uncertainties small enough to permit discriminating among simulations performed with different Monte Carlo codes and phantoms. The methodology described in the present work can be employed to calculate specific absorbed fractions for arbitrary arrangements, i.e. energy spectrum of primary radiation, phantom model and source and target organs.
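Splitting and Russian roulette, the two moves an ant-colony (or weight-window) scheme drives, can be sketched as a filter on particle weights; the thresholds below are invented for illustration, not taken from the paper:

```python
import random

# Weight-window filter: particles above the window are split into several
# lighter copies (exact weight conservation); particles below it play
# Russian roulette (weight conserved only in expectation).
def apply_weight_window(particles, w_low=0.5, w_high=2.0, rng=random.random):
    out = []
    for w in particles:                 # each particle is represented by its weight
        if w > w_high:                  # split into n copies of weight w/n
            n = int(w // w_high) + 1
            out.extend([w / n] * n)
        elif w < w_low:                 # Russian roulette: survive with prob w/w_low
            if rng() < w / w_low:
                out.append(w_low)       # survivor is promoted to weight w_low
            # else the particle is terminated
        else:
            out.append(w)               # inside the window: leave unchanged
    return out

# Splitting is exactly weight-conserving: a w=4 particle becomes three of 4/3.
survivors = apply_weight_window([4.0, 1.0])
```

Because heavy particles are split near the regions of interest and light ones are killed off cheaply, the same CPU time buys more statistically useful histories.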
Regression between earthquake magnitudes having errors with known variances
NASA Astrophysics Data System (ADS)
Pujol, Jose
2016-06-01
Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = a x + b. This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for the x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a discussed in the literature but not proved, or proved for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65 % of them. For the remaining 35 %, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
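The closed solution the abstract refers to for homoscedastic errors is the classical Deming regression; a sketch in the abstract's notation (fit y = a x + b given only the error-variance ratio delta = var_y/var_x), not the paper's own least-squares derivation:

```python
import numpy as np

# Deming regression: closed-form slope and intercept when both variables
# carry error and only the ratio delta of their error variances is known.
def deming(X, Y, delta=1.0):
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    xb, yb = X.mean(), Y.mean()
    sxx = np.mean((X - xb) ** 2)                 # sample (co)variances
    syy = np.mean((Y - yb) ** 2)
    sxy = np.mean((X - xb) * (Y - yb))
    a = (syy - delta * sxx
         + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    b = yb - a * xb
    return a, b

# Sanity check: a noise-free line is recovered exactly.
X = np.array([4.0, 5.0, 5.5, 6.0, 7.0])
a, b = deming(X, 2.0 * X - 1.0)                  # true slope 2, intercept -1
```

With delta = 1 this reduces to orthogonal regression; ordinary least squares is the limit delta → ∞ (all error in y).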
MC Estimator Variance Reduction with Antithetic and Common Random Fields
NASA Astrophysics Data System (ADS)
Guthke, P.; Bardossy, A.
2011-12-01
Monte Carlo methods are widely used to estimate the outcome of complex physical models. For physical models with spatial parameter uncertainty, it is common to apply spatial random functions to the uncertain variables, which can then be used to interpolate between known values or to simulate a number of equally likely realizations. The price that has to be paid for such a stochastic approach is many simulations of the physical model instead of a single run with one 'best' input parameter set. The number of simulations is often limited by computational constraints, so that a modeller has to compromise between the benefit of increased accuracy of the results and the cost of massively increased computational time. Our objective is to reduce the estimator variance of dependent variables in Monte Carlo frameworks. To this end, we adapt two variance reduction techniques (antithetic variates and common random numbers) to a sequential random field simulation scheme that uses copulas as spatial dependence functions. The proposed methodology leads to pairs of spatial random fields with special structural properties that are advantageous in MC frameworks. Antithetic random fields (ARF) exhibit a reversed structure on the large scale, while the dependence on the local scale is preserved. Common random fields (CRF) show the same large-scale structures, but different spatial dependence on the local scale. The performance of the proposed methods is examined with two typical applications from stochastic hydrogeology. It is shown that ARF can massively reduce the number of simulation runs required for convergence in Monte Carlo frameworks while keeping the same accuracy in terms of estimator variance. Furthermore, in multi-model frameworks such as sensitivity analysis of the spatial structure, where more than one spatial dependence model is used, the influence of different dependence structures becomes obvious
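Antithetic variates in their simplest scalar form pair each uniform draw u with its mirror 1 − u; for a monotone integrand the pair is negatively correlated, so the paired estimator has lower variance at equal sampling cost. The abstract extends this pairing to whole spatial random fields (ARF); the following is only a sketch of the underlying principle, not the authors' field simulator:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.exp                                   # monotone integrand on [0, 1]; integral = e - 1

n = 20_000
plain = f(rng.random(2 * n))                 # 2n independent evaluations
u = rng.random(n)
anti = 0.5 * (f(u) + f(1.0 - u))             # n antithetic pairs, also 2n evaluations

var_plain = plain.var() / (2 * n)            # variance of the plain MC estimator
var_anti = anti.var() / n                    # variance of the antithetic estimator
# Both estimators are unbiased for e - 1; the antithetic one is far tighter here.
```

Common random numbers, the second technique, instead reuse the same uniform stream across model variants so that differences between variants are estimated with reduced noise.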
Reducing sample variance: halo biasing, non-linearity and stochasticity
NASA Astrophysics Data System (ADS)
Gil-Marín, Héctor; Wagner, Christian; Verde, Licia; Jimenez, Raul; Heavens, Alan F.
2010-09-01
Comparing clustering of differently biased tracers of the dark matter distribution offers the opportunity to reduce the sample or cosmic variance error in the measurement of certain cosmological parameters. We develop a formalism that includes bias non-linearities and stochasticity. Our formalism is general enough that it can be used to optimize survey design and tracer selection and optimally split (or combine) tracers to minimize the error on the cosmologically interesting quantities. Our approach generalizes the one presented by McDonald & Seljak of circumventing sample variance in the measurement of f ≡ d ln D/d ln a. We analyse how the bias, the noise, the non-linearity and stochasticity affect the measurements of Df and explore in which signal-to-noise regime it is significantly advantageous to split a galaxy sample in two differently biased tracers. We use N-body simulations to find realistic values for the parameters describing the bias properties of dark matter haloes of different masses and their number density. We find that, even if dark matter haloes could be used as tracers and selected in an idealized way, for realistic haloes, the sample variance limit can be reduced only by up to a factor σ2tr/σ1tr ≈ 0.6. This would still correspond to the gain from a three times larger survey volume if the two tracers were not to be split. Before any practical application one should bear in mind that these findings apply to dark matter haloes as tracers, while realistic surveys would select galaxies: the galaxy-host halo relation is likely to introduce extra stochasticity, which may reduce the gain further.
Fringe biasing: A variance reduction technique for optically thick meshes
Smedley-Stevenson, R. P.
2013-07-01
Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)
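The variance-reduction mechanism behind fringe biasing can be illustrated with a toy analogue (geometry, fringe width, and particle allocation all invented here): particles emitted uniformly in a purely absorbing slab escape through the boundary with probability exp(−x), so concentrating samples in the fringe near the boundary, with compensating weights, tightens the escape-rate estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Slab of optical depth L; emission position x uniform on [0, L]; a particle
# emitted at depth x escapes with probability exp(-x).  The fringe is [0, w].
L, w, n = 20.0, 5.0, 50_000

def score(x):
    # escape is a Bernoulli event with probability exp(-x)
    return (rng.random(x.size) < np.exp(-x)).astype(float)

# Analog sampling: emission position uniform over the whole cell.
analog = score(rng.uniform(0.0, L, n))

# Fringe biasing: 90% of particles in the fringe, 10% in the interior, each
# carrying weight (stratum volume fraction / particle fraction) so the
# estimator stays unbiased.
nf = int(0.9 * n)
xf = rng.uniform(0.0, w, nf)          # fringe emissions
xi = rng.uniform(w, L, n - nf)        # interior emissions (rarely escape)
wf = (w / L) / 0.9                    # fringe particle weight
wi = ((L - w) / L) / 0.1              # interior particle weight
biased = np.concatenate([wf * score(xf), wi * score(xi)])
# Both arrays estimate the same escape fraction; the biased one with
# markedly lower per-particle variance.
```

As in the paper, the choice of fringe width matters: if the interior still escapes appreciably, its large-weight particles can reintroduce variance.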
Multi-observable Uncertainty Relations in Product Form of Variances
Qin, Hui-Hui; Fei, Shao-Ming; Li-Jost, Xianqing
2016-01-01
We investigate the product-form uncertainty relations of variances for n (n ≥ 3) quantum observables. In particular, tight uncertainty relations satisfied by three observables are derived, which are shown to be better than the ones derived from the strengthened Heisenberg and the generalized Schrödinger uncertainty relations, and than some existing uncertainty relations for three spin-half operators. An uncertainty relation for an arbitrary number of observables is also derived. As an example, the uncertainty relation satisfied by the eight Gell-Mann matrices is presented. PMID:27498851
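The classical baseline these product-form relations improve on is the pairwise Robertson bound Var(A)·Var(B) ≥ |⟨[A,B]⟩|²/4, which can be checked numerically for the spin-half operators; the paper's tighter three-observable bounds are not reproduced in this sketch.

```python
import numpy as np

# Pauli matrices: the three spin-half observables.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(2)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                 # random pure qubit state

def expval(op):
    return np.conj(psi) @ op @ psi         # <psi| op |psi>

def variance(op):
    return float(np.real(expval(op @ op) - expval(op) ** 2))

# Robertson bound holds for every pair of Pauli observables.
for A, B in [(sx, sy), (sy, sz), (sz, sx)]:
    lhs = variance(A) * variance(B)
    rhs = abs(expval(A @ B - B @ A)) ** 2 / 4.0
    assert lhs >= rhs - 1e-12
```

For pure qubit states the gap lhs − rhs equals the product of the two remaining squared expectation values, so the pairwise bound is loose whenever all three spin components have nonzero expectation, which is what motivates genuinely multi-observable relations.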
Self-Tuning Continuous-Time Generalized Minimum Variance Control
NASA Astrophysics Data System (ADS)
Hoshino, Ryota; Mori, Yasuchika
The generalized minimum variance control (GMVC) is one of the design methods of self-tuning control (STC). In general, STC is applied as a discrete-time (DT) design technique. However, depending on the choice of the sampling period, the DT design technique can generate unstable zeros and time delays, and can fail to capture the behavior of the controlled object. For this reason, we propose a continuous-time (CT) design technique for GMVC, which we call CGMVC. In this paper, we confirm some advantages of CGMVC and provide a numerical example.
Simulation Study Using a New Type of Sample Variance
NASA Technical Reports Server (NTRS)
Howe, D. A.; Lainson, K. J.
1996-01-01
We evaluate with simulated data a new type of sample variance for the characterization of frequency stability. The new statistic (referred to as TOTALVAR and its square root TOTALDEV) is a better predictor of long-term frequency variations than the present sample Allan deviation. The statistical model uses the assumption that a time series of phase or frequency differences is wrapped (periodic) with overall frequency difference removed. We find that the variability at long averaging times is reduced considerably for the five models of power-law noise commonly encountered with frequency standards and oscillators.
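For reference, the statistic TOTALVAR is compared against is the (overlapping) Allan variance computed from phase data; a sketch of that baseline follows. TOTALVAR's distinguishing step, extending the detrended series periodically ("wrapped") before averaging, is not implemented here.

```python
import numpy as np

# Overlapping Allan variance from phase data x[i] sampled every tau0 seconds:
#   AVAR(m*tau0) = < (x[i+2m] - 2*x[i+m] + x[i])^2 > / (2*(m*tau0)^2)
def allan_variance(x, m, tau0=1.0):
    x = np.asarray(x, dtype=float)
    d = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]      # overlapping 2nd differences
    return np.mean(d ** 2) / (2.0 * (m * tau0) ** 2)

# White frequency noise: phase is a random walk, and AVAR(tau) falls as 1/tau,
# one of the five power-law noise types mentioned in the abstract.
rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(size=100_000))
a1 = allan_variance(x, 1)       # ~1.0 for unit-variance frequency noise
a10 = allan_variance(x, 10)     # ~0.1: the expected 1/tau slope
```

The square roots of these quantities are the Allan deviation values whose long-averaging-time variability TOTALDEV is designed to reduce.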
Analysis of variance tables based on experimental structure.
Brien, C J
1983-03-01
A stepwise procedure for obtaining the experimental structure for a particular experiment is presented together with rules for deriving the analysis-of-variance table from that structure. The procedure involves the division of the factors into groups and is essentially a generalization of the method of Nelder (1965, Proceedings of the Royal Society, Series A 283, 147-162; 1965, Proceedings of the Royal Society, Series A 283, 163-178), to what are termed 'multi-tiered' experiments. The proposed method is illustrated for a wine-tasting experiment. PMID:6871362
Analysis of variance of thematic mapping experiment data.
Rosenfield, G.H.
1981-01-01
As an example of the methodology, data from an experiment using three scales of land-use and land-cover mapping have been analyzed. The binomial proportions of correct interpretations have been analyzed untransformed and transformed by both the arcsine and the logit transformations. A weighted analysis of variance adjustment has been used. There is evidence of a significant difference among the three scales of mapping (1:24 000, 1:100 000 and 1:250 000) using the transformed data. Multiple range tests showed that all three scales are different for the arcsine transformed data. - from Author
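The first of the two transformations used above is the variance-stabilizing arcsine transform for binomial proportions: after y = arcsin(√p̂), Var(y) ≈ 1/(4n) regardless of the true proportion, which is what makes a weighted analysis of variance across map scales tractable. A quick empirical check (sample sizes invented, not from the mapping experiment):

```python
import numpy as np

# Arcsine (angular) transform of binomial proportions correct/n.
def arcsine_transform(correct, n):
    p_hat = np.asarray(correct, dtype=float) / n
    return np.arcsin(np.sqrt(p_hat))

rng = np.random.default_rng(4)
n = 200                                   # interpretations per map, hypothetical
for p in (0.2, 0.5, 0.8):
    y = arcsine_transform(rng.binomial(n, p, size=20_000), n)
    # empirical variance of y stays near the nominal 1/(4n) for every p,
    # unlike the raw proportions whose variance p(1-p)/n depends on p
```

The logit transform used alongside it stabilizes nothing by itself, which is why the abstract's analysis applies explicit weights.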
AVATAR -- Automatic variance reduction in Monte Carlo calculations
Van Riper, K.A.; Urbatsch, T.J.; Soran, P.D.
1997-05-01
AVATAR™ (Automatic Variance And Time of Analysis Reduction), accessed through the graphical user interface application Justine™, is a superset of MCNP™ that automatically invokes THREEDANT™ for a three-dimensional deterministic adjoint calculation on a mesh independent of the Monte Carlo geometry, calculates weight windows, and runs MCNP. Computational efficiency increases by a factor of 2 to 5 for a three-detector oil well logging tool model. Human efficiency increases dramatically, since AVATAR eliminates the need for deep intuition and hours of tedious handwork.
Large-scale magnetic variances near the South Solar Pole
NASA Technical Reports Server (NTRS)
Jokipii, J. R.; Kota, J.; Smith, E.; Horbury, T.; Giacalone, J.
1995-01-01
We summarize recent Ulysses observations of the variances over large temporal scales in the interplanetary magnetic field components and their increase as Ulysses approached the South Solar Pole. A model of these fluctuations is shown to provide a very good fit to the observed amplitude and temporal variation of the fluctuations. In addition, the model predicts that the transport of cosmic rays in the heliosphere will be significantly altered by this level of fluctuations. In addition to altering the inward diffusion and drift access of cosmic rays over the solar poles, we find that the magnetic fluctuations also imply a large latitudinal diffusion, caused primarily by the associated field-line random walk.
Variance and bias computation for enhanced system identification
NASA Technical Reports Server (NTRS)
Bergmann, Martin; Longman, Richard W.; Juang, Jer-Nan
1989-01-01
A study is made of the use of a series of variance and bias confidence criteria recently developed for the eigensystem realization algorithm (ERA) identification technique. The criteria are shown to be very effective, not only for indicating the accuracy of the identification results (especially in terms of confidence intervals), but also for helping the ERA user to obtain better results. They help determine the best sample interval, the true system order, how much data to use and whether to introduce gaps in the data used, what dimension Hankel matrix to use, and how to limit the bias or correct for bias in the estimates.
Variance reduction in Monte Carlo analysis of rarefied gas diffusion
NASA Technical Reports Server (NTRS)
Perlmutter, M.
1972-01-01
The present analysis uses the Monte Carlo method to solve the problem of rarefied diffusion between parallel walls. The diffusing molecules are evaporated or emitted from one of two parallel walls and diffuse through another molecular species. The analysis treats the diffusing molecule as undergoing a Markov random walk, and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs, the expected Markov walk payoff is retained but its variance is reduced, so that the Monte Carlo result has a much smaller error.
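The biased-random-walk idea above is ordinary importance sampling: sampling probabilities are distorted and payoffs re-weighted by the likelihood ratio, so the expectation is unchanged while the variance drops. A minimal sketch of that mechanism on a toy rare-event problem (a shifted-proposal estimate of a Gaussian tail, not the rarefied-gas walk itself; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: P(X > 3) for X ~ N(0, 1); exact value is about 1.3499e-3.
n = 100_000

# Analog (unbiased) estimator: high variance, because the event is rare.
x = rng.standard_normal(n)
analog = (x > 3.0).astype(float)

# Importance sampling: draw from a shifted proposal N(3, 1) and re-weight each
# sample by the likelihood ratio p(y)/q(y). The estimator stays unbiased while
# samples concentrate where the payoff is nonzero, so the variance shrinks.
y = rng.normal(3.0, 1.0, n)
w = np.exp(-0.5 * y**2) / np.exp(-0.5 * (y - 3.0) ** 2)  # p(y)/q(y)
biased_walk = (y > 3.0) * w

for name, est in [("analog", analog), ("importance", biased_walk)]:
    print(f"{name:>10}: mean={est.mean():.2e}  var={est.var():.2e}")
```

Both sample means agree with the exact tail probability, but the importance-sampled estimator's per-sample variance is orders of magnitude smaller, which is exactly the payoff-preserving variance reduction the abstract describes.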
Analysis and application of minimum variance discrete time system identification
NASA Technical Reports Server (NTRS)
Kotob, S.; Kaufman, H.
1976-01-01
An on-line minimum variance parameter identifier was developed which embodies both accuracy and computational efficiency. The new formulation resulted in a linear estimation problem with both additive and multiplicative noise. The resulting filter is shown to utilize both the covariance of the parameter vector itself and the covariance of the error in identification. It is proven that the identification filter is mean-square convergent and mean-square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
Analysis and application of minimum variance discrete linear system identification
NASA Technical Reports Server (NTRS)
Kotob, S.; Kaufman, H.
1977-01-01
An on-line minimum variance (MV) parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise (AMN). The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean-square convergent and mean-square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
Two-dimensional finite-element temperature variance analysis
NASA Technical Reports Server (NTRS)
Heuser, J. S.
1972-01-01
The finite element method is extended to thermal analysis by forming a variance analysis of temperature results so that the sensitivity of predicted temperatures to uncertainties in input variables is determined. The temperature fields within a finite number of elements are described in terms of the temperatures of vertices, and the variational principle is used to minimize the integral equation describing thermal potential energy. A computer calculation yields the desired solution matrix of predicted temperatures and provides information about initial thermal parameters and their associated errors. Sample calculations show that all predicted temperatures are most affected by temperature values along fixed boundaries; more accurate specification of these temperatures reduces errors in thermal calculations.
Variances of Cylinder Parameters Fitted to Range Data
Franaszek, Marek
2012-01-01
Industrial pipelines are frequently scanned with 3D imaging systems (e.g., LADAR) and cylinders are fitted to the collected data. Then, the fitted as-built model is compared with the as-designed model. Meaningful comparison between the two models requires estimates of uncertainties of fitted model parameters. In this paper, the formulas for variances of cylinder parameters fitted with Nonlinear Least Squares to a point cloud acquired from one scanning position are derived. Two different error functions used in minimization are discussed: the orthogonal and the directional function. Derived formulas explain how some uncertainty components are propagated from measured ranges to fitted cylinder parameters. PMID:26900527
Multi-observable Uncertainty Relations in Product Form of Variances
NASA Astrophysics Data System (ADS)
Qin, Hui-Hui; Fei, Shao-Ming; Li-Jost, Xianqing
2016-08-01
We investigate product-form uncertainty relations of variances for n (n ≥ 3) quantum observables. In particular, tight uncertainty relations satisfied by three observables are derived and shown to be stronger than the ones derived from the strengthened Heisenberg and the generalized Schrödinger uncertainty relations, as well as some existing uncertainty relations for three spin-half operators. An uncertainty relation for an arbitrary number of observables is also derived. As an example, the uncertainty relation satisfied by the eight Gell-Mann matrices is presented.
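As a concrete baseline for such variance relations, the familiar two-observable Robertson bound Var(A)Var(B) ≥ |⟨[A,B]⟩|²/4, which the multi-observable product relations above strengthen, can be checked numerically for spin-half operators. A quick illustrative sketch with the Pauli matrices (random states, not any construction from the paper):

```python
import numpy as np

# Pauli matrices for a spin-half system; [sx, sy] = 2i sz.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def variance(op, psi):
    """Var(op) = <psi|op^2|psi> - <psi|op|psi>^2 for a normalized state."""
    mean = np.vdot(psi, op @ psi).real
    return np.vdot(psi, op @ op @ psi).real - mean**2

rng = np.random.default_rng(1)
for _ in range(1000):
    psi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    psi /= np.linalg.norm(psi)
    lhs = variance(sx, psi) * variance(sy, psi)
    # Robertson bound: Var(A)Var(B) >= |<[A,B]>|^2 / 4 = |<sz>|^2 here.
    rhs = abs(np.vdot(psi, sz @ psi)) ** 2
    assert lhs >= rhs - 1e-12
print("Robertson product bound holds for 1000 random pure states")
```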
NASA Astrophysics Data System (ADS)
Accomazzi, Alberto; Kurtz, M. J.; Henneken, E. A.; Grant, C. S.; Thompson, D.; Di Milia, G.; Luker, J.; Murray, S. S.
2013-01-01
The NASA Astrophysics Data System (ADS) has been working hard on updating its services and interfaces to better support our community's research needs. ADS Labs is a new interface built on the old tried-and-true ADS Abstract Databases, so all of ADS's content is available through it. In this presentation we highlight the new features that have been developed in ADS Labs over the last year: new recommendations, metrics, a citation tool and enhanced full-text search. ADS Labs has long been providing article-level recommendations based on keyword similarity, co-readership and co-citation analysis of its corpus. We have now introduced personal recommendations, which provide a list of articles to be considered based on an individual user's readership history. A new metrics interface provides a summary of the basic impact indicators for a list of records. These include the total and normalized number of papers, citations, reads, and downloads. Also included are some of the popular indices such as the h, g and i10 index. The citation helper tool allows one to submit a set of records and obtain a list of the top 10 papers which cite and/or are cited by papers in the original list (but which are not in it). The process closely resembles the network approach of establishing "friends of friends" via an analysis of the citation network. The full-text search service now covers more than 2.5 million documents, including all the major astronomy journals, as well as physics journals published by Springer, Elsevier, the American Physical Society, the American Geophysical Union, and all of the arXiv eprints. The full-text search interface allows users and librarians to dig deep and find words or phrases in the body of the indexed articles. ADS Labs is available at http://adslabs.org
Wang, Lianqi; Gilles, Luc; Ellerbroek, Brent
2011-06-20
The scientific utility of laser-guide-star-based multiconjugate adaptive optics systems depends upon high sky coverage. Previously we reported a high-fidelity sky coverage analysis of an ad hoc split tomography control algorithm and a postprocessing simulation technique. In this paper, we present the performance of a newer minimum variance split tomography algorithm, and we show that it brings a median improvement at zenith of 21 nm rms optical path difference error over the ad hoc split tomography control algorithm for our system, the Narrow Field Infrared Adaptive Optics System for the Thirty Meter Telescope. In order to make the comparison, we also validated our previously developed sky coverage postprocessing software using an integrated simulation of both high- (laser guide star) and low-order (natural guide star) loops. A new term in the noise model is also identified that improves the performance of both algorithms by more properly regularizing the reconstructor. PMID:21691367
The AdS central charge in string theory
NASA Astrophysics Data System (ADS)
Troost, Jan
2011-11-01
We evaluate the vacuum expectation value of the central charge operator in string theory in an AdS3 vacuum. Our calculation provides a rare non-zero one-point function on a spherical worldsheet. The evaluation involves the regularization both of a worldsheet ultraviolet divergence (associated to the infinite volume of the conformal Killing group), and a space-time infrared divergence (corresponding to the infinite volume of space-time). The two divergences conspire to give a finite result, which is the classical general relativity value for the central charge, corrected in bosonic string theory by an infinite series of tree level higher derivative terms.
Small black holes in global AdS spacetime
NASA Astrophysics Data System (ADS)
Jokela, Niko; Pönni, Arttu; Vuorinen, Aleksi
2016-04-01
We study the properties of two-point functions and quasinormal modes in a strongly coupled field theory holographically dual to a small black hole in global anti-de Sitter spacetime. Our results are seen to smoothly interpolate between known limits corresponding to large black holes and thermal AdS space, demonstrating that the Son-Starinets prescription works even when there is no black hole in the spacetime. Omitting issues related to the internal space, the results can be given a field theory interpretation in terms of the microcanonical ensemble, which provides access to energy densities forbidden in the canonical description.
Entanglement entropy and duality in AdS4
NASA Astrophysics Data System (ADS)
Bakas, Ioannis; Pastras, Georgios
2015-07-01
Small variations of the entanglement entropy δS and the expectation value of the modular Hamiltonian δE are computed holographically for circular entangling curves in the boundary of AdS4, using gravitational perturbations with general boundary conditions in spherical coordinates. Agreement with the first law of thermodynamics, δS = δE, requires that the line element of the entangling curve remains constant. In this context, we also find a manifestation of electric-magnetic duality for the entanglement entropy and the corresponding modular Hamiltonian, following from the holographic energy-momentum/Cotton tensor duality.
NASA Astrophysics Data System (ADS)
Belin, Alexandre; Castro, Alejandra; Hung, Ling-Yan
2015-11-01
We discuss properties of interpolating geometries in three dimensional gravity in the presence of a chiral anomaly. This anomaly, which introduces an unbalance between left and right central charges, is protected under RG flows. For this simple reason it is impossible to gap a system with such an anomaly. Our goal is to discuss how holography captures this basic and robust feature. We demonstrate the absence of a mass gap by analysing the linearized spectrum and holographic entanglement entropy of these backgrounds in the context of AdS3/CFT2.
Pure Spinors in AdS and Lie Algebra Cohomology
NASA Astrophysics Data System (ADS)
Mikhailov, Andrei
2014-10-01
We show that the BRST cohomology of the massless sector of the Type IIB superstring on AdS5 × S 5 can be described as the relative cohomology of an infinite-dimensional Lie superalgebra. We explain how the vertex operators of ghost number 1, which correspond to conserved currents, are described in this language. We also give some algebraic description of the ghost number 2 vertices, which appears to be new. We use this algebraic description to clarify the structure of the zero mode sector of the ghost number two states in flat space, and initiate the study of the vertices of the higher ghost number.
Internal structure of charged AdS black holes
NASA Astrophysics Data System (ADS)
Bhattacharjee, Srijit; Sarkar, Sudipta; Virmani, Amitabh
2016-06-01
When an electrically charged black hole is perturbed, its inner horizon becomes a singularity, often referred to as the Poisson-Israel mass inflation singularity. Ori constructed a model of this phenomenon for asymptotically flat black holes, in which the metric can be determined explicitly in the mass inflation region. In this paper we implement the Ori model for charged AdS black holes. We find that the mass function inflates faster than the flat space case as the inner horizon is approached. Nevertheless, the mass inflation singularity is still a weak singularity: Although spacetime curvature becomes infinite, tidal distortions remain finite on physical objects attempting to cross it.
NASA Astrophysics Data System (ADS)
Accomazzi, Alberto; Kurtz, M. J.; Henneken, E. A.; Grant, C. S.; Thompson, D.; Luker, J.; Chyla, R.; Murray, S. S.
2014-01-01
In the spring of 1993, the Smithsonian/NASA Astrophysics Data System (ADS) first launched its bibliographic search system. It was known then as the ADS Abstract Service, a component of the larger Astrophysics Data System effort which had developed an interoperable data system now seen as a precursor of the Virtual Observatory. As a result of the massive technological and sociological changes in the field of scholarly communication, the ADS is now completing the most ambitious technological upgrade in its twenty-year history. Code-named ADS 2.0, the new system features: an IT platform built on web and digital library standards; a new, extensible, industrial strength search engine; a public API with various access control capabilities; a set of applications supporting search, export, visualization, analysis; a collaborative, open source development model; and enhanced indexing of content which includes the full-text of astronomy and physics publications. The changes in the ADS platform affect all aspects of the system and its operations, including: the process through which data and metadata are harvested, curated and indexed; the interface and paradigm used for searching the database; and the follow-up analysis capabilities available to the users. This poster describes the choices behind the technical overhaul of the system, the technology stack used, and the opportunities which the upgrade is providing us with, namely gains in productivity and enhancements in our system capabilities.
Nowak, J; Hagerman, I; Ylén, M; Nyquist, O; Sylvén, C
1993-09-01
Variance electrocardiography (variance ECG) is a new resting procedure for detection of coronary artery disease (CAD). The method measures variability in the electrical expression of the depolarization phase induced by this disease. The time-domain analysis is performed on 220 cardiac cycles using high-fidelity ECG signals from 24 leads, and the phase-locked temporal electrical heterogeneity is expressed as a nondimensional CAD index (CAD-I) with values of 0-150. This study compares the diagnostic efficiency of variance ECG and the exercise stress test in a high-prevalence population. A total of 199 symptomatic patients evaluated with coronary angiography were subjected to variance ECG and an exercise test on a bicycle ergometer as a continuous ramp. The discriminant accuracy of the two methods was assessed employing receiver operating characteristic curves constructed by successive consideration of several CAD-I cutpoint values and various threshold criteria based on ST-segment depression exclusively or in combination with exertional chest pain. Of these patients, 175 with CAD (≥50% luminal stenosis in one or more major epicardial arteries) presented a mean CAD-I of 88 ± 22, compared with 70 ± 21 in 24 nonaffected patients (p < 0.01). Variance ECG provided a statistically significant discrimination (p < 0.01) which was matched by the exercise test only when the chest pain variable was added to ST-segment depression as a discriminating criterion. Even then, the exercise test diagnosed single-vessel disease with a significantly lower sensitivity. At a cutpoint of CAD-I ≥ 70, compared with ST-segment depression ≥ 1 mm combined with exertional chest pain, the overall sensitivity of variance ECG was significantly higher (p < 0.01) than that of the exercise test (79 vs. 48%). When combined, the two methods identified 93% of coronary angiography positive cases. Variance ECG is an efficient diagnostic method which compares favorably with exercise test for detection of
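The ROC construction used in this comparison is simple to reproduce on synthetic scores; a sketch using normal distributions with the group means and SDs quoted in the abstract (the individual scores are simulated for illustration, not the study data):

```python
import numpy as np

# Illustrative CAD-I scores: diseased group 88 +/- 22 (n=175),
# non-diseased group 70 +/- 21 (n=24), per the abstract's summary statistics.
rng = np.random.default_rng(6)
cad = rng.normal(88, 22, 175)
no_cad = rng.normal(70, 21, 24)

# Sweep cutpoints to trace the ROC curve: at each cutpoint, sensitivity is the
# fraction of diseased patients at or above it, specificity the fraction of
# non-diseased patients below it.
cuts = np.arange(40, 140, 5)
sens = [(cad >= c).mean() for c in cuts]
spec = [(no_cad < c).mean() for c in cuts]

i70 = list(cuts).index(70)
print(f"cutpoint 70: sensitivity={sens[i70]:.2f}, specificity={spec[i70]:.2f}")
```

With these group parameters the sensitivity at the CAD-I ≥ 70 cutpoint comes out near the 79% reported in the abstract.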
Recognition by variance: learning rules for spatiotemporal patterns.
Barak, Omri; Tsodyks, Misha
2006-10-01
Recognizing specific spatiotemporal patterns of activity, which take place at timescales much larger than the synaptic transmission and membrane time constants, is a demand from the nervous system exemplified, for instance, by auditory processing. We consider the total synaptic input that a single readout neuron receives on presentation of spatiotemporal spiking input patterns. Relying on the monotonic relation between the mean and the variance of a neuron's input current and its spiking output, we derive learning rules that increase the variance of the input current evoked by learned patterns relative to that obtained from random background patterns. We demonstrate that the model can successfully recognize a large number of patterns and exhibits a slow deterioration in performance with increasing number of learned patterns. In addition, robustness to time warping of the input patterns is revealed to be an emergent property of the model. Using a leaky integrate-and-fire realization of the readout neuron, we demonstrate that the above results also apply when considering spiking output. PMID:16907629
Influence of genetic variance on sodium sensitivity of blood pressure.
Luft, F C; Miller, J Z; Weinberger, M H; Grim, C E; Daugherty, S A; Christian, J C
1987-02-01
To examine the effect of genetic variance on blood pressure, sodium homeostasis, and its regulatory determinants, we studied 37 pairs of monozygotic twins and 18 pairs of dizygotic twins under conditions of volume expansion and contraction. We found that, in addition to blood pressure and body size, sodium excretion in response to provocative maneuvers, glomerular filtration rate, the renin-angiotensin system, and the sympathetic nervous system are influenced by genetic variance. To elucidate the interaction of genetic factors and an environmental influence, namely, salt intake, we restricted dietary sodium in 44 families of twin children. In addition to a modest decrease in blood pressure, we found heterogeneous responses in blood pressure indicative of sodium sensitivity and resistance which were normally distributed. Strong parent-offspring resemblances were found in baseline blood pressures which persisted when adjustments were made for age and weight. Further, mother-offspring resemblances were observed in the change in blood pressure with sodium restriction. We conclude that the control of sodium homeostasis is heritable and that the change in blood pressure with sodium restriction is familial as well. These data speak to the interaction between the genetic susceptibility to hypertension and environmental influences which may result in its expression. PMID:3553721
[The correlations between psychological indices and cardiac variance].
Nikolova, R; Danev, S; Amudzhev, P; Datsov, E
1995-01-01
Correlative links between physiologic and psychologic indicators were studied in subjects employed either in airline transportation or in the chemical industry. The investigation covered three groups: airline traffic managers (57 subjects), workers at the "Vratsa" chemical plant (14 subjects), and operators at the "Vratsa" chemical plant (14 subjects). The physiologic parameters measured included indicators of cardiac variance: mean (mean value of successive cardiac intervals), SD (standard deviation of the cardiac (R-R) intervals), AMo (amplitude of the mode), HI (homeostatic index), Pt (spectral power of R-R related to thermoregulation), Pp (spectral power of R-R related to respiration), and IBO (index of centralization); the psychologic parameters included extrovertiveness, introvertiveness, neuroticism, psychoticism, interpersonality conflicts, self-control, social support, self-confidence, work satisfaction, and psychosomatic complaints. There was evidence of significant and highly significant correlative links between the indicators of cardiac variance and the psychologic indicators. Certain relationships thus appear to exist between the physiologic and psychologic levels during lengthy stressful occupational exposure. PMID:8524754
Cosmic variance of the spectral index from mode coupling
Bramante, Joseph; Kumar, Jason; Nelson, Elliot; Shandera, Sarah
2013-11-01
We demonstrate that local, scale-dependent non-Gaussianity can generate cosmic variance uncertainty in the observed spectral index of primordial curvature perturbations. In a universe much larger than our current Hubble volume, locally unobservable long wavelength modes can induce a scale-dependence in the power spectrum of typical subvolumes, so that the observed spectral index varies at a cosmologically significant level (|Δn_s| ∼ O(0.04)). Similarly, we show that the observed bispectrum can have an induced scale dependence that varies about the global shape. If tensor modes are coupled to long wavelength modes of a second field, the locally observed tensor power and spectral index can also vary. All of these effects, which can be introduced in models where the observed non-Gaussianity is consistent with bounds from the Planck satellite, loosen the constraints that observations place on the parameters of theories of inflation with mode coupling. We suggest observational constraints that future measurements could aim for to close this window of cosmic variance uncertainty.
Concentration variance decay during magma mixing: a volcanic chronometer
NASA Astrophysics Data System (ADS)
Perugini, Diego; de Campos, Cristina P.; Petrelli, Maurizio; Dingwell, Donald B.
2015-09-01
The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process, and its decay (CVD) with time is an inevitable consequence of the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time series of high-temperature magma mixing experiments. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing - a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in the future in order to constrain typical "mixing to eruption" time lapses, such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest.
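The clock described here rests on an exponential decay of concentration variance, σ²(t) = σ₀²·exp(−kt). A hedged sketch of how a decay rate calibrated from a time series of mixing experiments would then be inverted for a mixing-to-eruption time (the rate, times, and variances below are invented for illustration, not the Campi Flegrei calibration):

```python
import numpy as np

# Synthetic CVD data: concentration variance decaying exponentially,
# sigma2(t) = s0 * exp(-k t), with 2% multiplicative measurement noise.
k_true, s0 = 0.12, 4.0          # illustrative: 1/min, (wt%)^2
t = np.linspace(0, 30, 16)      # minutes
rng = np.random.default_rng(2)
var_obs = s0 * np.exp(-k_true * t) * rng.normal(1.0, 0.02, t.size)

# Linear least squares on log-variance recovers the decay rate k ...
slope, intercept = np.polyfit(t, np.log(var_obs), 1)
k_hat = -slope

# ... and the elapsed mixing time of an erupted sample follows by inverting
# the decay law: t = ln(s0 / sigma2_sample) / k.
var_sample = 1.0                # measured variance of the erupted product
t_mix = np.log(np.exp(intercept) / var_sample) / k_hat
print(f"k = {k_hat:.3f} per minute, inferred mixing time = {t_mix:.1f} min")
```

The key design point is the one the abstract stresses: only the variance decay rate enters, so the inferred time does not depend on the (unknown) advective stirring history.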
PET image reconstruction: mean, variance, and optimal minimax criterion
NASA Astrophysics Data System (ADS)
Liu, Huafeng; Gao, Fei; Guo, Min; Xue, Liying; Nie, Jing; Shi, Pengcheng
2015-04-01
Given the noisy nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal minimax criterion. The proposed framework formulates the PET image reconstruction problem as a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors under maximal system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties in a complete way. The framework is then optimized by H∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms, which rely on statistical modeling of the measurement data or noise, the proposed joint estimation stands from the point of view of signal energies and can handle anything from imperfect statistical assumptions to no a priori statistical assumptions at all. The performance and accuracy of the reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small-animal PET scanner and real patient scans are also conducted to assess clinical potential.
Wavelet-Variance-Based Estimation for Composite Stochastic Processes.
Guerrier, Stéphane; Skaloud, Jan; Stebler, Yannick; Victoria-Feser, Maria-Pia
2013-09-01
This article presents a new estimation method for the parameters of a time series model. We consider here composite Gaussian processes that are the sum of independent Gaussian processes which, in turn, each explain an important aspect of the time series, as is the case in engineering and the natural sciences. The proposed estimation method offers an alternative to classical likelihood-based estimation that is straightforward to implement and often the only feasible estimation method with complex models. The estimator is obtained by optimizing a criterion based on a standardized distance between the sample wavelet variance (WV) estimates and the model-based WV. Indeed, the WV provides a decomposition of the process variance across different scales, so that it contains information about the different features of the stochastic model. We derive the asymptotic properties of the proposed estimator for inference and perform a simulation study comparing our estimator to the MLE and the LSE under different models. We also give sufficient conditions on composite models for our estimator to be consistent, conditions that are easy to verify. We use the new estimator to estimate the stochastic error parameters of the sum of three first-order Gauss-Markov processes by means of a sample of over 800,000 measurements issued from gyroscopes that compose inertial navigation systems. Supplementary materials for this article are available online. PMID:24174689
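The wavelet variance at the heart of this estimator is straightforward to compute. A minimal Haar-wavelet-variance sketch on a synthetic composite process (white noise plus a weak random walk; the composite model and constants are illustrative, and the full matching/optimization step of the paper's estimator is omitted):

```python
import numpy as np

def haar_wavelet_variance(x, scale):
    """Haar wavelet variance at the given scale: half the mean squared
    difference of adjacent non-overlapping block averages."""
    n = x.size // scale
    means = x[: n * scale].reshape(n, scale).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(3)
# Composite process: white noise (sigma = 1) plus a weak random walk.
wn = rng.standard_normal(2**18)
rw = 0.01 * np.cumsum(rng.standard_normal(2**18))
x = wn + rw

for j in range(6):
    scale = 2**j
    wv = haar_wavelet_variance(x, scale)
    # For pure white noise the theoretical WV is sigma^2 / scale; the random
    # walk term instead grows with scale, so the two components separate.
    print(f"scale {scale:2d}: WV = {wv:.4f}  (white-noise part = {1/scale:.4f})")
```

Because each component leaves a distinct signature across scales, matching sample WVs against model-implied WVs (as the article proposes) identifies the parameters of each constituent process.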
Minimum variance brain source localization for short data sequences.
Ravan, Maryam; Reilly, James P; Hasey, Gary
2014-02-01
In the electroencephalogram (EEG) or magnetoencephalogram (MEG) context, brain source localization methods that rely on estimating second-order statistics often fail when the number of samples of the recorded data sequences is small in comparison to the number of electrodes. This condition is particularly relevant when measuring evoked potentials. Due to the correlated background EEG/MEG signal, an adaptive approach to localization is desirable. Previous work has addressed these issues by reducing the adaptive degrees of freedom (DoFs). This reduction results in decreased resolution and accuracy of the estimated source configuration. This paper develops and tests a new multistage adaptive processing technique based on the minimum variance beamformer for brain source localization that has been previously used in the radar statistical signal processing context. This processing, referred to as the fast fully adaptive (FFA) approach, can significantly reduce the required sample support, while still preserving all available DoFs. To demonstrate the performance of the FFA approach in the limited data scenario, simulation and experimental results are compared with two previous beamforming approaches; i.e., the fully adaptive minimum variance beamforming method and the beamspace beamforming method. Both simulation and experimental results demonstrate that the FFA method can localize all types of brain activity more accurately than the other approaches with limited data. PMID:24108457
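The core shared by the minimum variance beamforming variants compared here is the MVDR weight vector w = R⁻¹a / (aᴴR⁻¹a), which passes the presumed source direction with unit gain while minimizing output variance from correlated background activity. A small self-contained sketch (the sensor count, steering vector, and covariance are invented for illustration, not an EEG forward model):

```python
import numpy as np

rng = np.random.default_rng(4)
m = 8                           # number of sensors (illustrative)
a = np.ones(m) / np.sqrt(m)     # presumed source forward (steering) vector

# Sample covariance of correlated background + diagonal loading for stability.
snaps = rng.standard_normal((m, 5000))
R = snaps @ snaps.T / 5000 + 0.1 * np.eye(m)

# MVDR weights: minimize w^T R w subject to the unit-gain constraint w^T a = 1.
Ri_a = np.linalg.solve(R, a)
w = Ri_a / (a @ Ri_a)

print("gain toward source:", w @ a)          # 1 by construction
print("adaptive output variance:", w @ R @ w)
print("conventional output variance:", a @ R @ a)
```

The output variance of the adaptive weights can never exceed that of the conventional (non-adaptive) beamformer using a itself, since a satisfies the same constraint; the data limitation discussed in the abstract enters through how well R can be estimated from few snapshots.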
Irreversible Langevin samplers and variance reduction: a large deviations approach
NASA Astrophysics Data System (ADS)
Rey-Bellet, Luc; Spiliopoulos, Konstantinos
2015-07-01
In order to sample from a given target distribution (often of Gibbs type), the Markov chain Monte Carlo method consists of constructing an ergodic Markov process whose invariant measure is the target distribution. By sampling the Markov process one can then compute, approximately, expectations of observables with respect to the target distribution. Often the Markov processes used in practice are time-reversible (i.e., they satisfy detailed balance), but our main goal here is to assess and quantify how the addition of a non-reversible part to the process can improve the sampling properties. We focus on the diffusion setting (overdamped Langevin equations) where the drift consists of a gradient vector field plus another drift which breaks the reversibility of the process but is chosen to preserve the Gibbs measure. In this paper we use the large deviation rate function for the empirical measure as a tool to analyze the speed of convergence to the invariant measure. We show that the addition of an irreversible drift leads to a larger rate function and strictly improves the speed of convergence of ergodic averages for (generic smooth) observables. We also deduce from this result that the asymptotic variance decreases under the addition of the irreversible drift, and we give an explicit characterization, in terms of a nonlinear Poisson equation, of the observables whose variance is not reduced. Our theoretical results are illustrated and supplemented by numerical simulations.
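A minimal sketch of the setup for a 2-D standard Gaussian target: adding a rotational (divergence-free) drift γJx breaks detailed balance while leaving the Gibbs measure invariant. The parameters, observable, and step sizes below are illustrative; only invariance of the target moments is checked here, not the variance-reduction theorem itself.

```python
import numpy as np

def sample_langevin(gamma, n_steps=200_000, dt=0.02, seed=1):
    """Euler-Maruyama for dX = (-X + gamma*J X) dt + sqrt(2) dW in 2-D.
    J generates rotations, so gamma*J*x is orthogonal to the gradient drift
    and divergence-free: it breaks reversibility but keeps the standard
    Gaussian exp(-|x|^2/2) invariant."""
    J = np.array([[0.0, 1.0], [-1.0, 0.0]])
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    traj = np.empty(n_steps)
    for k in range(n_steps):
        x = x + (-x + gamma * (J @ x)) * dt + np.sqrt(2.0 * dt) * rng.normal(size=2)
        traj[k] = x[0]                      # observable f(x) = x_1
    return traj

for gamma in (0.0, 2.0):                    # reversible vs irreversible dynamics
    t = sample_langevin(gamma)
    print(gamma, round(t.mean(), 2), round(t.var(), 2))
```

Both chains should report mean ≈ 0 and variance ≈ 1; the paper's result concerns the (smaller) asymptotic variance of time averages under the irreversible dynamics, which requires longer runs to estimate reliably.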
Cosmic variance of the spectral index from mode coupling
NASA Astrophysics Data System (ADS)
Bramante, Joseph; Kumar, Jason; Nelson, Elliot; Shandera, Sarah
2013-11-01
We demonstrate that local, scale-dependent non-Gaussianity can generate cosmic variance uncertainty in the observed spectral index of primordial curvature perturbations. In a universe much larger than our current Hubble volume, locally unobservable long wavelength modes can induce a scale-dependence in the power spectrum of typical subvolumes, so that the observed spectral index varies at a cosmologically significant level (|Δn_s| ~ O(0.04)). Similarly, we show that the observed bispectrum can have an induced scale dependence that varies about the global shape. If tensor modes are coupled to long wavelength modes of a second field, the locally observed tensor power and spectral index can also vary. All of these effects, which can be introduced in models where the observed non-Gaussianity is consistent with bounds from the Planck satellite, loosen the constraints that observations place on the parameters of theories of inflation with mode coupling. We suggest observational constraints that future measurements could aim for to close this window of cosmic variance uncertainty.
Concentration variance decay during magma mixing: a volcanic chronometer
Perugini, Diego; De Campos, Cristina P.; Petrelli, Maurizio; Dingwell, Donald B.
2015-01-01
The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing – a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical “mixing to eruption” time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest. PMID:26387555
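A hedged numerical sketch of how such a CVD chronometer works, with an invented decay rate and variances (not the paper's calibration): fit the exponential decay rate from time-series experiments, then invert it to date an observed variance.

```python
import numpy as np

# Synthetic "experiments": concentration variance decaying exponentially,
# sigma2(t) = sigma2_0 * exp(-R * t), with R a hypothetical CVD rate.
R_true, sigma2_0 = 0.12, 1.0                      # illustrative values only
t_exp = np.array([0.0, 5.0, 10.0, 20.0, 40.0])    # experiment durations, minutes
var_exp = sigma2_0 * np.exp(-R_true * t_exp)

# Calibrate the CVD rate by linear regression on log-variance ...
R_fit = -np.polyfit(t_exp, np.log(var_exp), 1)[0]

# ... then use it as a chronometer: the concentration variance measured in
# an erupted product dates the time elapsed since mixing began.
var_observed = 0.30
t_mixing_to_eruption = -np.log(var_observed / sigma2_0) / R_fit
print(round(R_fit, 3), round(t_mixing_to_eruption, 1))   # -> 0.12 10.0
```

The key property claimed by the abstract is that this estimate does not require knowing the advective stirring history, only the variance decay calibration.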
Hydraulic geometry of river cross sections; theory of minimum variance
Williams, Garnett P.
1978-01-01
This study deals with the rates at which mean velocity, mean depth, and water-surface width increase with water discharge at a cross section on an alluvial stream. Such relations often follow power laws, the exponents in which are called hydraulic exponents. The Langbein (1964) minimum-variance theory is examined in regard to its validity and its ability to predict observed hydraulic exponents. The variables used with the theory were velocity, depth, width, bed shear stress, friction factor, slope (energy gradient), and stream power. Slope is often constant, in which case only velocity, depth, width, shear and friction factor need be considered. The theory was tested against a wide range of field data from various geographic areas of the United States. The original theory was intended to produce only the average hydraulic exponents for a group of cross sections in a similar type of geologic or hydraulic environment. The theory does predict these average exponents with a reasonable degree of accuracy. An attempt to forecast the exponents at any selected cross section was moderately successful. Empirical equations are more accurate than the minimum variance, Gauckler-Manning, or Chezy methods. Predictions of the exponent of width are most reliable, the exponent of depth fair, and the exponent of mean velocity poor. (Woodard-USGS)
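A small illustration, with invented coefficients, of what the hydraulic exponents are and the continuity constraint (b + f + m = 1) that any predicted or fitted set must satisfy:

```python
import numpy as np

# At-a-station hydraulic geometry: v = k1*Q^m, d = k2*Q^f, w = k3*Q^b.
# Continuity (Q = v * d * w) forces m + f + b = 1. Exponents are invented.
m, f = 0.34, 0.40
Q = np.logspace(0, 3, 50)                        # discharge, arbitrary units
v = 1.2 * Q ** m                                 # mean velocity
d = 0.8 * Q ** f                                 # mean depth
w = Q / (v * d)                                  # width from continuity

def exponent(y, Q):
    """Hydraulic exponent = slope of log y versus log Q."""
    return np.polyfit(np.log(Q), np.log(y), 1)[0]

m_hat, f_hat, b_hat = exponent(v, Q), exponent(d, Q), exponent(w, Q)
print(round(m_hat + f_hat + b_hat, 6))           # -> 1.0
```

The minimum-variance theory evaluated in the paper predicts how the unit exponent budget is split among velocity, depth, and width; the log-log regression above is simply how observed exponents are extracted from field data.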
Stochastic Mixing Model with Power Law Decay of Variance
NASA Technical Reports Server (NTRS)
Fedotov, S.; Ihme, M.; Pitsch, H.
2003-01-01
Here we present a simple stochastic mixing model based on the law of large numbers (LLN). The reason the LLN enters our formulation of the mixing problem is that the random conserved scalar c = c(t,x(t)) appears to behave as a sample mean: it converges to the mean value mu, while the variance sigma_c^2(t) decays approximately as t^(-1). Since the variance of the scalar typically decays faster than that of a sample mean (the decay exponent is greater than unity), we introduce some non-linear modifications into the corresponding pdf-equation. The main idea is to develop a robust model that is independent of restrictive assumptions about the shape of the pdf. The remainder of this paper is organized as follows. In Section 2 we derive an integral equation from a stochastic difference equation describing the evolution of the pdf of a passive scalar in time. The stochastic difference equation introduces an exchange rate gamma_n, which we model in a first step as a deterministic function. In a second step, we generalize gamma_n to a stochastic variable, taking fluctuations in the inhomogeneous environment into account. In Section 3 we solve the non-linear integral equation numerically and analyze the influence of the different parameters on the decay rate. The paper finishes with a conclusion.
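For contrast, a minimal sketch of the classical IEM ("interaction by exchange with the mean") closure, a standard baseline mixing model rather than the paper's LLN-based model, in which the scalar variance decays exactly exponentially at the exchange rate:

```python
import numpy as np

# IEM closure: each notional particle relaxes toward the ensemble mean at
# rate gamma/2, so the scalar variance decays as exp(-gamma * t) while the
# mean is conserved. All numbers are illustrative.
rng = np.random.default_rng(42)
phi = rng.normal(0.0, 1.0, 10_000)       # passive scalar samples
gamma, dt, n_steps = 0.5, 0.01, 400

var0 = phi.var()
for _ in range(n_steps):
    phi += -0.5 * gamma * (phi - phi.mean()) * dt

ratio = phi.var() / var0
print(round(ratio, 3), round(np.exp(-gamma * n_steps * dt), 3))   # -> 0.135 0.135
```

Power-law variance decay of the kind the abstract describes (variance ~ t^(-alpha)) requires a time-dependent or random exchange rate, which is exactly the generalization the paper pursues.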
Implications and applications of the variance-based uncertainty equalities
NASA Astrophysics Data System (ADS)
Yao, Yao; Xiao, Xing; Wang, Xiaoguang; Sun, C. P.
2015-06-01
In quantum mechanics, the variance-based Heisenberg-type uncertainty relations are a series of mathematical inequalities posing fundamental limits on the achievable accuracy of state preparations. Here, in contrast, we construct and formulate two quantum uncertainty equalities, which hold for all pairs of incompatible observables and imply the new uncertainty relations recently introduced by L. Maccone and A. K. Pati [Phys. Rev. Lett. 113, 260401 (2014), 10.1103/PhysRevLett.113.260401]. In fact, we obtain a series of inequalities with a hierarchical structure, including the Maccone-Pati inequalities as a special (weakest) case. Furthermore, we present an explicit interpretation of the derivations and relate these relations to the so-called intelligent states. As an illustration, we investigate the properties of these uncertainty inequalities in the qubit system, and a state-independent bound is obtained for the sum of variances. Finally, we apply these inequalities to the spin-squeezing scenario, and their implication for interferometric sensitivity is also discussed.
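A quick numerical check of the kind of state-independent bound the abstract mentions for the qubit case: for any pure qubit state, Var(σx) + Var(σz) = 2 − ⟨σx⟩² − ⟨σz⟩² ≥ 1. This particular bound is standard Bloch-sphere algebra, not the paper's full hierarchy of equalities.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def variance(op, psi):
    """Var(op) = <op^2> - <op>^2 in the pure state psi."""
    mean = np.vdot(psi, op @ psi).real
    return np.vdot(psi, op @ (op @ psi)).real - mean ** 2

# Since <sx>^2 + <sy>^2 + <sz>^2 = 1 on the Bloch sphere, the sum of the
# two variances is 2 - <sx>^2 - <sz>^2 = 1 + <sy>^2 >= 1 for pure states.
rng = np.random.default_rng(7)
worst = np.inf
for _ in range(1000):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    worst = min(worst, variance(sx, psi) + variance(sz, psi))
print(worst >= 1.0 - 1e-12)              # -> True
```

Relations of the Maccone-Pati type sharpen such sums by adding state-dependent terms involving a state orthogonal to psi; the random sampling here only probes the weaker state-independent floor.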
Variance of the Quantum Dwell Time for a Nonrelativistic Particle
NASA Technical Reports Server (NTRS)
Hahne, Gerhard
2012-01-01
Munoz, Seidel, and Muga [Phys. Rev. A 79, 012108 (2009)], following an earlier proposal by Pollak and Miller [Phys. Rev. Lett. 53, 115 (1984)] in the context of a theory of a collinear chemical reaction, showed that suitable moments of a two-flux correlation function could be manipulated to yield expressions for the mean quantum dwell time and mean square quantum dwell time for a structureless particle scattering from a time-independent potential energy field between two parallel lines in a two-dimensional spacetime. The present work proposes a generalization to a charged, nonrelativistic particle scattering from a transient, spatially confined electromagnetic vector potential in four-dimensional spacetime. The geometry of the spacetime domain is that of the slab between a pair of parallel planes, in particular those defined by constant values of the third (z) spatial coordinate. The mean Nth power, N = 1, 2, 3, . . ., of the quantum dwell time in the slab is given by an expression involving an N-flux-correlation function. All these means are shown to be nonnegative. The N = 1 formula reduces to an S-matrix result published previously [G. E. Hahne, J. Phys. A 36, 7149 (2003)]; an explicit formula for N = 2, and hence for the variance of the dwell time in terms of the S-matrix, is worked out. A formula representing an incommensurability principle between variances of the output-minus-input flux of a pair of dynamical variables (such as the particle's time flux and others) is derived.
Cosmic Variance in the Nanohertz Gravitational Wave Background
NASA Astrophysics Data System (ADS)
Roebber, Elinore; Holder, Gilbert; Holz, Daniel E.; Warren, Michael
2016-03-01
We use large N-body simulations and empirical scaling relations between dark matter halos, galaxies, and supermassive black holes (SMBHs) to estimate the formation rates of SMBH binaries and the resulting low-frequency stochastic gravitational wave background (GWB). We find this GWB to be relatively insensitive (≲ 10%) to cosmological parameters, with only slight variation between WMAP5 and Planck cosmologies. We find that uncertainty in the astrophysical scaling relations changes the amplitude of the GWB by a factor of ∼2. Current observational limits are already constraining this predicted range of models. We investigate the Poisson variance in the amplitude of the GWB for randomly generated populations of SMBH binaries, finding a scatter of order unity per frequency bin below 10 nHz, increasing to a factor of ∼10 near 100 nHz. This variance is a result of the rarity of the most massive binaries, which dominate the signal, and acts as a fundamental uncertainty on the amplitude of the underlying power-law spectrum. This Poisson uncertainty dominates at ≳ 20 nHz, while at lower frequencies the dominant uncertainty is related to our poor understanding of the astrophysical scaling relations, although very low frequencies may be dominated by uncertainties related to the final parsec problem and the processes which drive binaries to the gravitational-wave-dominated regime. Cosmological effects are negligible at all frequencies.
VARIANCE ESTIMATION IN DOMAIN DECOMPOSED MONTE CARLO EIGENVALUE CALCULATIONS
Mervin, Brenden T; Maldonado, G. Ivan; Mosher, Scott W; Evans, Thomas M; Wagner, John C
2012-01-01
The number of tallies performed in a given Monte Carlo calculation is limited in most modern Monte Carlo codes by the amount of memory that can be allocated on a single processor. By using domain decomposition, the calculation is instead limited by the total amount of memory available on all processors, allowing significantly more tallies to be performed. However, decomposing the problem geometry introduces significant issues with the way tally statistics are conventionally calculated. In order to calculate tally variances in domain-decomposed environments for the Shift hybrid Monte Carlo code, this paper presents an alternative approach for reactor scenarios, in which it is assumed that once a particle leaves a domain, it does not reenter it. Particles that do reenter the domain are instead treated as separate, independent histories. This assumption introduces a bias that inevitably leads to under-prediction of the calculated variances for tallies within a few mean free paths of the domain boundaries. However, through the use of different decomposition strategies, primarily overlapping domains, the negative effects of this assumption can be reduced to acceptable levels.
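The source of the under-prediction can be seen in a toy calculation: if a re-entrant particle's two scores are positively correlated, treating them as independent histories drops the covariance term from the per-history variance. The score model below is invented for illustration and is not the Shift tally machinery.

```python
import numpy as np

# One "true" history deposits score a before leaving the domain and a
# correlated score b when it re-enters. The correct per-history variance is
# Var(a + b) = Var(a) + Var(b) + 2*Cov(a, b); the independent-histories
# approximation keeps only the first two terms.
rng = np.random.default_rng(3)
n = 100_000
a = rng.exponential(1.0, n)
b = 0.8 * a + 0.2 * rng.exponential(1.0, n)    # re-entrant score, Cov(a, b) > 0

true_var = np.var(a + b)                       # correct tally variance
naive_var = np.var(a) + np.var(b)              # "independent histories" assumption
print(naive_var < true_var)                    # -> True: variance under-predicted
```

Overlapping domains reduce the bias because fewer boundary crossings get split into artificial histories, shrinking the missing covariance term.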
Variational Study of SU(3) Gauge Theory by Stationary Variance
NASA Astrophysics Data System (ADS)
Siringo, Fabio
2015-07-01
The principle of stationary variance is advocated as a viable variational approach to gauge theories. The method can be regarded as a second-order extension of the Gaussian Effective Potential (GEP) and seems to be suited for describing the strong-coupling limit of non-Abelian gauge theories. The single variational parameter of the GEP is replaced by trial unknown two-point functions, with infinite variational parameters to be optimized by the solution of a set of coupled integral equations. The stationary conditions can be easily derived by the self-energy, without having to write the effective potential, making use of a general relation between self-energy and functional derivatives that has been proven to any order. The low-energy limit of pure Yang-Mills SU(3) gauge theory has been studied in Feynman gauge, and the stationary equations are written as integral equations for the gluon and ghost propagators. A physically sensible solution is found for any strength of the coupling. The gluon propagator is finite in the infrared, with a dynamical mass that decreases as a power at high energies. At variance with some recent findings in Feynman gauge, the ghost dressing function does not vanish in the infrared limit and a decoupling scenario emerges as recently reported for the Landau gauge.
Argentine Population Genetic Structure: Large Variance in Amerindian Contribution
Seldin, Michael F.; Tian, Chao; Shigeta, Russell; Scherbarth, Hugo R.; Silva, Gabriel; Belmont, John W.; Kittles, Rick; Gamron, Susana; Allevi, Alberto; Palatnik, Simon A.; Alvarellos, Alejandro; Paira, Sergio; Caprarulo, Cesar; Guillerón, Carolina; Catoggio, Luis J.; Prigione, Cristina; Berbotto, Guillermo A.; García, Mercedes A.; Perandones, Carlos E.; Pons-Estel, Bernardo A.; Alarcon-Riquelme, Marta E.
2011-01-01
Argentine population genetic structure was examined using a set of 78 ancestry informative markers (AIMs) to assess the contributions of European, Amerindian, and African ancestry in 94 individual members of this population. Using the Bayesian clustering algorithm STRUCTURE, the mean European contribution was 78%, the Amerindian contribution 19.4%, and the African contribution 2.5%. Similar results were found using the weighted least mean square method: European, 80.2%; Amerindian, 18.1%; and African, 1.7%. Consistent with previous studies, the current results showed very few individuals (four of 94) with greater than 10% African admixture. Notably, when individual admixture was examined, the Amerindian and European admixture showed a very large variance, and the individual Amerindian contribution ranged from 1.5% to 84.5% in the 94 individual Argentine subjects. These results indicate that admixture must be considered when clinical epidemiology or case-control genetic analyses are studied in this population. Moreover, the current study provides a set of informative SNPs that can be used to ascertain or control for this potentially hidden stratification. In addition, the large variance in admixture proportions in individual Argentine subjects shown by this study suggests that this population is appropriate for future admixture mapping studies. PMID:17177183
Critical gravity on AdS2 spacetimes
NASA Astrophysics Data System (ADS)
Myung, Yun Soo; Kim, Yong-Wan; Park, Young-Jai
2011-09-01
We study the critical gravity in two-dimensional anti-de Sitter (AdS2) spacetimes, which was obtained from the cosmological topologically massive gravity (TMGΛ) in three dimensions by using the Kaluza-Klein dimensional reduction. We perform the perturbation analysis around AdS2, which may correspond to the near-horizon geometry of the extremal Banados, Teitelboim, and Zanelli (BTZ) black hole obtained from the TMGΛ with identification upon uplifting three dimensions. A massive propagating scalar mode δF satisfies the second-order differential equation away from the critical point of K=l, whose solution is given by the Bessel functions. On the other hand, δF satisfies the fourth-order equation at the critical point. We exactly solve the fourth-order equation, and compare it with the log gravity in two dimensions. Consequently, the critical gravity in two dimensions could not be described by a massless scalar δF_ml and its logarithmic partner δF_log^4th.
Conserved charges in timelike warped AdS3 spaces
NASA Astrophysics Data System (ADS)
Donnay, L.; Fernández-Melgarejo, J. J.; Giribet, G.; Goya, A.; Lavia, E.
2015-06-01
We consider the timelike version of warped anti-de Sitter space (WAdS), which corresponds to the three-dimensional section of the Gödel solution of four-dimensional cosmological Einstein equations. This geometry presents closed timelike curves (CTCs), which are inherited from its four-dimensional embedding. In three dimensions, this type of solution can be supported without matter provided the graviton acquires mass. Here, among the different ways to consistently give mass to the graviton in three dimensions, we consider the parity-even model known as new massive gravity (NMG). In the bulk of timelike WAdS3 space, we introduce defects that, from the three-dimensional point of view, represent spinning massive particlelike objects. For this type of source, we investigate the definition of quasilocal gravitational energy as seen from infinity, far beyond the region where the CTCs appear. We also consider the covariant formalism applied to NMG to compute the mass and the angular momentum of spinning particlelike defects and compare the result with the one obtained by means of the quasilocal stress tensor. We apply these methods to special limits in which the WAdS3 solutions coincide with locally AdS3 and locally AdS2×R spaces. Finally, we make some comments about the asymptotic symmetry algebra of asymptotically WAdS3 spaces in NMG.
Primordial fluctuations from complex AdS saddle points
NASA Astrophysics Data System (ADS)
Hertog, Thomas; van der Woerd, Ellen
2016-02-01
One proposal for dS/CFT is that the Hartle-Hawking (HH) wave function in the large volume limit is equal to the partition function of a Euclidean CFT deformed by various operators. All saddle points defining the semiclassical HH wave function in cosmology have a representation in which their interior geometry is part of a Euclidean AdS domain wall with complex matter fields. We compute the wave functions of scalar and tensor perturbations around homogeneous isotropic complex saddle points, turning on single scalar field matter only. We compare their predictions for the spectra of CMB perturbations with those of a different dS/CFT proposal based on the analytic continuation of inflationary universes to real asymptotically AdS domain walls. We find the predictions of both bulk calculations agree to first order in the slow roll parameters, but there is a difference at higher order which, we argue, is a signature of the HH state of the fluctuations.
Greig, Jenny A; Buckley, Suzanne Mk; Waddington, Simon N; Parker, Alan L; Bhella, David; Pink, Rebecca; Rahim, Ahad A; Morita, Takashi; Nicklin, Stuart A; McVey, John H; Baker, Andrew H
2009-10-01
The binding of coagulation factor X (FX) to the hexon of adenovirus (Ad) 5 is pivotal for hepatocyte transduction. However, vectors based on Ad35, a subspecies B Ad, are in development for cancer gene therapy, as Ad35 utilizes CD46 (which is upregulated in many cancers) for transduction. We investigated whether interaction of Ad35 with FX influenced vector tropism using Ad5, Ad35, and Ad5/Ad35 chimeras: Ad5/fiber(f)35, Ad5/penton(p)35/f35, and Ad35/f5. Surface plasmon resonance (SPR) revealed that Ad35 and Ad35/f5 bound FX with approximately tenfold lower affinities than Ad5 hexon-containing viruses, and electron cryomicroscopy (cryo-EM) demonstrated a direct Ad35 hexon:FX interaction. The presence of physiological levels of FX significantly inhibited transduction of vectors containing Ad35 fibers (Ad5/f35, Ad5/p35/f35, and Ad35) in CD46-positive cells. Vectors were intravenously administered to CD46 transgenic mice in the presence and absence of FX-binding protein (X-bp), resulting in reduced liver accumulation for all vectors. Moreover, Ad5/f35 and Ad5/p35/f35 efficiently accumulated in the lung, whereas Ad5 demonstrated poor lung targeting. Additionally, X-bp significantly reduced lung genome accumulation for Ad5/f35 and Ad5/p35/f35, whereas Ad35 was significantly enhanced. In summary, vectors based on the full Ad35 serotype will be useful vectors for selective gene transfer via CD46 due to a weaker FX interaction compared to Ad5. PMID:19603000
Constructing Dense Graphs with Unique Hamiltonian Cycles
ERIC Educational Resources Information Center
Lynch, Mark A. M.
2012-01-01
It is not difficult to construct dense graphs containing Hamiltonian cycles, but it is difficult to generate dense graphs that are guaranteed to contain a unique Hamiltonian cycle. This article presents an algorithm for generating arbitrarily large simple graphs containing "unique" Hamiltonian cycles. These graphs can be turned into dense graphs…
Teaching and Learning with Individually Unique Exercises
ERIC Educational Resources Information Center
Joerding, Wayne
2010-01-01
In this article, the author describes the pedagogical benefits of giving students individually unique homework exercises from an exercise template. Evidence from a test of this approach shows statistically significant improvements in subsequent exam performance by students receiving unique problems compared with students who received traditional…
40 CFR 142.301 - What is a small system variance?
Code of Federal Regulations, 2010 CFR
2010-07-01
....301 Section 142.301 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System... issuance of variances from the requirement to comply with a maximum contaminant level or...
40 CFR 142.301 - What is a small system variance?
Code of Federal Regulations, 2014 CFR
2014-07-01
....301 Section 142.301 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System... issuance of variances from the requirement to comply with a maximum contaminant level or...
40 CFR 142.301 - What is a small system variance?
Code of Federal Regulations, 2012 CFR
2012-07-01
....301 Section 142.301 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System... issuance of variances from the requirement to comply with a maximum contaminant level or...
40 CFR 142.301 - What is a small system variance?
Code of Federal Regulations, 2013 CFR
2013-07-01
....301 Section 142.301 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System... issuance of variances from the requirement to comply with a maximum contaminant level or...
40 CFR 142.301 - What is a small system variance?
Code of Federal Regulations, 2011 CFR
2011-07-01
....301 Section 142.301 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System... issuance of variances from the requirement to comply with a maximum contaminant level or...
Chambers, D W
1998-01-01
Dentists and many staff enjoy characteristics of work associated with high levels of satisfaction and performance. Although value can be added to oral health care professionals' jobs through enlargement, enrichment, rotations, and autonomous work groups, there are limits to these techniques. Controlling work performance by means of rewards is risky. Probably the most effective means of adding value to jobs is through the Quality of Work Life approach, concentrating on job design and placement to make work meaningful and autonomous and to provide feedback. PMID:9697373
Understanding the influence of watershed storage caused by human interferences on ET variance
NASA Astrophysics Data System (ADS)
Zeng, R.; Cai, X.
2014-12-01
Understanding the temporal variance of evapotranspiration (ET) at the watershed scale remains a challenging task, because it is affected by complex climate conditions, soil properties, vegetation, groundwater and human activities. In a changing environment with extensive and intensive human interferences, understanding ET variance and its factors is important for sustainable water resources management. This study presents an analysis of the effect of storage change caused by human activities on ET variance. Irrigation usually filters ET variance through the use of surface water and groundwater; however, excessive irrigation may deplete watershed storage, which changes the coincidence of water availability and energy supply for ET. This study develops a framework by incorporating the water balance and the Budyko hypothesis. It decomposes ET variance into the variances of precipitation, potential ET, and catchment storage change, and their covariances. The contributions to ET variance from the various components are scaled by weighting functions expressed in terms of long-term climate conditions and catchment properties. ET variance is assessed using records from 32 major river basins across the world. It is found that ET variance is dominated by precipitation variance under hot-dry conditions and by evaporative-demand variance under cool-wet conditions, while the coincidence of water and energy supply controls ET variance under moderate climate conditions. Watershed storage change plays an increasingly important role in determining ET variance at relatively short time scales. By incorporating storage change caused by human interferences, this framework corrects the over-estimation of ET variance in hot-dry climates and the under-estimation of ET variance in cool-wet climates. Furthermore, classification of the dominant factors of ET variance shows patterns similar to geographic zonation.
NASA Astrophysics Data System (ADS)
Zeng, Ruijie; Cai, Ximing
2015-05-01
Understanding the temporal variance of evapotranspiration (ET) at the catchment scale remains a challenging task, because ET variance results from the complex interactions among climate, soil, vegetation, groundwater and human activities. This study extends the framework for ET variance analysis of Koster and Suarez (1999) by incorporating the water balance and the Budyko hypothesis. ET variance is decomposed into the variance/covariance of precipitation, potential ET, and catchment storage change. The contributions to ET variance from those components are quantified by long-term climate conditions (i.e., precipitation and potential ET) and catchment properties through the Budyko equation. It is found that climate determines ET variance under cool-wet, hot-dry and hot-wet conditions, while catchment storage change and climate together control ET variance under cool-dry conditions. Thus the major factors of ET variance can be categorized based on the conditions of climate and catchment storage change. To demonstrate the analysis, both the inter- and intra-annual ET variances are assessed in the Murray-Darling Basin, and it is found that the framework corrects the over-estimation of ET variance in this arid basin. This study provides an extended theoretical framework to assess ET temporal variance under the impacts of both climate and storage change at the catchment scale.
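A sketch of the climate-dependence result using the original Budyko curve with synthetic forcing (the catchment numbers are invented): in a water-limited (hot-dry) setting ET variance tracks precipitation, while in an energy-limited (cool-wet) setting it tracks potential ET.

```python
import numpy as np

def budyko_et(P, PET):
    """Original Budyko curve: E = P * sqrt(x * tanh(1/x) * (1 - exp(-x))), x = PET/P."""
    x = PET / P
    return P * np.sqrt(x * np.tanh(1.0 / x) * (1.0 - np.exp(-x)))

rng = np.random.default_rng(11)
n = 500                                            # synthetic "years"

# Water-limited (hot-dry) catchment: PET >> P, so E tracks P (E -> P as x grows).
P_dry, PET_dry = rng.uniform(200, 400, n), rng.uniform(1200, 1800, n)
# Energy-limited (cool-wet) catchment: P >> PET, so E tracks PET (E -> PET as x -> 0).
P_wet, PET_wet = rng.uniform(1200, 1800, n), rng.uniform(200, 400, n)

E_dry, E_wet = budyko_et(P_dry, PET_dry), budyko_et(P_wet, PET_wet)
print(np.corrcoef(E_dry, P_dry)[0, 1] > 0.9,       # precipitation-dominated
      np.corrcoef(E_wet, PET_wet)[0, 1] > 0.9)     # demand-dominated
```

The paper's framework adds catchment storage change to this picture, which matters most in the cool-dry regime where neither asymptote of the curve applies cleanly.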
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2011-01-01
Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
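For reference, the Welch statistic that such sample-size formulas target can be sketched as follows. This is a textbook implementation written for illustration, not code from the study:

```python
import numpy as np

def welch_anova_stat(groups):
    """Welch's one-way ANOVA statistic for groups with unequal variances.

    groups: list of 1-D sequences of observations.
    Returns (W, df1, df2) for comparison against an F(df1, df2) quantile.
    """
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                        # precision weights n_i / s_i^2
    mw = np.sum(w * m) / np.sum(w)   # variance-weighted grand mean
    a = np.sum(w * (m - mw) ** 2) / (k - 1)
    tmp = np.sum((1.0 - w / np.sum(w)) ** 2 / (n - 1))
    b = 1.0 + 2.0 * (k - 2) / (k ** 2 - 1) * tmp
    df2 = (k ** 2 - 1) / (3.0 * tmp)
    return a / b, k - 1, df2
```

With identical group means the numerator vanishes, so `welch_anova_stat` returns a statistic of zero regardless of the variance heterogeneity.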
40 CFR 142.302 - Who can issue a small system variance?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Who can issue a small system variance? 142.302 Section 142.302 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... General Provisions § 142.302 Who can issue a small system variance? A small system variance under...
40 CFR 142.305 - When can a small system variance be granted by a State?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false When can a small system variance be... (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System General Provisions § 142.305 When can a small system variance be granted by a...
40 CFR 124.63 - Procedures for variances when EPA is the permitting authority.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 21 2010-07-01 2010-07-01 false Procedures for variances when EPA is... Permits § 124.63 Procedures for variances when EPA is the permitting authority. (a) In States where EPA is the permit issuing authority and a request for a variance is filed as required by § 122.21,...
31 CFR 15.737-16 - Proof; variance; amendment of pleadings.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Proof; variance; amendment of... POST EMPLOYMENT CONFLICT OF INTEREST Administrative Enforcement Proceedings § 15.737-16 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in a pleading and the...
40 CFR 142.44 - Public hearings on variances and schedules.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Public hearings on variances and... PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.44 Public hearings on variances and schedules. (a) Before...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 3 2010-07-01 2010-07-01 false Variances for delay in contemporaneous... REQUIREMENTS FOR PERMITS FOR SPECIAL CATEGORIES OF MINING § 785.18 Variances for delay in contemporaneous... mining activities where a variance is requested from the contemporaneous reclamation requirements...
42 CFR 456.524 - Notification of Administrator's action and duration of variance.
Code of Federal Regulations, 2010 CFR
2010-10-01
... of variance. 456.524 Section 456.524 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES... Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time Requirements § 456.524 Notification of Administrator's action and duration...
29 CFR 1905.6 - Public notice of a granted variance, limitation, variation, tolerance, or exemption.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 5 2010-07-01 2010-07-01 false Public notice of a granted variance, limitation, variation... SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RULES OF PRACTICE FOR VARIANCES, LIMITATIONS... General § 1905.6 Public notice of a granted variance, limitation, variation, tolerance, or...
29 CFR 1952.9 - Variances affecting multi-state employers.
Code of Federal Regulations, 2010 CFR
2010-07-01
... elect to apply to the Assistant Secretary for such variance under the provisions of 29 CFR part 1905, as... 29 Labor 9 2010-07-01 2010-07-01 false Variances affecting multi-state employers. 1952.9 Section... and Conditions § 1952.9 Variances affecting multi-state employers. (a) Where a State standard...
Modeling Heterogeneous Variance-Covariance Components in Two-Level Models
ERIC Educational Resources Information Center
Leckie, George; French, Robert; Charlton, Chris; Browne, William
2014-01-01
Applications of multilevel models to continuous outcomes nearly always assume constant residual variance and constant random effects variances and covariances. However, modeling heterogeneity of variance can prove a useful indicator of model misspecification, and in some educational and behavioral studies, it may even be of direct substantive…
29 CFR 1926.2 - Variances from safety and health standards.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., DEPARTMENT OF LABOR (CONTINUED) SAFETY AND HEALTH REGULATIONS FOR CONSTRUCTION General § 1926.2 Variances... for variances under Williams-Steiger Occupational Safety and Health Act with respect to construction safety or health standards shall be considered to be also variances under the Construction Safety...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-15
... Variance of License Article 403 and Soliciting Comments, Motions to Intervene and Protests Take notice that... inspection: a. Application Type: Extension of temporary variance of license article 403. b. Project No: 12514... Commission to grant an extension of time to a temporary variance of license Article 403 that was granted...
Latent Variable Models of Need for Uniqueness.
Tepper, K; Hoyle, R H
1996-10-01
The theory of uniqueness has been invoked to explain attitudinal and behavioral nonconformity with respect to peer-group, social-cultural, and statistical norms, as well as the development of a distinctive view of self via seeking novelty goods, adopting new products, acquiring scarce commodities, and amassing material possessions. Present research endeavors in psychology and consumer behavior are inhibited by uncertainty regarding the psychometric properties of the Need for Uniqueness Scale, the primary instrument for measuring individual differences in uniqueness motivation. In an important step toward facilitating research on uniqueness motivation, we used confirmatory factor analysis to evaluate three a priori latent variable models of responses to the Need for Uniqueness Scale. Among the a priori models, an oblique three-factor model best accounted for commonality among items. Exploratory factor analysis followed by estimation of unrestricted three- and four-factor models revealed that a model with a complex pattern of loadings on four modestly correlated factors may best explain the latent structure of the Need for Uniqueness Scale. Additional analyses evaluated the associations among the three a priori factors and an array of individual differences. Results of those analyses indicated the need to distinguish among facets of the uniqueness motive in behavioral research. PMID:26788594
Higher-derivative superparticle in AdS3 space
NASA Astrophysics Data System (ADS)
Kozyrev, Nikolay; Krivonos, Sergey; Lechtenfeld, Olaf
2016-03-01
Employing the coset approach we construct component actions for a superparticle moving in AdS3 with N=(2,0), D=3 supersymmetry partially broken to N=2, d=1. These actions may contain higher time-derivative terms, which are chosen to possess the same (super)symmetries as the free superparticle. In terms of the nonlinear-realization superfields, the component actions always take a simpler form when written in terms of covariant Cartan forms. We also consider in detail the reduction to the nonrelativistic case and construct the corresponding action of a Newton-Hooke superparticle and its higher-derivative generalizations. The structure of these higher time-derivative generalizations is completely fixed by invariance under the supersymmetric Newton-Hooke algebra extended by two central charges.
Aspects of warped AdS3/CFT2 correspondence
NASA Astrophysics Data System (ADS)
Chen, Bin; Zhang, Jia-Ju; Zhang, Jian-Dong; Zhong, De-Liang
2013-04-01
In this paper we apply the thermodynamics method to investigate the holographic pictures for the BTZ black hole, the spacelike and the null warped black holes in three-dimensional topologically massive gravity (TMG) and new massive gravity (NMG). Even though there are higher derivative terms in these theories, the thermodynamics method is still effective. It gives results consistent with the ones obtained by using asymptotic symmetry group (ASG) analysis. In doing the ASG analysis we develop a brute-force realization of the Barnich-Brandt-Compere formalism in Mathematica code, which also allows us to calculate the masses and the angular momenta of the black holes. In particular, we propose the warped AdS3/CFT2 correspondence in the new massive gravity, which states that quantum gravity in the warped spacetime could be holographically dual to a two-dimensional CFT with c_R = c_L = 24/(G m β² √(2(21 − 4β²))).
Thermodynamics of charged Lovelock: AdS black holes
NASA Astrophysics Data System (ADS)
Prasobh, C. B.; Suresh, Jishnu; Kuriakose, V. C.
2016-04-01
We investigate the thermodynamic behavior of maximally symmetric charged, asymptotically AdS black hole solutions of Lovelock gravity. We explore the thermodynamic stability of such solutions by the ordinary method of calculating the specific heat of the black holes and investigating its divergences which signal second-order phase transitions between black hole states. We then utilize the methods of thermodynamic geometry of black hole spacetimes in order to explain the origin of these points of divergence. We calculate the curvature scalar corresponding to a Legendre-invariant thermodynamic metric of these spacetimes and find that the divergences in the black hole specific heat correspond to singularities in the thermodynamic phase space. We also calculate the area spectrum for large black holes in the model by applying the Bohr-Sommerfeld quantization to the adiabatic invariant calculated for the spacetime.
Vortex hair on AdS black holes
NASA Astrophysics Data System (ADS)
Gregory, Ruth; Gustainis, Peter C.; Kubizňák, David; Mann, Robert B.; Wills, Danielle
2014-11-01
We analyse vortex hair for charged rotating asymptotically AdS black holes in the abelian Higgs model. We give analytical and numerical arguments to show how the vortex interacts with the horizon of the black hole, and how the solution extends to the boundary. The solution is very close to the corresponding asymptotically flat vortex, once one transforms to a frame that is non-rotating at the boundary. We show that there is a Meissner effect for extremal black holes, with the vortex flux being expelled from sufficiently small black holes. The phase transition is shown to be first order in the presence of rotation, but second order without rotation. We comment on applications to holography.
An investigation of AdS2 backreaction and holography
NASA Astrophysics Data System (ADS)
Engelsöy, Julius; Mertens, Thomas G.; Verlinde, Herman
2016-07-01
We investigate a dilaton gravity model in AdS2 proposed by Almheiri and Polchinski [1] and develop a 1d effective description in terms of a dynamical boundary time with a Schwarzian derivative action. We show that the effective model is equivalent to a 1d version of Liouville theory, and investigate its dynamics and symmetries via a standard canonical framework. We include the coupling to arbitrary conformal matter and analyze the effective action in the presence of possible sources. We compute commutators of local operators at large time separation, and match the result with the time shift due to a gravitational shockwave interaction. We study a black hole evaporation process and comment on the role of entropy in this model.
Estimating discharge measurement uncertainty using the interpolated variance estimator
Cohn, T.; Kiang, J.; Mason, R., Jr.
2012-01-01
Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
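The velocity-area measurements that the IVE operates on are typically computed with the midsection method; a generic sketch of that computation follows (standard hydrometry, not the IVE itself; names and values are illustrative):

```python
import numpy as np

def midsection_discharge(positions, depths, velocities):
    """Total discharge Q = sum(v_i * d_i * w_i) via the midsection method.

    positions: station distances across the channel cross section (m)
    depths: measured depth at each vertical (m)
    velocities: mean velocity at each vertical (m/s)
    """
    x = np.asarray(positions, dtype=float)
    d = np.asarray(depths, dtype=float)
    v = np.asarray(velocities, dtype=float)
    # Each vertical is assigned a width reaching halfway to its neighbours.
    edges = np.concatenate(([x[0]], (x[:-1] + x[1:]) / 2.0, [x[-1]]))
    widths = np.diff(edges)
    return float(np.sum(v * d * widths))
```

The IVE then estimates the uncertainty of Q by interpolating the variance of the point velocity and depth observations collected at these same verticals.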
Estimation of measurement variance in the context of environment statistics
NASA Astrophysics Data System (ADS)
Maiti, Pulakesh
2015-02-01
The object of environment statistics is to provide information on the environment, on its most important changes over time and across locations, and to identify the main factors that influence them. Ultimately, environment statistics is required to produce higher-quality statistical information, for which timely, reliable and comparable data are needed. The lack of proper, uniform definitions and unambiguous classifications poses serious problems for procuring good-quality data, and these shortcomings cause measurement errors. We consider the problem of estimating measurement variance so that measures may be adopted to improve the quality of data on environmental goods and services and on value statements in economic terms. The measurement technique considered here employs personal interviewers, and the sampling design considered is two-stage sampling.
Linear minimum variance filters applied to carrier tracking
NASA Technical Reports Server (NTRS)
Gustafson, D. E.; Speyer, J. L.
1976-01-01
A new approach is taken to the problem of tracking a fixed amplitude signal with a Brownian-motion phase process. Classically, a first-order phase-lock loop (PLL) is used; here, the problem is treated via estimation of the quadrature signal components. In this space, the state dynamics are linear with white multiplicative noise. Therefore, linear minimum-variance filters, which have a particularly simple mechanization, are suggested. The resulting error dynamics are linear at any signal/noise ratio, unlike the classical PLL. During synchronization, and above threshold, this filter with constant gains degrades by 3 per cent in output rms phase error with respect to the classical loop. However, up to 80 per cent of the maximum possible noise improvement is obtained below threshold, where the classical loop is nonoptimum, as demonstrated by a Monte Carlo analysis. Filter mechanizations are presented for both carrier and baseband operation.
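A toy sketch of the quadrature-component idea described above: constant-gain first-order filters applied to noisy I/Q observations of a Brownian-motion phase, with the gain and noise levels chosen for illustration rather than taken from the paper's optimal design:

```python
import numpy as np

rng = np.random.default_rng(1)
n, gain = 2000, 0.05
true_phase = np.cumsum(rng.normal(0.0, 0.01, n))       # Brownian-motion phase
i_obs = np.cos(true_phase) + rng.normal(0.0, 0.3, n)   # noisy in-phase
q_obs = np.sin(true_phase) + rng.normal(0.0, 0.3, n)   # noisy quadrature

# Constant-gain linear filters on each quadrature component; the dynamics
# stay linear in (I, Q), unlike a phase-locked loop operating on the phase.
i_hat, q_hat = np.empty(n), np.empty(n)
i_hat[0], q_hat[0] = 1.0, 0.0
for t in range(1, n):
    i_hat[t] = i_hat[t - 1] + gain * (i_obs[t] - i_hat[t - 1])
    q_hat[t] = q_hat[t - 1] + gain * (q_obs[t] - q_hat[t - 1])

def rms_phase_err(i_c, q_c):
    # Wrap the phase error into (-pi, pi] before averaging.
    err = np.angle(np.exp(1j * (np.arctan2(q_c, i_c) - true_phase)))
    return float(np.sqrt(np.mean(err ** 2)))

print(rms_phase_err(i_hat, q_hat) < rms_phase_err(i_obs, q_obs))
```

Recovering the phase with `arctan2` only at the output is what keeps the filtering problem itself linear at any signal/noise ratio.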
Analysis of variance of an underdetermined geodetic displacement problem
Darby, D.
1982-06-01
It has been suggested recently that point displacements in a free geodetic network traversing a strike-slip fault may be estimated from repeated surveys by minimizing only those displacement components normal to the strike. It is desirable to justify this procedure. We construct, from estimable quantities, a deformation parameter which is an F-statistic of the type occurring in the analysis of variance of linear models not of full rank. A test of its significance provides the criterion to justify the displacement solution. It is also interesting to study its behaviour as one varies the supposed strike of the fault. Justification of a displacement solution using data from a strike-slip fault is found, but not for data from a rift valley. The technique can be generalized to more complex patterns of deformation such as those expected near the end-zone of a fault in a dislocation model.
A Posteriori Correction of Forecast and Observation Error Variances
NASA Technical Reports Server (NTRS)
Rukhovets, Leonid
2005-01-01
The proposed method of total observation and forecast error variance correction is based on the assumption that the "observed-minus-forecast" residuals (O-F) are normally distributed, where O is an observed value and F is usually a short-term model forecast. This assumption can be accepted for several types of observations (except humidity) which are not grossly in error. The degree of nearness to a normal distribution can be estimated by the skewness (lack of symmetry) a_3 = mu_3/sigma^3 and the kurtosis a_4 = mu_4/sigma^4 - 3, where mu_i is the i-th order central moment and sigma is the standard deviation. It is well known that for a normal distribution a_3 = a_4 = 0.
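The normality check described above uses the third and fourth standardized moments; a minimal implementation follows (illustrative; in practice it would be applied to the O-F residuals):

```python
import numpy as np

def skewness_kurtosis(x):
    """Sample skewness a3 = mu3/sigma^3 and excess kurtosis a4 = mu4/sigma^4 - 3."""
    x = np.asarray(x, dtype=float)
    dev = x - np.mean(x)                 # deviations from the mean
    sigma = np.sqrt(np.mean(dev ** 2))   # population standard deviation
    a3 = np.mean(dev ** 3) / sigma ** 3
    a4 = np.mean(dev ** 4) / sigma ** 4 - 3.0
    return a3, a4
```

For a symmetric sample the skewness is exactly zero, and both statistics vanish in expectation for normally distributed residuals, which is the diagnostic the abstract relies on.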
The use of analysis of variance procedures in biological studies
Williams, B.K.
1987-01-01
The analysis of variance (ANOVA) is widely used in biological studies, yet there remains considerable confusion among researchers about the interpretation of hypotheses being tested. Ambiguities arise when statistical designs are unbalanced, and in particular when not all combinations of design factors are represented in the data. This paper clarifies the relationship among hypothesis testing, statistical modelling and computing procedures in ANOVA for unbalanced data. A simple two-factor fixed effects design is used to illustrate three common parametrizations for ANOVA models, and some associations among these parametrizations are developed. Biologically meaningful hypotheses for main effects and interactions are given in terms of each parametrization, and procedures for testing the hypotheses are described. The standard statistical computing procedures in ANOVA are given along with their corresponding hypotheses. Throughout the development unbalanced designs are assumed and attention is given to problems that arise with missing cells.