40 CFR 142.307 - What terms and conditions must be included in a small system variance?
Code of Federal Regulations, 2012 CFR
2012-07-01
... improvements to comply with the small system variance technology, secure an alternative source of water, or... included in a small system variance? 142.307 Section 142.307 Protection of Environment ENVIRONMENTAL... IMPLEMENTATION Variances for Small System Review of Small System Variance Application § 142.307 What terms and...
40 CFR 142.307 - What terms and conditions must be included in a small system variance?
Code of Federal Regulations, 2013 CFR
2013-07-01
... improvements to comply with the small system variance technology, secure an alternative source of water, or... included in a small system variance? 142.307 Section 142.307 Protection of Environment ENVIRONMENTAL... IMPLEMENTATION Variances for Small System Review of Small System Variance Application § 142.307 What terms and...
40 CFR 142.307 - What terms and conditions must be included in a small system variance?
Code of Federal Regulations, 2014 CFR
2014-07-01
... improvements to comply with the small system variance technology, secure an alternative source of water, or... included in a small system variance? 142.307 Section 142.307 Protection of Environment ENVIRONMENTAL... IMPLEMENTATION Variances for Small System Review of Small System Variance Application § 142.307 What terms and...
40 CFR 142.307 - What terms and conditions must be included in a small system variance?
Code of Federal Regulations, 2011 CFR
2011-07-01
... improvements to comply with the small system variance technology, secure an alternative source of water, or... included in a small system variance? 142.307 Section 142.307 Protection of Environment ENVIRONMENTAL... IMPLEMENTATION Variances for Small System Review of Small System Variance Application § 142.307 What terms and...
40 CFR 142.302 - Who can issue a small system variance?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Who can issue a small system variance? 142.302 Section 142.302 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... General Provisions § 142.302 Who can issue a small system variance? A small system variance under this...
40 CFR 142.303 - Which size public water systems can receive a small system variance?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Which size public water systems can receive a small system variance? 142.303 Section 142.303 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System General...
40 CFR 142.303 - Which size public water systems can receive a small system variance?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Which size public water systems can receive a small system variance? 142.303 Section 142.303 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System General...
40 CFR 142.312 - What EPA action is necessary when a State proposes to grant a small system variance to a public water system serving a population of more than 3,300 and fewer than 10,000 persons?
Code of Federal Regulations, 2011 CFR
2011-07-01
... State proposes to grant a small system variance to a public water system serving a population of more... water system serving a population of more than 3,300 and fewer than 10,000 persons? (a) At the time a State proposes to grant a small system variance to a public water system serving a population of more...
40 CFR 142.312 - What EPA action is necessary when a State proposes to grant a small system variance to a public water system serving a population of more than 3,300 and fewer than 10,000 persons?
Code of Federal Regulations, 2012 CFR
2012-07-01
... State proposes to grant a small system variance to a public water system serving a population of more... water system serving a population of more than 3,300 and fewer than 10,000 persons? (a) At the time a State proposes to grant a small system variance to a public water system serving a population of more...
40 CFR 142.312 - What EPA action is necessary when a State proposes to grant a small system variance to a public water system serving a population of more than 3,300 and fewer than 10,000 persons?
Code of Federal Regulations, 2013 CFR
2013-07-01
... State proposes to grant a small system variance to a public water system serving a population of more... water system serving a population of more than 3,300 and fewer than 10,000 persons? (a) At the time a State proposes to grant a small system variance to a public water system serving a population of more...
40 CFR 142.304 - For which of the regulatory requirements is a small system variance available?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 23 2011-07-01 2011-07-01 false For which of the regulatory requirements is a small system variance available? 142.304 Section 142.304 Protection of Environment... REGULATIONS IMPLEMENTATION Variances for Small System General Provisions § 142.304 For which of the regulatory...
40 CFR 142.311 - What procedures allow the Administrator to object to a proposed small system variance or overturn a granted small system variance for a public water system serving 3,300 or fewer persons?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false What procedures allow the Administrator to object to a proposed small system variance or overturn a granted small system variance for a public water system serving 3,300 or fewer persons? 142.311 Section 142.311 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER...
40 CFR 142.311 - What procedures allow the Administrator to object to a proposed small system variance or overturn a granted small system variance for a public water system serving 3,300 or fewer persons?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false What procedures allow the Administrator to object to a proposed small system variance or overturn a granted small system variance for a public water system serving 3,300 or fewer persons? 142.311 Section 142.311 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER...
Small Drinking Water System Variances
Small system variances allow a small system to install and maintain technology that can remove a contaminant to the maximum extent that is affordable and protective of public health in lieu of technology that can achieve compliance with the regulation.
40 CFR 142.301 - What is a small system variance?
Code of Federal Regulations, 2011 CFR
2011-07-01
....301 Section 142.301 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System... procedures and criteria for obtaining these variances. The regulations in this subpart shall take effect on...
40 CFR 142.301 - What is a small system variance?
Code of Federal Regulations, 2012 CFR
2012-07-01
....301 Section 142.301 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System... procedures and criteria for obtaining these variances. The regulations in this subpart shall take effect on...
40 CFR 142.301 - What is a small system variance?
Code of Federal Regulations, 2010 CFR
2010-07-01
....301 Section 142.301 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System... procedures and criteria for obtaining these variances. The regulations in this subpart shall take effect on...
40 CFR 142.301 - What is a small system variance?
Code of Federal Regulations, 2013 CFR
2013-07-01
....301 Section 142.301 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System... procedures and criteria for obtaining these variances. The regulations in this subpart shall take effect on...
40 CFR 142.301 - What is a small system variance?
Code of Federal Regulations, 2014 CFR
2014-07-01
....301 Section 142.301 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System... procedures and criteria for obtaining these variances. The regulations in this subpart shall take effect on...
Code of Federal Regulations, 2010 CFR
2010-07-01
... meets the source water quality requirements for installing the small system variance technology...: (i) The quality of the source water for the public water system; and (ii) Removal efficiencies and expected useful life of the small system variance technology. ...
Code of Federal Regulations, 2011 CFR
2011-07-01
... meets the source water quality requirements for installing the small system variance technology...: (i) The quality of the source water for the public water system; and (ii) Removal efficiencies and expected useful life of the small system variance technology. ...
40 CFR 142.307 - What terms and conditions must be included in a small system variance?
Code of Federal Regulations, 2010 CFR
2010-07-01
... that may affect proper and effective operation and maintenance of the technology; (2) Monitoring... effective installation, operation and maintenance of the applicable small system variance technology in... health, which may include: (i) Public education requirements; and (ii) Source water protection...
40 CFR 142.313 - How will the Administrator review a State's program under this subpart?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 23 2011-07-01 2011-07-01 false How will the Administrator review a... PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System EPA Review and Approval of Small System Variances § 142.313 How will...
40 CFR 142.313 - How will the Administrator review a State's program under this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false How will the Administrator review a... PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System EPA Review and Approval of Small System Variances § 142.313 How will...
Code of Federal Regulations, 2012 CFR
2012-07-01
... the Act; and (C) Ownership changes, physical consolidation with another public water system, or other... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER... responsibility may issue variances to public water systems (other than small system variances) from the...
Code of Federal Regulations, 2011 CFR
2011-07-01
... the Act; and (C) Ownership changes, physical consolidation with another public water system, or other... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER... responsibility may issue variances to public water systems (other than small system variances) from the...
Code of Federal Regulations, 2010 CFR
2010-07-01
... State must consider the availability of an alternative source of water, including the feasibility of... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER... responsibility may issue variances to public water systems (other than small system variances) from the...
40 CFR 142.312 - What EPA action is necessary when a State proposes to grant a small system variance to a public water system serving a population of more than 3,300 and fewer than 10,000 persons?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false What EPA action is necessary when a State proposes to grant a small system variance to a public water system serving a population of more than 3,300 and fewer than 10,000 persons? 142.312 Section 142.312 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAM...
40 CFR 142.312 - What EPA action is necessary when a State proposes to grant a small system variance to a public water system serving a population of more than 3,300 and fewer than 10,000 persons?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false What EPA action is necessary when a State proposes to grant a small system variance to a public water system serving a population of more than 3,300 and fewer than 10,000 persons? 142.312 Section 142.312 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAM...
1991-12-01
Kalman filtering. As GPS usage expands throughout the military and civilian communities, I hope this thesis provides a small contribution in this area...of the measurement equation. In this thesis, some of the INS states not part of a measurement equation need a small amount of added noise to...estimating the state, but the variance often goes negative. A small amount of added noise in the filter keeps the variance of the state positive and does not
The magnitude and colour of noise in genetic negative feedback systems.
Voliotis, Margaritis; Bowsher, Clive G
2012-08-01
The comparative ability of transcriptional and small RNA-mediated negative feedback to control fluctuations or 'noise' in gene expression remains unexplored. Both autoregulatory mechanisms usually suppress the average (mean) of the protein level and its variability across cells. The variance of the number of proteins per molecule of mean expression is also typically reduced compared with the unregulated system, but is almost never below the value of one. This relative variance often substantially exceeds a recently obtained, theoretical lower limit for biochemical feedback systems. Adding the transcriptional or small RNA-mediated control has different effects. Transcriptional autorepression robustly reduces both the relative variance and persistence (lifetime) of fluctuations. Both benefits combine to reduce noise in downstream gene expression. Autorepression via small RNA can achieve more extreme noise reduction and typically has less effect on the mean expression level. However, it is often more costly to implement and is more sensitive to rate parameters. Theoretical lower limits on the relative variance are known to decrease slowly as a measure of the cost per molecule of mean expression increases. However, the proportional increase in cost to achieve substantial noise suppression can be different away from the optimal frontier-for transcriptional autorepression, it is frequently negligible.
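To make the quantities in this abstract concrete, the following is a minimal stochastic-simulation sketch of the comparison it describes: a birth-death protein model with and without transcriptional autorepression, reporting the variance per molecule of mean expression (variance/mean). The Hill-repression form, all rate constants, and the simulation horizon are illustrative assumptions, not the authors' model; note that in this simplified one-stage model feedback can push the ratio below one, whereas the paper's fuller models include mRNA and small RNA stages.

```python
import numpy as np

rng = np.random.default_rng(0)

def ssa_protein(t_end, k, gamma, K=None, h=2):
    """Gillespie simulation of protein copy number p: birth rate k
    (divided by a Hill term when K is given, i.e. transcriptional
    autorepression), linear degradation gamma*p."""
    t, p = 0.0, 0
    ts, ps = [0.0], [0]
    while t < t_end:
        birth = k / (1.0 + (p / K) ** h) if K is not None else k
        death = gamma * p
        total = birth + death
        t += rng.exponential(1.0 / total)
        p += 1 if rng.random() < birth / total else -1
        ts.append(t); ps.append(p)
    return np.array(ts), np.array(ps)

def time_avg_stats(ts, ps, burn=0.2):
    # time-weighted mean and variance, discarding a burn-in fraction
    i0 = int(burn * len(ts))
    dt = np.diff(ts[i0:])
    p = ps[i0:-1]
    w = dt / dt.sum()
    m = np.sum(w * p)
    v = np.sum(w * (p - m) ** 2)
    return m, v

for label, K in [("unregulated", None), ("autorepressed", 50.0)]:
    ts, ps = ssa_protein(t_end=500.0, k=100.0, gamma=1.0, K=K)
    m, v = time_avg_stats(ts, ps)
    print(f"{label:>13}: mean={m:6.1f}  variance/mean={v/m:5.2f}")
```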
Feasibility Study for Design of a Biocybernetic Communication System
1975-08-01
electrode for the Within Words variance and Between Words variance for each of the 255 data samples in the 6-sec epoch. If a given sample point was not...contributing to the computer classification of the word, the ratio of the two variances (i.e., the F-statistic) should be small. On the other hand...if the Between Word variance was signifi- cantly higher than the Within Word variance for a given sample point, we can assume with some confidence
Kofman, Rianne; Beekman, Anna M; Emmelot, Cornelis H; Geertzen, Jan H B; Dijkstra, Pieter U
2018-06-01
Non-contact scanners may have potential for measurement of residual limb volume. Different non-contact scanners have been introduced during the last decades. Reliability and usability (practicality and user friendliness) should be assessed before introducing these systems in clinical practice. The aim of this study was to analyze the measurement properties and usability of four non-contact scanners (TT Design, Omega Scanner, BioSculptor Bioscanner, and Rodin4D Scanner). Quasi-experimental. Nine (geometric and residual limb) models were measured on two occasions, each consisting of two sessions, thus in total 4 sessions. In each session, four observers used the four systems for volume measurement. Mean for each model, repeatability coefficients for each system, variance components, and their two-way interactions of measurement conditions were calculated. User satisfaction was evaluated with the Post-Study System Usability Questionnaire. Systematic differences between the systems were found in volume measurements. Most of the variances were explained by the model (97%), while error variance was 3%. Measurement system and the interaction between system and model explained 44% of the error variance. Repeatability coefficient of the systems ranged from 0.101 L (Omega Scanner) to 0.131 L (Rodin4D). Differences in Post-Study System Usability Questionnaire scores between the systems were small and not significant. The systems were reliable in determining residual limb volume. Measurement systems and the interaction between system and residual limb model explained most of the error variances. The differences in repeatability coefficient and usability between the four CAD/CAM systems were small. Clinical relevance: If accurate measurements of residual limb volume are required (in case of research), modern non-contact scanners should be taken into consideration.
40 CFR 142.310 - How can a person served by the public water system obtain EPA review of a State proposed small system variance?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false How can a person served by the public water system obtain EPA review of a State proposed small system variance? 142.310 Section 142.310 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS...
40 CFR 142.310 - How can a person served by the public water system obtain EPA review of a State proposed small system variance?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false How can a person served by the public water system obtain EPA review of a State proposed small system variance? 142.310 Section 142.310 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS...
Quasi-biennial Oscillations (QBO) as seen in GPS/CHAMP Tropospheric and Ionospheric Data
NASA Technical Reports Server (NTRS)
Wu, Dong L.; Pi, Xiaoqing; Ao, Chi O.; Mannucci, Anthony J.
2006-01-01
A viewgraph presentation on Quasi-biennial Oscillations (QBO) from Global Positioning System/Challenging Mini-Satellite Payload (GPS/CHAMP) tropospheric and ionospheric data is shown. The topics include: 1) A brief review of QBO; 2) Characteristics of small-scale oscillations in GPS/CHAMP 50-Hz raw measurements; 3) Variations of lower atmospheric variances; and 4) Variations of E-region variances.
40 CFR 142.308 - What public notice is required before a State or the Administrator proposes to issue a small system variance?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 24 2012-07-01 2012-07-01 false What public notice is required before... PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System Public Participation § 142.308 What public notice is required before a State or the Administrator proposes to issue a small...
40 CFR 142.308 - What public notice is required before a State or the Administrator proposes to issue a small system variance?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false What public notice is required before... PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System Public Participation § 142.308 What public notice is required before a State or the Administrator proposes to issue a small...
40 CFR 142.308 - What public notice is required before a State or the Administrator proposes to issue a small system variance?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 24 2013-07-01 2013-07-01 false What public notice is required before... PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System Public Participation § 142.308 What public notice is required before a State or the Administrator proposes to issue a small...
40 CFR 142.308 - What public notice is required before a State or the Administrator proposes to issue a small system variance?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 23 2011-07-01 2011-07-01 false What public notice is required before... PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System Public Participation § 142.308 What public notice is required before a State or the Administrator proposes to issue a small...
40 CFR 142.308 - What public notice is required before a State or the Administrator proposes to issue a small system variance?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false What public notice is required before... PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System Public Participation § 142.308 What public notice is required before a State or the Administrator proposes to issue a small...
NASA Astrophysics Data System (ADS)
Reynders, Edwin P. B.; Langley, Robin S.
2018-08-01
The hybrid deterministic-statistical energy analysis method has proven to be a versatile framework for modeling built-up vibro-acoustic systems. The stiff system components are modeled deterministically, e.g., using the finite element method, while the wave fields in the flexible components are modeled as diffuse. In the present paper, the hybrid method is extended such that not only the ensemble mean and variance of the harmonic system response can be computed, but also of the band-averaged system response. This variance represents the uncertainty that is due to the assumption of a diffuse field in the flexible components of the hybrid system. The developments start with a cross-frequency generalization of the reciprocity relationship between the total energy in a diffuse field and the cross spectrum of the blocked reverberant loading at the boundaries of that field. By making extensive use of this generalization in a first-order perturbation analysis, explicit expressions are derived for the cross-frequency and band-averaged variance of the vibrational energies in the diffuse components and for the cross-frequency and band-averaged variance of the cross spectrum of the vibro-acoustic field response of the deterministic components. These expressions are extensively validated against detailed Monte Carlo analyses of coupled plate systems in which diffuse fields are simulated by randomly distributing small point masses across the flexible components, and good agreement is found.
Aperture averaging in strong oceanic turbulence
NASA Astrophysics Data System (ADS)
Gökçe, Muhsin Caner; Baykal, Yahya
2018-04-01
The receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses the small-scale and large-scale spatial filters, and our previously presented expression that shows the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter in strong oceanic turbulence. Also, the effect of the receiver aperture diameter on the aperture averaging factor is presented in strong oceanic turbulence.
Analytic variance estimates of Swank and Fano factors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank, E-mail: frank.samuelson@fda.hhs.gov
Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
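The Swank and Fano factors themselves are standard moment-based metrics, so a small sketch may help fix definitions: I = m1^2/(m0*m2) from the raw moments of the pulse-height distribution, and F = variance/mean. The bootstrap used here to get a coefficient of variation is a stand-in for the paper's analytic estimators (an assumption), and the toy detector model is likewise illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def swank_fano(x):
    """Swank factor I = m1^2/(m0*m2) and Fano factor F = var/mean,
    from raw detector outputs x (m_k is the k-th raw moment; m0 = 1
    for normalized samples)."""
    m1, m2 = x.mean(), np.mean(x**2)
    I = m1**2 / m2
    F = x.var(ddof=1) / m1
    return I, F

# toy detector: Poisson primaries, each depositing a gamma-distributed signal
n = 200_000
quanta = rng.poisson(10.0, n)
x = rng.gamma(shape=2.0, scale=1.0, size=n) * quanta

I, F = swank_fano(x)

# bootstrap coefficient of variation, a stand-in for the paper's
# analytic variance estimates (an assumption, not their derivation)
boot = np.array([swank_fano(rng.choice(x, size=n))[0] for _ in range(100)])
print(f"Swank I = {I:.4f} (CV ~ {boot.std()/boot.mean():.2%}), Fano F = {F:.3f}")
```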
Frank C. Sorensen; T.L. White
1988-01-01
Studies of the mating habits of Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) have shown that wind-pollination families contain a small proportion of very slow-growing natural inbreds.The effect of these very small trees on means, variances, and variance ratios was evaluated for height and diameter in a 16-year-old plantation by...
Standard Deviation for Small Samples
ERIC Educational Resources Information Center
Joarder, Anwar H.; Latif, Raja M.
2006-01-01
Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
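For reference, a standard pairwise-difference identity yields variance representations of the kind the abstract describes; whether it matches the article's exact form is an assumption. For n = 3 it can be evaluated mentally for integer data, and an upper bound on the standard deviation in terms of the range R follows at once:

```latex
% standard identity; possibly not the article's exact representation
s^2 = \frac{1}{n(n-1)} \sum_{i<j} (x_i - x_j)^2,
\qquad
n = 3:\quad s^2 = \frac{(x_1-x_2)^2 + (x_1-x_3)^2 + (x_2-x_3)^2}{6},
\qquad
s \le \frac{R}{\sqrt{2}}, \quad R = \max_i x_i - \min_i x_i .
```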
40 CFR 142.306 - What are the responsibilities of the public water system, State and the Administrator in ensuring that sufficient information is available and for evaluation of a small system variance application?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false What are the responsibilities of the public water system, State and the Administrator in ensuring that sufficient information is available and for evaluation of a small system variance application? 142.306 Section 142.306 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY ...
McCollough, Cynthia H; Ulzheimer, Stefan; Halliburton, Sandra S; Shanneik, Kaiss; White, Richard D; Kalender, Willi A
2007-05-01
To develop a consensus standard for quantification of coronary artery calcium (CAC). A standard for CAC quantification was developed by a multi-institutional, multimanufacturer international consortium of cardiac radiologists, medical physicists, and industry representatives. This report specifically describes the standardization of scan acquisition and reconstruction parameters, the use of patient size-specific tube current values to achieve a prescribed image noise, and the use of the calcium mass score to eliminate scanner- and patient size-based variations. An anthropomorphic phantom containing calibration inserts and additional phantom rings were used to simulate small, medium-size, and large patients. The three phantoms were scanned by using the recommended protocols for various computed tomography (CT) systems to determine the calibration factors that relate measured CT numbers to calcium hydroxyapatite density and to determine the tube current values that yield comparable noise values. Calculation of the calcium mass score was standardized, and the variance in Agatston, volume, and mass scores was compared among CT systems. Use of the recommended scanning parameters resulted in similar noise for small, medium-size, and large phantoms with all multi-detector row CT scanners. Volume scores had greater interscanner variance than did Agatston and calcium mass scores. Use of a fixed calcium hydroxyapatite density threshold (100 mg/cm(3)), as compared with use of a fixed CT number threshold (130 HU), reduced interscanner variability in Agatston and calcium mass scores. With use of a density segmentation threshold, the calcium mass score had the smallest variance as a function of patient size. Standardized quantification of CAC yielded comparable image noise, spatial resolution, and mass scores among different patient sizes and different CT systems and facilitated reduced radiation dose for small and medium-size patients.
Adaptation to Variance of Stimuli in Drosophila Larva Navigation
NASA Astrophysics Data System (ADS)
Wolk, Jason; Gepner, Ruben; Gershow, Marc
In order to respond to stimuli that vary over orders of magnitude while also being capable of sensing very small changes, neural systems must be capable of rapidly adapting to the variance of stimuli. We study this adaptation in Drosophila larvae responding to varying visual signals and optogenetically induced fictitious odors using an infrared illuminated arena and custom computer vision software. Larval navigational decisions (when to turn) are modeled as the output of a linear-nonlinear Poisson process. The development of the nonlinear turn rate in response to changes in variance is tracked using an adaptive point process filter determining the rate of adaptation to different stimulus profiles. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.
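A minimal sketch of a linear-nonlinear-Poisson (LNP) turn model of the kind described: the stimulus is passed through a linear filter, an exponential nonlinearity maps the filtered drive to a turn rate, and turns are drawn as an inhomogeneous Poisson process. The kernel shape, gain, and base rate are illustrative assumptions, not fitted larval parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

def lnp_turn_times(stimulus, kernel, dt, base_rate=0.1, gain=2.0):
    """Linear-nonlinear-Poisson sketch of larval turn decisions:
    filter the stimulus, pass the drive through an exponential
    nonlinearity, and draw turns from an inhomogeneous Poisson process
    by thinning. Kernel and nonlinearity shapes are assumptions."""
    drive = np.convolve(stimulus, kernel, mode="full")[: len(stimulus)]
    rate = base_rate * np.exp(gain * drive)          # turns per second
    turns = rng.random(len(rate)) < rate * dt        # valid while rate*dt << 1
    return np.nonzero(turns)[0] * dt

dt = 0.05
t = np.arange(0, 60, dt)
stimulus = np.sin(0.3 * t) + 0.2 * rng.standard_normal(len(t))
kernel = np.exp(-np.arange(0, 2, dt) / 0.5) * dt     # 0.5 s decaying filter
print(lnp_turn_times(stimulus, kernel, dt)[:10])     # first few turn times (s)
```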
Enhancing target variance in personality impressions: highlighting the person in person perception.
Paulhus, D L; Reynolds, S
1995-12-01
D. A. Kenny (1994) estimated the components of personality rating variance to be 15, 20, and 20% for target, rater, and relationship, respectively. To enhance trait variance and minimize rater variance, we designed a series of studies of personality perception in discussion groups (N = 79, 58, and 59). After completing a Big Five questionnaire, participants met 7 times in small groups. After Meetings 1 and 7, group members rated each other. By applying the Social Relations Model (D. A. Kenny and L. La Voie, 1984) to each Big Five dimension at each point in time, we were able to evaluate 6 rating effects as well as rating validity. Among the findings were that (a) target variance was the largest component (almost 30%), whereas rater variance was small (less than 11%); (b) rating validity improved significantly with acquaintance, although target variance did not; and (c) no reciprocity was found, but projection was significant for Agreeableness.
Testing Small Variance Priors Using Prior-Posterior Predictive p Values.
Hoijtink, Herbert; van de Schoot, Rens
2017-04-03
Muthén and Asparouhov (2012) propose to evaluate model fit in structural equation models based on approximate (using small variance priors) instead of exact equality of (combinations of) parameters to zero. This is an important development that adequately addresses Cohen's (1994) The Earth is Round (p < .05), which stresses that point null-hypotheses are so precise that small and irrelevant differences from the null-hypothesis may lead to their rejection. It is tempting to evaluate small variance priors using readily available approaches like the posterior predictive p value and the DIC. However, as will be shown, neither is suited for the evaluation of models based on small variance priors. In this article, a well-behaving alternative, the prior-posterior predictive p value, will be introduced. It will be shown that it is consistent, the distributions under the null and alternative hypotheses will be elaborated, and it will be applied to testing whether the difference between 2 means and the size of a correlation are relevantly different from zero.
Multivariate classification of small order watersheds in the Quabbin Reservoir Basin, Massachusetts
Lent, R.M.; Waldron, M.C.; Rader, J.C.
1998-01-01
A multivariate approach was used to analyze hydrologic, geologic, geographic, and water-chemistry data from small order watersheds in the Quabbin Reservoir Basin in central Massachusetts. Eighty-three small order watersheds were delineated and landscape attributes defining hydrologic, geologic, and geographic features of the watersheds were compiled from geographic information system data layers. Principal components analysis was used to evaluate 11 chemical constituents collected bi-weekly for 1 year at 15 surface-water stations in order to subdivide the basin into subbasins comprised of watersheds with similar water quality characteristics. Three principal components accounted for about 90 percent of the variance in water chemistry data. The principal components were defined as a biogeochemical variable related to wetland density, an acid-neutralization variable, and a road-salt variable related to density of primary roads. Three subbasins were identified. Analysis of variance and multiple comparisons of means were used to identify significant differences in stream water chemistry and landscape attributes among subbasins. All stream water constituents were significantly different among subbasins. Multiple regression techniques were used to relate stream water chemistry to landscape attributes. Important differences in landscape attributes were related to wetlands, slope, and soil type.
Equifinality and its violations in a redundant system: multifinger accurate force production.
Wilhelm, Luke; Zatsiorsky, Vladimir M; Latash, Mark L
2013-10-01
We explored a hypothesis that transient perturbations applied to a redundant system result in equifinality in the space of task-related performance variables but not in the space of elemental variables. The subjects pressed with four fingers and produced an accurate constant total force level. The "inverse piano" device was used to lift and lower one of the fingers smoothly. The subjects were instructed "not to intervene voluntarily" with possible force changes. Analysis was performed in spaces of finger forces and finger modes (hypothetical neural commands to fingers) as elemental variables. Lifting a finger led to an increase in its force and a decrease in the forces of the other three fingers; the total force increased. Lowering the finger back led to a drop in the force of the perturbed finger. At the final state, the sum of the variances of finger forces/modes computed across repetitive trials was significantly higher than the variance of the total force/mode. Most variance of the individual finger force/mode changes between the preperturbation and postperturbation states was compatible with constant total force. We conclude that a transient perturbation applied to a redundant system leads to relatively small variance in the task-related performance variable (equifinality), whereas in the space of elemental variables much more variance occurs that does not lead to total force changes. We interpret the results within a general theoretical scheme that incorporates the ideas of hierarchically organized control, control with referent configurations, synergic control, and the uncontrolled manifold hypothesis.
Pare, Guillaume; Mao, Shihong; Deng, Wei Q
2016-06-08
Despite considerable efforts, known genetic associations only explain a small fraction of predicted heritability. Regional associations combine information from multiple contiguous genetic variants and can improve variance explained at established association loci. However, regional associations are not easily amenable to estimation using summary association statistics because of sensitivity to linkage disequilibrium (LD). We now propose a novel method, LD Adjusted Regional Genetic Variance (LARGV), to estimate phenotypic variance explained by regional associations using summary statistics while accounting for LD. Our method is asymptotically equivalent to a multiple linear regression model when no interaction or haplotype effects are present. It has several applications, such as ranking of genetic regions according to variance explained or comparison of variance explained by two or more regions. Using height and BMI data from the Health Retirement Study (N = 7,776), we show that most genetic variance lies in a small proportion of the genome and that previously identified linkage peaks have higher than expected regional variance.
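One plausible formalization of estimating regional variance explained from summary statistics while accounting for LD (an assumption; not claimed to be the LARGV estimator itself): with standardized genotypes and phenotype, the marginal effects r and joint effects b satisfy r = Rb for LD correlation matrix R, so the region's variance explained is r'R^{-1}r.

```python
import numpy as np

def regional_r2(marginal_betas, ld_corr, ridge=1e-3):
    """Variance explained by a region from summary statistics.
    For standardized genotypes/phenotype, marginal effects r relate to
    joint effects b via r = R b, so R^2 = r' R^{-1} r. A small ridge
    term stabilizes the inversion under strong LD. This formalizes the
    LD-adjustment idea; it is not claimed to be LARGV's exact estimator."""
    r = np.asarray(marginal_betas, dtype=float)
    R = np.asarray(ld_corr, dtype=float) + ridge * np.eye(len(r))
    return float(r @ np.linalg.solve(R, r))

R = np.array([[1.0, 0.8], [0.8, 1.0]])   # two SNPs in strong LD
r = np.array([0.10, 0.09])               # marginal standardized effects
print(regional_r2(r, R))                 # joint variance explained by the region
```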
Hu, Jianhua; Wright, Fred A
2007-03-01
The identification of the genes that are differentially expressed in two-sample microarray experiments remains a difficult problem when the number of arrays is very small. We discuss the implications of using ordinary t-statistics and examine other commonly used variants. For oligonucleotide arrays with multiple probes per gene, we introduce a simple model relating the mean and variance of expression, possibly with gene-specific random effects. Parameter estimates from the model have natural shrinkage properties that guard against inappropriately small variance estimates, and the model is used to obtain a differential expression statistic. A limiting value to the positive false discovery rate (pFDR) for ordinary t-tests provides motivation for our use of the data structure to improve variance estimates. Our approach performs well compared to other proposed approaches in terms of the false discovery rate.
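The abstract's shrinkage idea can be illustrated with a generic moderated t statistic, in which each gene's variance estimate is pooled toward a prior value with pseudo-degrees of freedom. This follows the widely used moderated-t construction and is an analogy to, not a reproduction of, the authors' mean-variance model.

```python
import numpy as np

def moderated_t(x, y, d0=4.0, s0_sq=None):
    """Two-sample t with a shrunken variance: each gene's pooled variance
    s^2 is pulled toward a prior s0^2 with d0 pseudo-degrees of freedom,
    guarding against spuriously small variance estimates. This follows
    the general moderated-t idea, not the paper's specific model."""
    nx, ny = x.shape[1], y.shape[1]
    s2 = ((x.var(axis=1, ddof=1) * (nx - 1) + y.var(axis=1, ddof=1) * (ny - 1))
          / (nx + ny - 2))
    if s0_sq is None:
        s0_sq = np.median(s2)     # crude prior; real methods estimate it
    s2_tilde = (d0 * s0_sq + (nx + ny - 2) * s2) / (d0 + nx + ny - 2)
    se = np.sqrt(s2_tilde * (1 / nx + 1 / ny))
    return (x.mean(axis=1) - y.mean(axis=1)) / se

rng = np.random.default_rng(3)
x = rng.normal(0, 1, size=(1000, 3))      # 1000 genes, 3 arrays per group
y = rng.normal(0, 1, size=(1000, 3))
print(moderated_t(x, y)[:5])
```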
Upper and lower bounds for semi-Markov reliability models of reconfigurable systems
NASA Technical Reports Server (NTRS)
White, A. L.
1984-01-01
This paper determines the information required about system recovery to compute the reliability of a class of reconfigurable systems. Upper and lower bounds are derived for these systems. The class consists of those systems that satisfy five assumptions: the components fail independently at a low constant rate, fault occurrence and system reconfiguration are independent processes, the reliability model is semi-Markov, the recovery functions which describe system configuration have small means and variances, and the system is well designed. The bounds are easy to compute, and examples are included.
40 CFR 142.304 - For which of the regulatory requirements is a small system variance available?
Code of Federal Regulations, 2010 CFR
2010-07-01
... subpart for a national primary drinking water regulation for a microbial contaminant (including a bacterium, virus, or other organism) or an indicator or treatment technique for a microbial contaminant. (b... requirement specifying a maximum contaminant level or treatment technique for a contaminant with respect to...
Global Distributions of Temperature Variances At Different Stratospheric Altitudes From Gps/met Data
NASA Astrophysics Data System (ADS)
Gavrilov, N. M.; Karpova, N. V.; Jacobi, Ch.
The GPS/MET measurements at altitudes 5 - 35 km are used to obtain global distributions of small-scale temperature variances at different stratospheric altitudes. Individual temperature profiles are smoothed using second order polynomial approximations in 5 - 7 km thick layers centered at 10, 20 and 30 km. Temperature inclinations from the averaged values and their variances obtained for each profile are averaged for each month of year during the GPS/MET experiment. Global distributions of temperature variances have inhomogeneous structure. Locations and latitude distributions of the maxima and minima of the variances depend on altitudes and season. One of the reasons for the small-scale temperature perturbations in the stratosphere could be internal gravity waves (IGWs). Some assumptions are made about peculiarities of IGW generation and propagation in the tropo-stratosphere based on the results of GPS/MET data analysis.
ERIC Educational Resources Information Center
Metsamuuronen, Jari; Kuosa, Tuomo; Laukkanen, Reijo
2013-01-01
Purpose: During the new millennium the Finnish educational system has faced a new challenge: how to explain glorious PISA results produced with only a small variance between schools, average national costs and, as regards the average duration of studies, relatively efficiently. Explanations for this issue can be searched for in many different…
Consistent Small-Sample Variances for Six Gamma-Family Measures of Ordinal Association
ERIC Educational Resources Information Center
Woods, Carol M.
2009-01-01
Gamma-family measures are bivariate ordinal correlation measures that form a family because they all reduce to Goodman and Kruskal's gamma in the absence of ties (1954). For several gamma-family indices, more than one variance estimator has been introduced. In previous research, the "consistent" variance estimator described by Cliff and…
Time and resource limits on working memory: cross-age consistency in counting span performance.
Ransdell, Sarah; Hecht, Steven
2003-12-01
This longitudinal study separated resource demand effects from those of retention interval in a counting span task among 100 children tested in grade 2 and again in grades 3 and 4. A last-card-large counting span condition had an equivalent memory load to a last-card-small condition, but the last-card-large condition required holding the count over a longer retention interval. In all three waves of assessment, the last-card-large condition was found to be less accurate than the last-card-small. A model predicting reading comprehension showed that age was a significant predictor when entered first, accounting for 26% of the variance, but counting span accounted for a further 22% of the variance. Span at Wave 1 accounted for significant unique variance at Wave 2 and at Wave 3. Results were similar for math calculation, with age accounting for 31% of the variance and counting span accounting for a further 34% of the variance. Span at Wave 1 explained unique variance in math at Wave 2 and at Wave 3.
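The variance-partitioning claims above are hierarchical regression: fit the baseline predictor first, then record the increment in R^2 when the span measure is added. A sketch with synthetic data (the coefficients and sample are illustrative, not the study's):

```python
import numpy as np

def r_squared(X, y):
    # OLS R^2 with an intercept column prepended
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(4)
n = 100
age = rng.normal(9, 1, n)                   # illustrative data, not the study's
span = 0.5 * age + rng.normal(0, 1, n)
reading = 0.4 * age + 0.6 * span + rng.normal(0, 1, n)

r2_age = r_squared(age[:, None], reading)
r2_full = r_squared(np.column_stack([age, span]), reading)
print(f"age alone: {r2_age:.2f}; counting span adds {r2_full - r2_age:.2f}")
```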
Wickenberg-Bolin, Ulrika; Göransson, Hanna; Fryknäs, Mårten; Gustafsson, Mats G; Isaksson, Anders
2006-03-13
Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust with good generalization performance to new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval of the error rate using repeated design and test sets selected from available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore, different methods for small sample performance estimation, such as a recently proposed procedure called Repeated Random Sampling (RSS), are also expected to result in heavily biased estimates, which in turn translates into biased confidence intervals. Here we explore such biases and develop a refined algorithm called Repeated Independent Design and Test (RIDT). Our simulations reveal that repeated designs and tests based on resampling in a fixed bag of samples yield a biased variance estimate. We also demonstrate that it is possible to obtain an improved variance estimate by means of a procedure that explicitly models how this bias depends on the number of samples used for testing. For the special case of repeated designs and tests using new samples for each design and test, we present an exact analytical expression for how the expected value of the bias decreases with the size of the test set. We show that via modeling and subsequent reduction of the small sample bias, it is possible to obtain an improved estimate of the variance of classifier performance between design sets. However, the uncertainty of the variance estimate is large in the simulations performed, indicating that the method in its present form cannot be directly applied to small data sets.
Braaf, Boy; Donner, Sabine; Nam, Ahhyun S.; Bouma, Brett E.; Vakoc, Benjamin J.
2018-01-01
Complex differential variance (CDV) provides phase-sensitive angiographic imaging for optical coherence tomography (OCT) with immunity to phase-instabilities of the imaging system and small-scale axial bulk motion. However, like all angiographic methods, measurement noise can result in erroneous indications of blood flow that confuse the interpretation of angiographic images. In this paper, a modified CDV algorithm that corrects for this noise-bias is presented. This is achieved by normalizing the CDV signal by analytically derived upper and lower limits. The noise-bias corrected CDV algorithm was implemented into an experimental 1 μm wavelength OCT system for retinal imaging that used an eye tracking scanner laser ophthalmoscope at 815 nm for compensation of lateral eye motions. The noise-bias correction improved the CDV imaging of the blood flow in tissue layers with a low signal-to-noise ratio and suppressed false indications of blood flow outside the tissue. In addition, the CDV signal normalization suppressed noise induced by galvanometer scanning errors and small-scale lateral motion. High quality cross-section and motion-corrected en face angiograms of the retina and choroid are presented. PMID:29552388
Effects of nonmagnetic disorder on the energy of Yu-Shiba-Rusinov states
NASA Astrophysics Data System (ADS)
Kiendl, Thomas; von Oppen, Felix; Brouwer, Piet W.
2017-10-01
We study the sensitivity of Yu-Shiba-Rusinov states, bound states that form around magnetic scatterers in superconductors, to the presence of nonmagnetic disorder in both two- and three-dimensional systems. We formulate a scattering approach to this problem and reduce the effects of disorder to two contributions: disorder-induced normal reflection and a random phase of the amplitude for Andreev reflection. We find that both of these are small even for moderate amounts of disorder. In the dirty limit, in which the disorder-induced mean free path is smaller than the superconducting coherence length, the variance of the energy of the Yu-Shiba-Rusinov state remains small in the ratio of the Fermi wavelength and the mean free path. This effect is more pronounced in three dimensions, where only impurities within a few Fermi wavelengths of the magnetic scatterer contribute. In two dimensions the energy variance is larger by a logarithmic factor because impurities contribute up to a distance of the order of the superconducting coherence length.
NASA Astrophysics Data System (ADS)
Dorrestijn, Jesse; Kahn, Brian H.; Teixeira, João; Irion, Fredrick W.
2018-05-01
Satellite observations are used to obtain vertical profiles of variance scaling of temperature (T) and specific humidity (q) in the atmosphere. A higher spatial resolution nadir retrieval at 13.5 km complements previous Atmospheric Infrared Sounder (AIRS) investigations with 45 km resolution retrievals and enables the derivation of power law scaling exponents to length scales as small as 55 km. We introduce a variable-sized circular-area Monte Carlo methodology to compute exponents instantaneously within the swath of AIRS that yields additional insight into scaling behavior. While this method is approximate and some biases are likely to exist within non-Gaussian portions of the satellite observational swaths of T and q, this method enables the estimation of scale-dependent behavior within instantaneous swaths for individual tropical and extratropical systems of interest. Scaling exponents are shown to fluctuate between β = -1 and -3 at scales ≥ 500 km, while at scales ≤ 500 km they are typically near β ≈ -2, with q slightly lower than T at the smallest scales observed. In the extratropics, the large-scale β is near -3. Within the tropics, however, the large-scale β for T is closer to -1 as small-scale moist convective processes dominate. In the tropics, q exhibits large-scale β between -2 and -3. The values of β are generally consistent with previous works of either time-averaged spatial variance estimates, or aircraft observations that require averaging over numerous flight observational segments. The instantaneous variance scaling methodology is relevant for cloud parameterization development and the assessment of time variability of scaling exponents.
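The scaling exponents reported above come from power-law fits of variance against length scale. A generic sketch of that estimation step follows; the paper's variable-sized circular-area Monte Carlo sampling within the AIRS swath, and its spectral sign conventions for the exponent, are not reproduced here.

```python
import numpy as np

def scaling_exponent(scales_km, variances):
    """Least-squares slope of log(variance) vs log(scale).
    A generic structure-function-style fit; note that a spectral
    exponent like the paper's beta follows a different sign convention."""
    slope, _ = np.polyfit(np.log(scales_km), np.log(variances), 1)
    return slope

scales = np.array([55.0, 110.0, 220.0, 440.0, 880.0])   # km
variances = 0.02 * scales**1.0                           # synthetic slope-1 case
print(scaling_exponent(scales, variances))               # ~1.0
```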
Adaptive increase in force variance during fatigue in tasks with low redundancy.
Singh, Tarkeshwar; S K M, Varadhan; Zatsiorsky, Vladimir M; Latash, Mark L
2010-11-26
We tested a hypothesis that fatigue of an element (a finger) leads to an adaptive neural strategy that involves an increase in force variability in the other finger(s) and an increase in co-variation of commands to fingers to keep total force variability relatively unchanged. We tested this hypothesis using a system with small redundancy (two fingers) and a marginally redundant system (with an additional constraint related to the total moment of force produced by the fingers, unstable condition). The subjects performed isometric accurate rhythmic force production tasks by the index (I) finger and two fingers (I and middle, M) pressing together before and after a fatiguing exercise by the I finger. Fatigue led to a large increase in force variance in the I-finger task and a smaller increase in the IM-task. We quantified two components of variance in the space of hypothetical commands to fingers, finger modes. Under both stable and unstable conditions, there was a large increase in the variance component that did not affect total force and a much smaller increase in the component that did. This resulted in an increase in an index of the force-stabilizing synergy. These results indicate that marginal redundancy is sufficient to allow the central nervous system to use adaptive increase in variability to shield important variables from effects of fatigue. We offer an interpretation of these results based on a recent development of the equilibrium-point hypothesis known as the referent configuration hypothesis.
Motor equivalence during multi-finger accurate force production
Mattos, Daniela; Schöner, Gregor; Zatsiorsky, Vladimir M.; Latash, Mark L.
2014-01-01
We explored stability of multi-finger cyclical accurate force production action by analysis of responses to small perturbations applied to one of the fingers and inter-cycle analysis of variance. Healthy subjects performed two versions of the cyclical task, with and without an explicit target. The “inverse piano” apparatus was used to lift/lower a finger by 1 cm over 0.5 s; the subjects were always instructed to perform the task as accurately as they could at all times. Deviations in the spaces of finger forces and modes (hypothetical commands to individual fingers) were quantified in directions that did not change total force (motor equivalent) and in directions that changed the total force (non-motor equivalent). Motor equivalent deviations started immediately with the perturbation and increased progressively with time. After a sequence of lifting-lowering perturbations leading to the initial conditions, motor equivalent deviations were dominating. These phenomena were less pronounced for analysis performed with respect to the total moment of force with respect to an axis parallel to the forearm/hand. Analysis of inter-cycle variance showed consistently higher variance in a subspace that did not change the total force as compared to the variance that affected total force. We interpret the results as reflections of task-specific stability of the redundant multi-finger system. Large motor equivalent deviations suggest that reactions of the neuromotor system to a perturbation involve large changes of neural commands that do not affect salient performance variables, even during actions with the purpose to correct those salient variables. Consistency of the analyses of motor equivalence and variance analysis provides additional support for the idea of task-specific stability ensured at a neural level. PMID:25344311
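The variance decomposition used in this line of work can be sketched directly: project across-trial finger-force deviations onto the direction that changes total force and onto its orthogonal complement (the uncontrolled manifold, where deviations are "motor equivalent"), then compare per-dimension variances. The two-finger toy data below are illustrative.

```python
import numpy as np

def ucm_variance(finger_forces):
    """Split across-trial variance of finger forces into a component that
    leaves total force unchanged (the UCM, 'motor equivalent' directions)
    and the orthogonal component that changes total force.
    Rows = trials, columns = fingers."""
    F = np.asarray(finger_forces, dtype=float)
    n = F.shape[1]
    e = np.ones(n) / np.sqrt(n)            # total-force-changing direction
    dev = F - F.mean(axis=0)
    v_ort = (dev @ e).var(ddof=1)          # variance that changes total force
    v_ucm = dev.var(axis=0, ddof=1).sum() - v_ort
    return v_ucm / (n - 1), v_ort          # normalized per subspace dimension

rng = np.random.default_rng(5)
shared = rng.normal(0.0, 1.0, 40)          # negative finger-force co-variation
forces = np.column_stack([10 + shared + 0.3 * rng.normal(size=40),
                          10 - shared + 0.3 * rng.normal(size=40)])
v_ucm, v_ort = ucm_variance(forces)
print(f"per-DOF variance: UCM = {v_ucm:.2f}, orthogonal = {v_ort:.2f}")
```

A UCM variance well above the orthogonal variance, as in this toy example, is the signature of a force-stabilizing synergy.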
Segmentation of the Knee for Analysis of Osteoarthritis
NASA Astrophysics Data System (ADS)
Zerfass, Peter; Museyko, Oleg; Bousson, Valérie; Laredo, Jean-Denis; Kalender, Willi A.; Engelke, Klaus
Osteoarthritis changes the load distribution within joints and also changes bone density and structure. Within the typical timelines of clinical studies these changes can be very small. Therefore, precise definition of evaluation regions which are highly robust and show little to no inter- and intra-operator variance is essential for high-quality quantitative analysis. To achieve this goal we have developed a system for the definition of such regions with minimal user input.
Discrete-event system simulation on small and medium enterprises productivity improvement
NASA Astrophysics Data System (ADS)
Sulistio, J.; Hidayah, N. A.
2017-12-01
Small and medium industries in Indonesia are currently developing. The problem faced by SMEs is the difficulty of meeting the growing demand coming into the company. Therefore, SMEs need an analysis and evaluation of their production processes in order to meet all orders. The purpose of this research is to increase the productivity of the SME production floor by applying discrete-event system simulation. This method is preferred because it can solve complex problems due to the dynamic and stochastic nature of the system. To increase the credibility of the simulation, the model was validated by comparing the averages of two trials, the variances of two trials, and a chi-square test. Afterwards, the Bonferroni method was applied to develop several alternatives. The article concludes that the productivity of the SME production floor increased by up to 50% by adding capacity to the dyeing and drying machines.
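The abstract gives no implementation details, so the following is only a minimal discrete-event sketch in Python using SimPy, with hypothetical arrival and service parameters, showing how added dyeing/drying capacity might be compared. It illustrates the general technique, not the authors' model.

```python
import random
import simpy

# Hypothetical parameters: the abstract reports no arrival rates,
# service times, or baseline capacities, so these are illustrative only.
ARRIVAL_MEAN = 10.0          # minutes between incoming orders
DYE_TIME, DRY_TIME = 25.0, 30.0
SIM_TIME = 8 * 60            # one working day, in minutes

def order(env, dyeing, drying, done):
    # An order is dyed, then dried; each stage queues for its machine.
    with dyeing.request() as req:
        yield req
        yield env.timeout(random.expovariate(1.0 / DYE_TIME))
    with drying.request() as req:
        yield req
        yield env.timeout(random.expovariate(1.0 / DRY_TIME))
    done.append(env.now)

def source(env, dyeing, drying, done):
    while True:
        yield env.timeout(random.expovariate(1.0 / ARRIVAL_MEAN))
        env.process(order(env, dyeing, drying, done))

def throughput(dye_cap, dry_cap, seed=42):
    random.seed(seed)
    env = simpy.Environment()
    dyeing = simpy.Resource(env, capacity=dye_cap)
    drying = simpy.Resource(env, capacity=dry_cap)
    done = []
    env.process(source(env, dyeing, drying, done))
    env.run(until=SIM_TIME)
    return len(done)

print("baseline capacity :", throughput(1, 1))
print("expanded capacity :", throughput(2, 2))
```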
A hybrid (Monte Carlo/deterministic) approach for multi-dimensional radiation transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bal, Guillaume, E-mail: gb2030@columbia.edu; Davis, Anthony B., E-mail: Anthony.B.Davis@jpl.nasa.gov; Kavli Institute for Theoretical Physics, Kohn Hall, University of California, Santa Barbara, CA 93106-4030
2011-08-20
Highlights: We introduce a variance reduction scheme for Monte Carlo (MC) transport. The primary application is atmospheric remote sensing. The technique first solves the adjoint problem using a deterministic solver. Next, the adjoint solution is used as an importance function for the MC solver. The adjoint problem is solved quickly since it ignores the volume. - Abstract: A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, a scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficients are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently, acceleration) is achieved in the presence of atmospheric interactions.
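As an illustration of the core idea (an approximate adjoint solution used as an importance function), here is a minimal one-dimensional importance-sampling sketch in Python. The peaked "detector" function, the Gaussian importance density, and all parameters are invented for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D transport-like problem: estimate I = E_p[f(X)], where the
# "detector response" f is sharply peaked, so plain MC from p is wasteful.
f = lambda x: np.exp(-200.0 * (x - 0.8) ** 2)   # small detector
p = lambda x: np.ones_like(x)                    # uniform source on [0, 1]

N = 100_000

# Plain Monte Carlo.
x = rng.uniform(0, 1, N)
plain = f(x)

# Importance sampling: a cheap, approximate "adjoint" (here a Gaussian
# roughly matched to the detector) plays the role of the importance
# function, mimicking the deterministic adjoint solution of the paper.
mu, sig = 0.8, 0.1                # deliberately inexact approximation
y = rng.normal(mu, sig, N)
q = np.exp(-0.5 * ((y - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
inside = (y >= 0) & (y <= 1)      # target density has support [0, 1]
weighted = np.where(inside, f(y) * p(y) / q, 0.0)

# Per-sample variances; the estimator variance is these divided by N.
for name, s in [("plain MC", plain), ("importance", weighted)]:
    print(f"{name:11s} mean={s.mean():.5f} var={s.var():.2e}")
```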
The dispersion of age differences between partners and the asymptotic dynamics of the HIV epidemic.
d'Albis, Hippolyte; Augeraud-Véron, Emmanuelle; Djemai, Elodie; Ducrot, Arnaud
2012-01-01
In this paper, the effect of a change in the distribution of age differences between sexual partners on the dynamics of the HIV epidemic is studied. In a gender- and age-structured compartmental model, it is shown that if the variance of the distribution is small enough, an increase in this variance strongly increases the basic reproduction number. Moreover, if the variance is large enough, the mean age difference barely affects the basic reproduction number. We, therefore, conclude that the local stability of the disease-free equilibrium relies more on the variance than on the mean.
Brekke, Patricia; Ewen, John G; Clucas, Gemma; Santure, Anna W
2015-01-01
Floating males are usually thought of as nonbreeders. However, some floating individuals are able to reproduce through extra-pair copulations. Floater reproductive success can impact breeders’ sex ratio, reproductive variance, multiple paternity and inbreeding, particularly in small populations. Changes in reproductive variance alter the rate of genetic drift and loss of genetic diversity. Therefore, genetic management of threatened species requires an understanding of floater reproduction and determinants of floating behaviour to effectively conserve species. Here, we used a pedigreed, free-living population of the endangered New Zealand hihi (Notiomystis cincta) to assess variance in male reproductive success and test the genetic (inbreeding and heritability) and conditional (age and size) factors that influence floater behaviour and reproduction. Floater reproduction is common in this species. However, floater individuals have lower reproductive success and variance in reproductive success than territorial males (total and extra-pair fledglings), so their relative impact on the population's reproductive performance is low. Whether an individual becomes a floater, and if so then how successful they are, is determined mainly by individual age (young and old) and to lesser extents male size (small) and inbreeding level (inbred). Floating males have a small, but important role in population reproduction and persistence of threatened populations. PMID:26366197
Rogala, James T.; Gray, Brian R.
2006-01-01
The Long Term Resource Monitoring Program (LTRMP) uses a stratified random sampling design to obtain water quality statistics within selected study reaches of the Upper Mississippi River System (UMRS). LTRMP sampling strata are based on aquatic area types generally found in large rivers (e.g., main channel, side channel, backwater, and impounded areas). For hydrologically well-mixed strata (i.e., main channel), variance associated with spatial scales smaller than the strata scale is a relatively minor issue for many water quality parameters. However, analysis of LTRMP water quality data has shown that within-strata variability at the strata scale is high in off-channel areas (i.e., backwaters). A portion of that variability may be associated with differences among individual backwater lakes (i.e., small and large backwater regions separated by channels) that cumulatively make up the backwater stratum. The objective of the statistical modeling presented here is to determine if differences among backwater lakes account for a large portion of the variance observed in the backwater stratum for selected parameters. If variance associated with backwater lakes is high, then inclusion of backwater lake effects within statistical models is warranted. Further, lakes themselves may represent natural experimental units where associations of interest to management may be estimated.
The importance of system band broadening in modern size exclusion chromatography.
Goyon, Alexandre; Guillarme, Davy; Fekete, Szabolcs
2017-02-20
In the last few years, highly efficient UHP-SEC columns packed with sub-3 μm particles were commercialized by several providers. Besides the particle size reduction, the dimensions of modern SEC stationary phases (150×4.6 mm) were also modified compared to regular SEC columns (300×6 or 300×8 mm). Because the analytes are excluded from the pores in SEC, the retention factors are very low, ranging from -1
NASA Astrophysics Data System (ADS)
Moster, Benjamin P.; Somerville, Rachel S.; Newman, Jeffrey A.; Rix, Hans-Walter
2011-04-01
Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by "cosmic variance." This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic variance is less serious.
Comments Regarding the Binary Power Law for Heterogeneity of Disease Incidence
USDA-ARS?s Scientific Manuscript database
The binary power law (BPL) has been successfully used to characterize heterogeneity (overdispersion or small-scale aggregation) of disease incidence for many plant pathosystems. With the BPL, the log of the observed variance is a linear function of the log of the theoretical variance for a binomial...
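For reference, a common algebraic statement of the BPL (our notation, not quoted from the manuscript; p is the mean incidence estimated from samples of n units, and A and b are fitted parameters) is

```latex
\log V_{\mathrm{obs}} = \log A + b\,\log\!\left(\frac{p(1-p)}{n}\right)
```

with A = b = 1 recovering the binomial (random) case and b > 1 indicating small-scale aggregation.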
Adaptive Prior Variance Calibration in the Bayesian Continual Reassessment Method
Zhang, Jin; Braun, Thomas M.; Taylor, Jeremy M.G.
2012-01-01
Use of the Continual Reassessment Method (CRM) and other model-based approaches to design in Phase I clinical trials has increased due to the ability of the CRM to identify the maximum tolerated dose (MTD) better than the 3+3 method. However, the CRM can be sensitive to the variance selected for the prior distribution of the model parameter, especially when a small number of patients are enrolled. While methods have emerged to adaptively select skeletons and to calibrate the prior variance only at the beginning of a trial, there has not been any approach developed to adaptively calibrate the prior variance throughout a trial. We propose three systematic approaches to adaptively calibrate the prior variance during a trial and compare them via simulation to methods proposed to calibrate the variance at the beginning of a trial. PMID:22987660
Stratum variance estimation for sample allocation in crop surveys. [Great Plains Corridor
NASA Technical Reports Server (NTRS)
Perry, C. R., Jr.; Chhikara, R. S. (Principal Investigator)
1980-01-01
The problem of determining stratum variances needed to achieve an optimum sample allocation for crop surveys by remote sensing is investigated by considering an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using the existing and easily available information of historical crop statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to estimate stratum variances for wheat in the U.S. Great Plains and is evaluated based on the numerical results thus obtained. It is shown that the proposed technique is viable and performs satisfactorily, using a conservative value for the field size and crop statistics from the small political subdivision level, when the estimated stratum variances are compared to those obtained using the LANDSAT data.
Reaction Event Counting Statistics of Biopolymer Reaction Systems with Dynamic Heterogeneity.
Lim, Yu Rim; Park, Seong Jun; Park, Bo Jung; Cao, Jianshu; Silbey, Robert J; Sung, Jaeyoung
2012-04-10
We investigate the reaction event counting statistics (RECS) of an elementary biopolymer reaction in which the rate coefficient is dependent on the states of the biopolymer and the surrounding environment, and discover a universal kinetic phase transition in the RECS of the reaction system with dynamic heterogeneity. From an exact analysis of a general model of elementary biopolymer reactions, we find that the variance in the number of reaction events is dependent on the square of the mean number of reaction events when the measurement time is short on the relaxation time scale of rate coefficient fluctuations, which does not conform to renewal statistics. On the other hand, when the measurement time interval is much greater than the relaxation time of rate coefficient fluctuations, the variance becomes linearly proportional to the mean reaction number in accordance with renewal statistics. Gillespie's stochastic simulation method is generalized for the reaction system with a fluctuating rate coefficient. The simulation results confirm the correctness of the analytic results for the time-dependent mean and variance of the reaction event number distribution. On the basis of the obtained results, we propose a method of quantitative analysis for the reaction event counting statistics of reaction systems with rate coefficient fluctuations, which enables one to extract information about the magnitude and the relaxation times of the fluctuating reaction rate coefficient, without the bias that can be introduced by assuming a particular kinetic model of conformational dynamics and conformation-dependent reactivity. An exact relationship is established between a higher moment of the reaction event number distribution and the multitime correlation of the reaction rate, for the reaction system with a nonequilibrium initial state distribution as well as for the system with the equilibrium initial state distribution.
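A minimal sketch of the generalized simulation idea: a Gillespie simulation in which the rate coefficient switches between two environmental states. All rates and window lengths below are illustrative assumptions, not the paper's model; the qualitative behavior (variance-to-mean ratio well above 1 for short measurement windows, closer to renewal behavior for long windows) matches the description above.

```python
import numpy as np

rng = np.random.default_rng(1)

# An elementary "reaction" whose rate coefficient switches between two
# conformational states (dynamic heterogeneity). Illustrative numbers.
K = (0.5, 5.0)        # reaction rate in states 0 and 1
GAMMA = 0.1           # state-switching rate (slow => strong memory)

def count_events(T):
    """Gillespie simulation; returns the number of reaction events in [0, T]."""
    t, state, n = 0.0, rng.integers(2), 0
    while True:
        rates = np.array([K[state], GAMMA])       # react, switch
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        if t > T:
            return n
        if rng.random() < rates[0] / total:
            n += 1                                # reaction event
        else:
            state = 1 - state                     # environment switches

for T in (1.0, 100.0):   # short vs long measurement windows
    counts = np.array([count_events(T) for _ in range(2000)])
    m, v = counts.mean(), counts.var()
    print(f"T={T:6.1f}  mean={m:8.2f}  var={v:8.2f}  var/mean={v/m:5.2f}")
```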
Mesoscale Gravity Wave Variances from AMSU-A Radiances
NASA Technical Reports Server (NTRS)
Wu, Dong L.
2004-01-01
A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with a minimum detectable value as small as 0.1 K². Preliminary analyses with AMSU-A data show that GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.
tscvh R Package: Computational of the two samples test on microarray-sequencing data
NASA Astrophysics Data System (ADS)
Fajriyah, Rohmatul; Rosadi, Dedi
2017-12-01
We present a new R package, tscvh (two samples cross-variance homogeneity). The package implements the cross-variance statistical test proposed and introduced by Fajriyah ([3] and [4]), based on the cross-variance concept. The test can be used as an alternative test for the significance of the difference between two means when the sample size is small, a situation that commonly arises in bioinformatics research. Based on its statistical distribution, the p-value can also be provided. The package is built under the assumption of homogeneity of variance between the samples.
Tisdall, M Dylan; Reuter, Martin; Qureshi, Abid; Buckner, Randy L; Fischl, Bruce; van der Kouwe, André J W
2016-02-15
Recent work has demonstrated that subject motion produces systematic biases in the metrics computed by widely used morphometry software packages, even when the motion is too small to produce noticeable image artifacts. In the common situation where the control population exhibits different behaviors in the scanner when compared to the experimental population, these systematic measurement biases may produce significant confounds for between-group analyses, leading to erroneous conclusions about group differences. While previous work has shown that prospective motion correction can improve perceived image quality, here we demonstrate that, in healthy subjects performing a variety of directed motions, the use of the volumetric navigator (vNav) prospective motion correction system significantly reduces the motion-induced bias and variance in morphometry. Copyright © 2015 Elsevier Inc. All rights reserved.
Isotope scattering and phonon thermal conductivity in light atom compounds: LiH and LiF
Lindsay, Lucas R.
2016-11-08
Engineered isotope variation is a pathway toward modulating the lattice thermal conductivity (κ) of a material through changes in phonon-isotope scattering. The effects of isotope variation on intrinsic thermal resistance are little explored, as varying isotopes have relatively small differences in mass and thus do not affect bulk phonon dispersions. However, for light elements isotope mass variation can be relatively large (e.g., hydrogen and deuterium). Using a first principles Peierls-Boltzmann transport equation approach, the effects of isotope variance on lattice thermal transport in the ultra-low-mass compound materials LiH and LiF are characterized. The isotope mass variance modifies the intrinsic thermal resistance via modulation of acoustic and optic phonon frequencies, while phonon-isotope scattering from mass disorder plays only a minor role. This leads to some unusual cases where κ values of isotopically pure systems (⁶LiH, ⁷Li²H, and ⁶LiF) are lower than the values from their counterparts with naturally occurring isotopes and phonon-isotope scattering. However, these differences are relatively small. The effects of temperature-driven lattice expansion on phonon dispersions and calculated κ are also discussed. This work provides insight into lattice thermal conductivity modulation with mass variation and the interplay of intrinsic phonon-phonon and phonon-isotope scattering in interesting light atom systems.
Würschum, Tobias; Langer, Simon M; Longin, C Friedrich H; Tucker, Matthew R; Leiser, Willmar L
2018-06-01
The broad adaptability of heading time has contributed to the global success of wheat in a diverse array of climatic conditions. Here, we investigated the genetic architecture underlying heading time in a large panel of 1,110 winter wheat cultivars of worldwide origin. Genome-wide association mapping, in combination with the analysis of major phenology loci, revealed a three-component system that facilitates the adaptation of heading time in winter wheat. The photoperiod sensitivity locus Ppd-D1 was found to account for almost half of the genotypic variance in this panel and can advance or delay heading by many days. In addition, copy number variation at Ppd-B1 was the second most important source of variation in heading, explaining 8.3% of the genotypic variance. Results from association mapping and genomic prediction indicated that the remaining variation is attributed to numerous small-effect quantitative trait loci that facilitate fine-tuning of heading to the local climatic conditions. Collectively, our results underpin the importance of the two Ppd-1 loci for the adaptation of heading time in winter wheat and illustrate how the three components have been exploited for wheat breeding globally. © 2018 John Wiley & Sons Ltd.
Reduction of variance in spectral estimates for correction of ultrasonic aberration.
Astheimer, Jeffrey P; Pilkington, Wayne C; Waag, Robert C
2006-01-01
A variance reduction factor is defined to describe the rate of convergence and accuracy of spectra estimated from overlapping ultrasonic scattering volumes when the scattering is from a spatially uncorrelated medium. Assuming that the individual volumes are localized by a spherically symmetric Gaussian window and that centers of the volumes are located on orbits of an icosahedral rotation group, the factor is minimized by adjusting the weight and radius of each orbit. Conditions necessary for the application of the variance reduction method, particularly for statistical estimation of aberration, are examined. The smallest possible value of the factor is found by allowing an unlimited number of centers constrained only to be within a ball rather than on icosahedral orbits. Computations using orbits formed by icosahedral vertices, face centers, and edge midpoints with a constraint radius limited to a small multiple of the Gaussian width show that a significant reduction of variance can be achieved from a small number of centers in the confined volume and that this reduction is nearly the maximum obtainable from an unlimited number of centers in the same volume.
Wright, George W; Simon, Richard M
2003-12-12
Microarray techniques provide a valuable way of characterizing the molecular nature of disease. Unfortunately, expense and limited specimen availability often lead to studies with small sample sizes. This makes accurate estimation of variability difficult, since variance estimates made on a gene-by-gene basis will have few degrees of freedom, and the assumption that all genes share equal variance is unlikely to be true. We propose a model by which the within-gene variances are drawn from an inverse gamma distribution, whose parameters are estimated across all genes. This results in a test statistic that is a minor variation of those used in standard linear models. We demonstrate that the model assumptions are valid on experimental data, and that the model has more power than standard tests to pick up large changes in expression, while not increasing the rate of false positives. This method is incorporated into BRB-ArrayTools version 3.0 (http://linus.nci.nih.gov/BRB-ArrayTools.html); a supplementary report is available at ftp://linus.nci.nih.gov/pub/techreport/RVM_supplement.pdf.
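A rough sketch of the inverse-gamma shrinkage idea in Python follows. The method-of-moments prior fit and the resulting posterior-mean estimator are generic empirical-Bayes choices chosen for illustration; they are not claimed to reproduce the exact test statistic of the paper.

```python
import numpy as np

def shrink_variances(s2, d):
    """Crude empirical-Bayes shrinkage of per-gene variances s2 (df = d).

    Fits an inverse-gamma prior IG(a, b) across genes by matching the
    mean m and variance v of the observed s2 (treating s2 ~ prior, a
    rough approximation), then returns the posterior mean
    (b + d*s2/2) / (a + d/2 - 1) for each gene.
    """
    m, v = s2.mean(), s2.var()
    a = m * m / v + 2.0          # method-of-moments IG parameters
    b = m * (a - 1.0)
    return (b + 0.5 * d * s2) / (a + 0.5 * d - 1.0)

# Illustration: 1000 genes, 3 vs 3 arrays (tiny df per gene).
rng = np.random.default_rng(2)
n1 = n2 = 3
d = n1 + n2 - 2
sigma2 = 1.0 / rng.gamma(3.0, 1.0, size=1000)      # true IG variances
x = rng.normal(0, np.sqrt(sigma2)[:, None], (1000, n1))
y = rng.normal(0, np.sqrt(sigma2)[:, None], (1000, n2))
s2 = ((n1 - 1) * x.var(axis=1, ddof=1) +
      (n2 - 1) * y.var(axis=1, ddof=1)) / d
s2_shrunk = shrink_variances(s2, d)

# Moderated t-statistic using the shrunken variances.
t = (x.mean(1) - y.mean(1)) / np.sqrt(s2_shrunk * (1 / n1 + 1 / n2))
print("raw s2 spread:", s2.std(), " shrunk spread:", s2_shrunk.std())
```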
Why you cannot transform your way out of trouble for small counts.
Warton, David I
2018-03-01
While data transformation is a common strategy to satisfy linear modeling assumptions, a theoretical result is used to show that transformation cannot reasonably be expected to stabilize variances for small counts. Under broad assumptions, as counts get smaller, it is shown that the variance becomes proportional to the mean under monotonic transformations g(·) that satisfy g(0)=0, excepting a few pathological cases. A suggested rule-of-thumb is that if many predicted counts are less than one then data transformation cannot reasonably be expected to stabilize variances, even for a well-chosen transformation. This result has clear implications for the analysis of counts as often implemented in the applied sciences, but particularly for multivariate analysis in ecology. Multivariate discrete data are often collected in ecology, typically with a large proportion of zeros, and it is currently widespread to use methods of analysis that do not account for differences in variance across observations nor across responses. Simulations demonstrate that failure to account for the mean-variance relationship can have particularly severe consequences in this context, and also in the univariate context if the sampling design is unbalanced. © 2017 The Authors. Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
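The claimed mean-variance behavior is easy to check numerically. Here is a small simulation, with log(y+1) chosen as an example transformation satisfying g(0) = 0; the Poisson means are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(3)

# For Poisson counts with small means, the variance of log(y + 1)
# stays roughly proportional to the mean instead of stabilizing,
# consistent with the theoretical result described above.
for mu in (0.1, 0.5, 1.0, 5.0, 20.0):
    y = rng.poisson(mu, 200_000)
    g = np.log(y + 1.0)
    print(f"mu={mu:5.1f}  var(g)={g.var():.4f}  var(g)/mu={g.var()/mu:.3f}")
```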
Finger gnosis predicts a unique but small part of variance in initial arithmetic performance.
Wasner, Mirjam; Nuerk, Hans-Christoph; Martignon, Laura; Roesch, Stephanie; Moeller, Korbinian
2016-06-01
Recent studies indicated that finger gnosis (i.e., the ability to perceive and differentiate one's own fingers) is associated reliably with basic numerical competencies. In this study, we aimed at examining whether finger gnosis is also a unique predictor for initial arithmetic competencies at the beginning of first grade-and thus before formal math instruction starts. Therefore, we controlled for influences of domain-specific numerical precursor competencies, domain-general cognitive ability, and natural variables such as gender and age. Results from 321 German first-graders revealed that finger gnosis indeed predicted a unique and relevant but nevertheless only small part of the variance in initial arithmetic performance (∼1%-2%) as compared with influences of general cognitive ability and numerical precursor competencies. Taken together, these results substantiated the notion of a unique association between finger gnosis and arithmetic and further corroborate the theoretical idea of finger-based representations contributing to numerical cognition. However, the only small part of variance explained by finger gnosis seems to limit its relevance for diagnostic purposes. Copyright © 2016. Published by Elsevier Inc.
Time-Frequency Variability of Kuroshio Meanders in Tokara Strait
NASA Astrophysics Data System (ADS)
Nakamura, H.; Yamashiro, T.; Nishina, A.; Ichikawa, H.
2006-12-01
The Kuroshio path in the northern Okinawa Trough, Japan, located between the continental slope and Tokara Strait, exhibits meandering motions with largest displacements in the East China Sea; these motions have dominant periods in the broad range of 30-90 days. Understanding the dynamic nature of such meanders is crucial to predicting small and large meanders of the Kuroshio path off the south coast of Japan. Previous numerical simulations suggest that the Kuroshio path meanders in the northern Okinawa Trough become nonstationary in variance because of changes in background states of the Kuroshio in the northern Okinawa Trough, but a detailed analysis based on observed data has yet to be performed. The purpose of the present study is to provide a detailed description of the time-frequency variability of Kuroshio path meanders observed in Tokara Strait. Three Kuroshio indicators were subjected to wavelet analysis for the period 1984-2004: the Kuroshio Position Index (KPI) in Tokara Strait, Kuroshio Volume Transport (KVT) in Tokara Strait, and the basal current velocity of the Kuroshio on the continental slope in the northern Okinawa Trough. The 30-90 day variance of the KPI shows a season-fixed nature, with larger amplitudes in the period December-July. The amplitude of the variance in this phenomenon is also modulated by interannual variations, with small variance recorded during 1989-1992, large variance during 1993-1998, and a return to small variance from 1999-2003. This interannual variation is positively correlated with that of the KVT. The largest variance of the KPI during February-April precedes the largest volume transport in April-May by about 1 month, suggesting that eddy vorticity flux strengthens the mean current field. Previous numerical simulations reproduce the recirculation gyre as a cyclonic eddy in the area between the continental slope and Tokara Strait; this gyre is analogous to the northern recirculation gyre associated with the eastward jet. On the basis of data from a moored current-meter situated on the continental slope, the genesis of the 30-90 day meanders within Tokara Strait is ascribed to nonlinear energy transfer from 8-25 day meanders on the continental slope.
NASA Technical Reports Server (NTRS)
Chhikara, R. S.; Perry, C. R., Jr. (Principal Investigator)
1980-01-01
The problem of determining the stratum variances required for an optimum sample allocation for remotely sensed crop surveys is investigated, with emphasis on an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using the existing and easily available information of historical statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to estimating stratum variances for wheat in the U.S. Great Plains and is evaluated based on the numerical results obtained. It is shown that the proposed technique is viable and performs satisfactorily with the use of a conservative value (smaller than the expected value) for the field size and with the use of crop statistics from the small political division level.
Lin, J. Z.; Ritland, K.
1997-01-01
Theoretical predictions about the evolution of selfing depend on the genetic architecture of loci controlling selfing (monogenic vs. polygenic determination, large vs. small effect of alleles, dominance vs. recessiveness), and studies of such architecture are lacking. We inferred the genetic basis of mating system differences between the outbreeding Mimulus guttatus and the inbreeding M. platycalyx by quantitative trait locus (QTL) mapping using random amplified polymorphic DNA and isozyme markers. One to three QTL were detected for each of five mating system characters, and each QTL explained 7.6-28.6% of the phenotypic variance. Taken together, QTL accounted for up to 38% of the variation in mating system characters, and a large proportion of variation was unaccounted for. Inferred QTL often affected more than one trait, contributing to the genetic correlation between those traits. These results are consistent with the hypothesis that quantitative variation in plant mating system characters is primarily controlled by loci with small effect. PMID:9215912
Uechi, Ken; Asakura, Keiko; Masayasu, Shizuko; Sasaki, Satoshi
2017-06-01
Salt intake in Japan remains high; therefore, exploring within-country variation in salt intake and its cause is an important step in the establishment of salt reduction strategies. However, no nationwide evaluation of this variation has been conducted by urinalysis. We aimed to clarify whether within-country variation in salt intake exists in Japan after adjusting for individual characteristics. Healthy men (n=1027) and women (n=1046) aged 20-69 years were recruited from all 47 prefectures of Japan. Twenty-four-hour sodium excretion was estimated using three spot urine samples collected on three nonconsecutive days. The study area was categorized into 12 regions defined by the National Health and Nutrition Survey Japan. Within-country variation in sodium excretion was estimated as a population (region)-level variance using a multilevel model with random intercepts, with adjustment for individual biological, socioeconomic and dietary characteristics. Estimated 24 h sodium excretion was 204.8 mmol per day in men and 155.7 mmol per day in women. Sodium excretion was high in the Northeastern region. However, population-level variance was extremely small after adjusting for individual characteristics (0.8 and 2% of overall variance in men and women, respectively) compared with individual-level variance (99.2 and 98% of overall variance in men and women, respectively). Among individual characteristics, greater body mass index, living with a spouse and high miso-soup intake were associated with high sodium excretion in both sexes. Within-country variation in salt intake in Japan was extremely small compared with individual-level variation. Salt reduction strategies for Japan should be comprehensive and should not address the small within-country differences in intake.
Systems, Subjects, Sessions: To What Extent Do These Factors Influence EEG Data?
Melnik, Andrew; Legkov, Petr; Izdebski, Krzysztof; Kärcher, Silke M.; Hairston, W. David; Ferris, Daniel P.; König, Peter
2017-01-01
Lab-based electroencephalography (EEG) techniques have matured over decades of research and can produce high-quality scientific data. It is often assumed that the specific choice of EEG system has limited impact on the data and does not add variance to the results. However, many low cost and mobile EEG systems are now available, and there is some doubt as to how EEG data vary across these newer systems. We sought to determine how variance across systems compares to variance across subjects or repeated sessions. We tested four EEG systems: two standard research-grade systems, one system designed for mobile use with dry electrodes, and an affordable mobile system with a lower channel count. We recorded four subjects three times with each of the four EEG systems. This setup allowed us to assess the influence of all three factors on the variance of the data. Subjects performed a battery of six short standard EEG paradigms based on event-related potentials (ERPs) and steady-state visually evoked potentials (SSVEP). Results demonstrated that subjects account for 32% of the variance, systems for 9% of the variance, and repeated sessions for each subject-system combination for 1% of the variance. In most lab-based EEG research, the number of subjects per study typically ranges from 10 to 20, and uncertainty in estimates of the mean (like the ERP) will improve with the square root of the number of subjects. As a result, the variance due to EEG system (9%) is of the same order of magnitude as the variance due to subjects (32%/sqrt(16) = 8%) with a pool of 16 subjects. The two standard research-grade EEG systems had no significantly different means from each other across all paradigms. However, the two other EEG systems demonstrated different mean values from one or both of the two standard research-grade EEG systems in at least half of the paradigms. In addition to providing specific estimates of the variability across EEG systems, subjects, and repeated sessions, we also propose a benchmark to evaluate new mobile EEG systems by means of ERP responses. PMID:28424600
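The comparison in the abstract amounts to simple arithmetic, sketched below; the variance shares are taken directly from the abstract, and the sqrt(n) scaling is the usual standard-error argument the authors invoke.

```python
import math

# Back-of-envelope check from the abstract: with n subjects, the
# subject-related uncertainty in a group mean shrinks by sqrt(n),
# so it can be compared directly with the (fixed) system effect.
subject_share, system_share = 0.32, 0.09
for n in (4, 16, 64):
    effective = subject_share / math.sqrt(n)
    print(f"n={n:3d}  subject term ~{effective:.2%}  vs system ~{system_share:.0%}")
```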
ERIC Educational Resources Information Center
Thompson, Bruce
The relationship between analysis of variance (ANOVA) methods and their analogs (analysis of covariance and multiple analyses of variance and covariance--collectively referred to as OVA methods) and the more general analytic case is explored. A small heuristic data set is used, with a hypothetical sample of 20 subjects, randomly assigned to five…
Portfolio of automated trading systems: complexity and learning set size issues.
Raudys, Sarunas
2013-03-01
In this paper, we consider using profit/loss histories of multiple automated trading systems (ATSs) as N input variables in portfolio management. By means of multivariate statistical analysis and simulation studies, we analyze the influences of sample size (L) and input dimensionality on the accuracy of determining the portfolio weights. We find that degradation in portfolio performance due to inexact estimation of N means and N(N - 1)/2 correlations is proportional to N/L; however, estimation of N variances does not worsen the result. To reduce unhelpful sample size/dimensionality effects, we perform a clustering of N time series and split them into a small number of blocks. Each block is composed of mutually correlated ATSs. It generates an expert trading agent based on a nontrainable 1/N portfolio rule. To increase the diversity of the expert agents, we use training sets of different lengths for clustering. In the output of the portfolio management system, the regularized mean-variance framework-based fusion agent is developed in each walk-forward step of an out-of-sample portfolio validation experiment. Experiments with the real financial data (2003-2012) confirm the effectiveness of the suggested approach.
Hand synergies during reach-to-grasp.
Mason, C R; Gomez, J E; Ebner, T J
2001-12-01
An emerging viewpoint is that the CNS uses synergies to simplify the control of the hand. Previous work has shown that static hand postures for mimed grasps can be described by a few principal components in which the higher order components explained only a small fraction of the variance yet provided meaningful information. Extending that earlier work, this study addressed whether the entire act of grasp can be described by a small number of postural synergies and whether these synergies are similar for different grasps. Five right-handed adults performed five types of reach-to-grasps including power grasp, power grasp with a lift, precision grasp, and mimed power grasp and mimed precision grasp of 16 different objects. The object shapes were cones, cylinders, and spindles, systematically varied in size to produce a large range of finger joint angle combinations. Three-dimensional reconstructions of 21 positions on the hand and wrist throughout the reach-to-grasp were obtained using a four-camera video system. Singular value decomposition on the temporal sequence of the marker positions was used to identify the common patterns ("eigenpostures") across the 16 objects for each task and their weightings as a function of time. The first eigenposture explained an average of 97.3 +/- 0.89% (mean +/- SD) of the variance of the hand shape, and the second another 1.9 +/- 0.85%. The first eigenposture was characterized by an open hand configuration that opens and closes during reach. The second eigenposture contributed to the control of the thumb and long fingers, particularly in the opening of the hand during the reach and the closing in preparation for object grasp. The eigenpostures and their temporal evolutions were similar across subjects and grasps. The higher order eigenpostures, although explaining only small amounts of the variance, contributed to the movements of the fingers and thumb. These findings suggest that much of reach-to-grasp is effected using a base posture with refinements in finger and thumb positions added in time to yield unique hand shapes.
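A minimal sketch of the decomposition step on synthetic data follows; the dimensions mirror the 21 markers × 3 coordinates mentioned above, while everything else (the synthetic open/close drive, the noise level) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stack hand-shape samples (time points x marker coordinates) and take
# an SVD: the right singular vectors are the "eigenpostures" and the
# squared singular values give the fraction of variance each explains.
T, M = 500, 21 * 3                   # time samples x marker coordinates
base = rng.normal(size=M)            # a dominant open/close posture
drive = np.sin(np.linspace(0, 2 * np.pi, T))[:, None]
X = drive * base + 0.05 * rng.normal(size=(T, M))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained by first 3 eigenpostures:", explained[:3])
# Vt[0] is the first eigenposture; U[:, 0] * s[0] is its time course.
```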
ERIC Educational Resources Information Center
Nowell, Amy; Hedges, Larry V.
1998-01-01
Uses evidence from seven surveys of the U.S. 12th-grade population and the National Assessment of Educational Progress to show that gender differences in mean and variance in academic achievement are small from 1960 to 1994 but that differences in extreme scores are often substantial. (SLD)
Special Effects: Antenna Wetting, Short Distance Diversity and Depolarization
NASA Technical Reports Server (NTRS)
Acosta, Roberto J.
2000-01-01
The Advanced Communication Technology Satellite (ACTS) communications system operates in the Ka frequency band. ACTS uses multiple hopping narrow beams and very small aperture terminal (VSAT) technology to establish a system availability of 99.5% for bit-error rates of 5 × 10⁻⁷ or better over the continental United States. In order to maintain this minimum system availability in all US rain zones, ACTS uses an adaptive rain fade compensation protocol to reduce the impact of signal attenuation resulting from propagation effects. The purpose of this paper is to present the results of system and sub-system characterizations considering the statistical effects of system variances due to antenna wetting and depolarization effects. In addition, the availability enhancements obtained using short distance diversity in a sub-tropical rain zone are investigated.
Economic analysis of small wind-energy conversion systems
NASA Astrophysics Data System (ADS)
Haack, B. N.
1982-05-01
A computer simulation was developed for evaluating the economics of small wind energy conversion systems (SWECS). Input parameters consisted of the initial capital investment, maintenance and operating costs, the cost of electricity from other sources, and the yield of electricity. Capital costs comprised the generator, tower, the necessity for an inverter and/or storage batteries, and installation, in addition to interest on loans. As an example, wind data recorded every three hours for one year in Detroit, MI, were employed, with a 0.16 power-law coefficient used to extrapolate wind speeds up to hub height, along with variances over 10 years of use. A maximum return on investment was found to reside in using all the energy produced on site, rather than selling power to the utility. It is concluded that, based on a microeconomic analysis, SWECS are economically viable at present only where electric rates are inordinately high, such as in remote regions or on islands.
Refractory pulse counting processes in stochastic neural computers.
McNeill, Dean K; Card, Howard C
2005-03-01
This letter quantitatively investigates the effect of a temporary refractory period, or dead time, on the ability of a stochastic Bernoulli processor to record subsequent pulse events following the arrival of a pulse. These effects can arise either in the input detectors of a stochastic neural network or in subsequent processing. A transient period is observed, which increases with both the dead time and the Bernoulli probability of the dead-time-free system, during which the system reaches equilibrium. Unless the Bernoulli probability is small compared to the inverse of the dead time, the mean and variance of the pulse count distributions are both appreciably reduced.
Wonnapinij, Passorn; Chinnery, Patrick F.; Samuels, David C.
2010-01-01
In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference. PMID:20362273
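The abstract does not reproduce the authors' expression, so the following is a generic large-sample estimator of the standard error of a sample variance, offered as a plausible stand-in; the normal-theory special case is noted in the comments.

```python
import numpy as np

def se_of_variance(x):
    """Estimate the standard error of the sample variance.

    Uses the general large-sample formula
        Var(s^2) ~ (mu4 - sigma^4 * (n - 3) / (n - 1)) / n,
    which reduces to the familiar normal-theory value
        SE(s^2) = s^2 * sqrt(2 / (n - 1))
    when the data are Gaussian. This is a generic estimator, not
    necessarily the exact expression derived in the paper.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    s2 = x.var(ddof=1)
    mu4 = np.mean((x - x.mean()) ** 4)
    return np.sqrt((mu4 - s2**2 * (n - 3) / (n - 1)) / n)

rng = np.random.default_rng(5)
for n in (10, 20, 50, 200):
    x = rng.normal(0, 1, n)
    print(f"n={n:4d}  s2={x.var(ddof=1):.3f}  SE(s2)={se_of_variance(x):.3f}")
```

The error bars shrink slowly with n, which is consistent with the paper's conclusion that variance comparisons based on fewer than about 20 measurements are unreliable.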
ERIC Educational Resources Information Center
Neel, John H.; Stallings, William M.
An influential statistics text recommends a Levene test for homogeneity of variance. A recent note suggests that Levene's test is upwardly biased for small samples. Another report shows inflated alpha estimates and low power. Neither study utilized more than two sample sizes. This Monte Carlo study involved sampling from a normal population for…
Performance of chromatographic systems to model soil-water sorption.
Hidalgo-Rodríguez, Marta; Fuguet, Elisabet; Ràfols, Clara; Rosés, Martí
2012-08-24
A systematic approach for evaluating the goodness of chromatographic systems to model the sorption of neutral organic compounds by soil from water is presented in this work. It is based on the examination of the three sources of error that determine the overall variance obtained when soil-water partition coefficients are correlated against chromatographic retention factors: the variance of the soil-water sorption data, the variance of the chromatographic data, and the variance attributed to the dissimilarity between the two systems. These contributions of variance are easily predicted through the characterization of the systems by the solvation parameter model. According to this method, several chromatographic systems besides the reference octanol-water partition system have been selected to test their performance in the emulation of soil-water sorption. The results from the experimental correlations agree with the predicted variances. The high-performance liquid chromatography system based on an immobilized artificial membrane and the micellar electrokinetic chromatography systems of sodium dodecylsulfate and sodium taurocholate provide the most precise correlation models. They have shown to predict well soil-water sorption coefficients of several tested herbicides. Octanol-water partitions and high-performance liquid chromatography measurements using C18 columns are less suited for the estimation of soil-water partition coefficients. Copyright © 2012 Elsevier B.V. All rights reserved.
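Assuming the three error sources are independent, the decomposition described above can be summarized compactly (our notation, not the authors' formula):

```latex
\sigma^{2}_{\mathrm{overall}} \approx \sigma^{2}_{\mathrm{sorption}} + \sigma^{2}_{\mathrm{chromatography}} + \sigma^{2}_{\mathrm{dissimilarity}}
```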
The spatial structure and temporal synchrony of water quality in stream networks
NASA Astrophysics Data System (ADS)
Abbott, Benjamin; Gruau, Gerard; Zarneske, Jay; Barbe, Lou; Gu, Sen; Kolbe, Tamara; Thomas, Zahra; Jaffrezic, Anne; Moatar, Florentina; Pinay, Gilles
2017-04-01
To feed nine billion people in 2050 while maintaining viable aquatic ecosystems will require an understanding of nutrient pollution dynamics throughout stream networks. Most regulatory frameworks, such as the European Water Framework Directive and U.S. Clean Water Act, focus on nutrient concentrations in medium to large rivers. This strategy is appealing because large rivers integrate many small catchments and total nutrient loads drive eutrophication in estuarine and oceanic ecosystems. However, there is growing evidence that to understand and reduce downstream nutrient fluxes we need to look upstream. While headwater streams receive the bulk of nutrients in river networks, the relationship between land cover and nutrient flux often breaks down for small catchments, representing an important ecological unknown since 90% of global stream length occurs in catchments smaller than 15 km². Though continuous monitoring of thousands of small streams is not feasible, what if we could learn what we needed about where and when to implement monitoring and conservation efforts with periodic sampling of headwater catchments? To address this question we performed repeat synoptic sampling of 56 nested catchments ranging in size from 1 to 370 km² in western France. Spatial variability in carbon and nutrient concentrations decreased non-linearly as catchment size increased, with thresholds in variance for organic carbon and nutrients occurring between 36 and 68 km². While it is widely held that temporal variance is higher in smaller streams, we observed consistent temporal variance across spatial scales, and the ranking of catchments based on water quality showed strong synchrony in the water chemistry response to seasonal variation and hydrological events. We used these observations to develop two simple management frameworks. The subcatchment leverage concept proposes that mitigation and restoration efforts are more likely to succeed when implemented at spatial scales expressing high variability in the target parameter, which indicates decreased system inertia and demonstrates that alternative system responses are possible. The subcatchment synchrony concept suggests that periodic sampling of headwaters can provide valuable information about pollutant sources and inherent resilience in subcatchments, and that if agricultural activity were redistributed based on this assessment of catchment vulnerability to nutrient loading, water quality could be improved while maintaining crop yields.
Effect of correlated observation error on parameters, predictions, and uncertainty
Tiedeman, Claire; Green, Christopher T.
2013-01-01
Correlations among observation errors are typically omitted when calculating observation weights for model calibration by inverse methods. We explore the effects of omitting these correlations on estimates of parameters, predictions, and uncertainties. First, we develop a new analytical expression for the difference in parameter variance estimated with and without error correlations for a simple one-parameter two-observation inverse model. Results indicate that omitting error correlations from both the weight matrix and the variance calculation can either increase or decrease the parameter variance, depending on the values of error correlation (ρ) and the ratio of dimensionless scaled sensitivities (rdss). For small ρ, the difference in variance is always small, but for large ρ, the difference varies widely depending on the sign and magnitude of rdss. Next, we consider a groundwater reactive transport model of denitrification with four parameters and correlated geochemical observation errors that are computed by an error-propagation approach that is new for hydrogeologic studies. We compare parameter estimates, predictions, and uncertainties obtained with and without the error correlations. Omitting the correlations modestly to substantially changes parameter estimates, and causes both increases and decreases of parameter variances, consistent with the analytical expression. Differences in predictions for the models calibrated with and without error correlations can be greater than parameter differences when both are considered relative to their respective confidence intervals. These results indicate that including observation error correlations in weighting for nonlinear regression can have important effects on parameter estimates, predictions, and their respective uncertainties.
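A numerical sketch of the one-parameter, two-observation comparison in Python follows; the design values and error parameters are arbitrary, and the diagonal-weight variance is the naive value obtained when correlations are omitted, not the authors' full analytical expression.

```python
import numpy as np

def variances(x1, x2, s1, s2, rho):
    """One-parameter, two-observation model y = x * theta + e.

    Returns (GLS variance using the full error covariance,
             variance computed as if the errors were uncorrelated).
    Illustrative only; the paper derives the analytical difference.
    """
    x = np.array([x1, x2])
    S = np.array([[s1**2, rho * s1 * s2],
                  [rho * s1 * s2, s2**2]])
    v_full = 1.0 / (x @ np.linalg.inv(S) @ x)
    W = np.diag(1.0 / np.diag(S))            # correlations omitted
    v_diag = 1.0 / (x @ W @ x)
    return v_full, v_diag

for rho in (0.1, 0.5, 0.9):
    for x2 in (0.5, 1.0, 2.0):               # varies the sensitivity ratio
        vf, vd = variances(1.0, x2, 1.0, 1.0, rho)
        print(f"rho={rho:.1f} rdss={x2:3.1f}  full={vf:.3f} diag={vd:.3f}")
```

Consistent with the abstract, the difference between the two variances is small for small rho and varies widely, in either direction, for large rho depending on the sensitivity ratio.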
NASA Astrophysics Data System (ADS)
Amiri-Simkooei, A. R.
2018-01-01
Three-dimensional (3D) coordinate transformations, generally consisting of origin shifts, axes rotations, scale changes, and skew parameters, are widely used in many geomatics applications. Although in some geodetic applications simplified transformation models are used based on the assumption of small transformation parameters, in other fields of applications such parameters are indeed large. The algorithms of two recent papers on the weighted total least-squares (WTLS) problem are used for the 3D coordinate transformation. The methodology can be applied to the case when the transformation parameters are generally large of which no approximate values of the parameters are required. Direct linearization of the rotation and scale parameters is thus not required. The WTLS formulation is employed to take into consideration errors in both the start and target systems on the estimation of the transformation parameters. Two of the well-known 3D transformation methods, namely affine (12, 9, and 8 parameters) and similarity (7 and 6 parameters) transformations, can be handled using the WTLS theory subject to hard constraints. Because the method can be formulated by the standard least-squares theory with constraints, the covariance matrix of the transformation parameters can directly be provided. The above characteristics of the 3D coordinate transformation are implemented in the presence of different variance components, which are estimated using the least squares variance component estimation. In particular, the estimability of the variance components is investigated. The efficacy of the proposed formulation is verified on two real data sets.
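For orientation, here is the classical closed-form (Procrustes/Umeyama-style) 7-parameter similarity solution, which assumes errors only in the target coordinates. It is a baseline sketch, not the WTLS algorithm evaluated in the paper, which also handles errors in the start system and general weight matrices.

```python
import numpy as np

def similarity_transform(src, dst):
    """Closed-form 7-parameter (Helmert-type) similarity transform.

    src, dst: (n, 3) arrays of matched points.
    Returns scale s, rotation R, translation t with dst ~ s * R @ src + t.
    """
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(B.T @ A / len(src))
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:            # guard against reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(sig) @ D) / (A**2).sum() * len(src)
    t = mu_d - s * R @ mu_s
    return s, R, t

# Round-trip check on synthetic points.
rng = np.random.default_rng(6)
src = rng.normal(size=(10, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
dst = 1.3 * src @ Rz.T + np.array([10.0, -5.0, 2.0])
s, R, t = similarity_transform(src, dst)
print("scale:", s)                                        # ~1.3
print("residual:", np.abs(s * src @ R.T + t - dst).max())  # ~0
```

Note that, unlike this closed-form baseline, the WTLS approach described above requires no approximate parameter values and no linearization of the rotation and scale parameters.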
MAP Reconstruction for Fourier Rebinned TOF-PET Data
Bai, Bing; Lin, Yanguang; Zhu, Wentao; Ren, Ran; Li, Quanzheng; Dahlbom, Magnus; DiFilippo, Frank; Leahy, Richard M.
2014-01-01
Time-of-flight (TOF) information improves signal to noise ratio in Positron Emission Tomography (PET). Computation cost in processing TOF-PET sinograms is substantially higher than for nonTOF data because the data in each line of response is divided among multiple time of flight bins. This additional cost has motivated research into methods for rebinning TOF data into lower dimensional representations that exploit redundancies inherent in TOF data. We have previously developed approximate Fourier methods that rebin TOF data into either 3D nonTOF or 2D nonTOF formats. We refer to these methods respectively as FORET-3D and FORET-2D. Here we describe maximum a posteriori (MAP) estimators for use with FORET rebinned data. We first derive approximate expressions for the variance of the rebinned data. We then use these results to rescale the data so that the variance and mean are approximately equal allowing us to use the Poisson likelihood model for MAP reconstruction. MAP reconstruction from these rebinned data uses a system matrix in which the detector response model accounts for the effects of rebinning. Using these methods we compare performance of FORET-2D and 3D with TOF and nonTOF reconstructions using phantom and clinical data. Our phantom results show a small loss in contrast recovery at matched noise levels using FORET compared to reconstruction from the original TOF data. Clinical examples show FORET images that are qualitatively similar to those obtained from the original TOF-PET data but a small increase in variance at matched resolution. Reconstruction time is reduced by a factor of 5 and 30 using FORET3D+MAP and FORET2D+MAP respectively compared to 3D TOF MAP, which makes these methods attractive for clinical applications. PMID:24504374
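The rescaling step described above can be sketched in a few lines: if y has mean m and variance v, then (m/v)·y has mean and variance both equal to m²/v, making a Poisson likelihood a reasonable approximation. The helper below assumes the mean and variance estimates are supplied externally; in the paper they would come from the approximate variance expressions derived for the FORET-rebinned data.

```python
import numpy as np

def rescale_for_poisson(y, mean_est, var_est, eps=1e-8):
    """Rescale rebinned data so that mean ~ variance (Poisson-like)."""
    scale = mean_est / np.maximum(var_est, eps)
    return scale * y, scale

# Toy check: Gaussian-ish data with variance != mean.
rng = np.random.default_rng(7)
m, v = 50.0, 20.0
y = rng.normal(m, np.sqrt(v), 100_000)
y2, _ = rescale_for_poisson(y, m, v)
print("rescaled mean:", y2.mean(), " rescaled var:", y2.var())  # both ~125
```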
Colbert-Getz, Jorie M; Tackett, Sean; Wright, Scott M; Shochet, Robert S
2016-08-28
This study was conducted to characterize the relative strength of associations of learning environment perception with academic performance and with personal growth. In 2012-2014, second and third year students at Johns Hopkins University School of Medicine completed a learning environment survey and personal growth scale. Hierarchical linear regression analysis was employed to determine if the proportion of variance in learning environment scores accounted for by personal growth was significantly larger than the proportion accounted for by academic performance (course/clerkship grades). The proportion of variance in learning environment scores accounted for by personal growth was larger than the proportion accounted for by academic performance in year 2 [R²Δ of 0.09, F(1,175) = 14.99, p < .001] and year 3 [R²Δ of 0.28, F(1,169) = 76.80, p < .001]. Learning environment scores shared a small amount of variance with academic performance in years 2 and 3. The amount of variance shared between learning environment scores and personal growth was small in year 2 and large in year 3. Since supportive learning environments are essential for medical education, future work must determine if enhancing personal growth prior to and during the clerkship year will increase learning environment perception.
Ward, P. J.
1990-01-01
Recent developments have related quantitative trait expression to metabolic flux. The present paper investigates some implications of this for statistical aspects of polygenic inheritance. Expressions are derived for the within-sibship genetic mean and genetic variance of metabolic flux given a pair of parental, diploid, n-locus genotypes. These are exact and hold for arbitrary numbers of gene loci, arbitrary allelic values at each locus, and arbitrary recombination fractions between adjacent gene loci. The within-sibship genetic variance is seen to be simply a measure of parental heterozygosity plus a measure of the degree of linkage coupling within the parental genotypes. Approximations are given for the within-sibship phenotypic mean and variance of metabolic flux. These results are applied to the problem of attaining adequate statistical power in a test of association between allozymic variation and inter-individual variation in metabolic flux. Simulations indicate that statistical power can be greatly increased by augmenting the data with predictions and observations on progeny statistics in relation to parental allozyme genotypes. Adequate power may thus be attainable at small sample sizes, and when allozymic variation is scored at only a small fraction of the total set of loci whose catalytic products determine the flux. PMID:2379825
Squeezing and its graphical representations in the anharmonic oscillator model
NASA Astrophysics Data System (ADS)
Tanaś, R.; Miranowicz, A.; Kielich, S.
1991-04-01
The problem of squeezing and its graphical representations in the anharmonic oscillator model is considered. Explicit formulas for squeezing, principal squeezing, and the quasiprobability distribution (QPD) function are given and illustrated graphically. Approximate analytical formulas for the variances, extremal variances, and QPD are obtained for the case of small nonlinearities and large numbers of photons. The possibility of almost perfect squeezing in the model is demonstrated and its graphical representations in the form of variance lemniscates and QPD contours are plotted. For large numbers of photons the crescent shape of the QPD contours is hardly visible and quite regular ellipses are obtained.
An internal pilot design for prospective cancer screening trials with unknown disease prevalence.
Brinton, John T; Ringham, Brandy M; Glueck, Deborah H
2015-10-13
For studies that compare the diagnostic accuracy of two screening tests, the sample size depends on the prevalence of disease in the study population and on the variance of the outcome. Both parameters may be unknown during the design stage, which makes finding an accurate sample size difficult. To solve this problem, we propose adapting an internal pilot design. In this adapted design, researchers will accrue some percentage of the planned sample size, then estimate both the disease prevalence and the variances of the screening tests. The updated estimates of the disease prevalence and variance are used to conduct a more accurate power and sample size calculation. We demonstrate that in large samples, the adapted internal pilot design produces no Type I error inflation. For small samples (N less than 50), we introduce a novel adjustment of the critical value to control the Type I error rate. We apply the method to two proposed prospective cancer screening studies: 1) a small oral cancer screening study in individuals with Fanconi anemia and 2) a large oral cancer screening trial. Conducting an internal pilot study without adjusting the critical value can cause Type I error rate inflation in small samples, but not in large samples. An internal pilot approach usually achieves the goal power and, for most studies with sample sizes greater than 50, requires no Type I error correction.
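A minimal sketch of the internal pilot idea, under assumed design numbers (the effect size, target power, and two-sample z-approximation are illustrative choices, not the authors' exact procedure; the small-sample critical value adjustment is omitted):

    import numpy as np
    from scipy import stats

    def required_n(sigma, delta, alpha=0.05, power=0.9):
        # Per-group n for a two-sample z-approximation with effect size delta.
        za = stats.norm.ppf(1 - alpha / 2)
        zb = stats.norm.ppf(power)
        return int(np.ceil(2 * (sigma * (za + zb) / delta) ** 2))

    rng = np.random.default_rng(1)

    delta = 0.5          # assumed clinically meaningful difference
    sigma_planned = 1.0  # design-stage guess at the outcome SD
    n_planned = required_n(sigma_planned, delta)

    # Internal pilot: accrue half of the planned sample, re-estimate the SD.
    pilot = rng.normal(0.0, 1.4, size=n_planned // 2)  # true SD was underestimated
    sigma_hat = pilot.std(ddof=1)

    n_final = max(n_planned, required_n(sigma_hat, delta))
    print(n_planned, n_final)  # re-estimation inflates n to preserve power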
Leadership and Organizational Learning: Accounting for Variances in Small-Size Business Enterprises
ERIC Educational Resources Information Center
Graham, Carroll M.; Nafukho, Fredrick M.
2007-01-01
This study's primary purpose was to determine the relationship between leadership and the dependent variable organizational learning readiness at three locations of a small-size business enterprise in the Mid-Western United States. Surveys were acquired within an exploratory correlational research design and the results indicated leadership…
VARIANCE ANISOTROPY IN KINETIC PLASMAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parashar, Tulasi N.; Matthaeus, William H.; Oughton, Sean
Solar wind fluctuations admit well-documented anisotropies of the variance matrix, or polarization, related to the mean magnetic field direction. Typically, one finds a ratio of perpendicular variance to parallel variance of the order of 9:1 for the magnetic field. Here we study the question of whether a kinetic plasma spontaneously generates and sustains parallel variances when initiated with only perpendicular variance. We find that parallel variance grows and saturates at about 5% of the perpendicular variance in a few nonlinear times irrespective of the Reynolds number. For sufficiently large systems (Reynolds numbers) the variance approaches values consistent with the solar wind observations.
Heritability of female extra-pair paternity rate in song sparrows (Melospiza melodia)
Reid, Jane M.; Arcese, Peter; Sardell, Rebecca J.; Keller, Lukas F.
2011-01-01
The forces driving the evolution of extra-pair reproduction in socially monogamous animals remain widely debated and unresolved. One key hypothesis is that female extra-pair reproduction evolves through indirect genetic benefits, reflecting increased additive genetic value of extra-pair offspring. Such evolution requires that a female's propensity to produce offspring that are sired by an extra-pair male is heritable. However, additive genetic variance and heritability in female extra-pair paternity (EPP) rate have not been quantified, precluding accurate estimation of the force of indirect selection. Sixteen years of comprehensive paternity and pedigree data from socially monogamous but genetically polygynandrous song sparrows (Melospiza melodia) showed significant additive genetic variance and heritability in the proportion of a female's offspring that was sired by an extra-pair male, constituting major components of the genetic architecture required for extra-pair reproduction to evolve through indirect additive genetic benefits. However, estimated heritabilities were moderately small (0.12 and 0.18 on the observed and underlying latent scales, respectively). The force of selection on extra-pair reproduction through indirect additive genetic benefits may consequently be relatively weak. However, the additive genetic variance and non-zero heritability observed in female EPP rate allow for multiple further genetic mechanisms to drive and constrain mating system evolution. PMID:20980302
Crow, James F
2008-12-01
Although molecular methods, such as QTL mapping, have revealed a number of loci with large effects, it is still likely that the bulk of quantitative variability is due to multiple factors, each with small effect. Typically, these have a large additive component. Conventional wisdom argues that selection, natural or artificial, uses up additive variance and thus depletes its supply. Over time, the variance should be reduced, and at equilibrium be near zero. This is especially expected for fitness and traits highly correlated with it. Yet, populations typically have a great deal of additive variance, and do not seem to run out of genetic variability even after many generations of directional selection. Long-term selection experiments show that populations continue to retain seemingly undiminished additive variance despite large changes in the mean value. I propose that there are several reasons for this. (i) The environment is continually changing so that what was formerly most fit no longer is. (ii) There is an input of genetic variance from mutation, and sometimes from migration. (iii) As intermediate-frequency alleles increase in frequency towards one, producing less variance (as p → 1, p(1 − p) → 0), others that were originally near zero become more common and increase the variance. Thus, a roughly constant variance is maintained. (iv) There is always selection for fitness and for characters closely related to it. To the extent that the trait is heritable, later generations inherit a disproportionate number of genes acting additively on the trait, thus increasing genetic variance. For these reasons a selected population retains its ability to evolve. Of course, genes with large effect are also important. Conspicuous examples are the small number of loci that changed teosinte to maize, and major phylogenetic changes in the animal kingdom. The relative importance of these along with duplications, chromosome rearrangements, horizontal transmission and polyploidy is yet to be determined. It is likely that only a case-by-case analysis will provide the answers. Despite the difficulties that complex interactions cause for evolution in Mendelian populations, such populations nevertheless evolve very well. Long-lasting species must have evolved mechanisms for coping with such problems. Since such difficulties do not arise in asexual populations, a comparison of epistatic patterns in closely related sexual and asexual species might provide some important insights.
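The frequency dependence invoked in point (iii) is the standard single-locus result that per-locus additive variance is proportional to p(1 − p); a small sketch (the equal effect size is an illustrative assumption):

    # Per-locus additive genetic variance at a biallelic locus: 2 p (1 - p) a**2.
    def additive_variance(p, a=1.0):
        return 2 * p * (1 - p) * a**2

    # An allele drifting toward fixation contributes ever less variance...
    print([round(additive_variance(p), 4) for p in (0.5, 0.9, 0.99)])
    # ...while a formerly rare allele rising in frequency contributes more,
    # so the total can stay roughly constant.
    print([round(additive_variance(p), 4) for p in (0.01, 0.1, 0.5)])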
Modeling the subfilter scalar variance for large eddy simulation in forced isotropic turbulence
NASA Astrophysics Data System (ADS)
Cheminet, Adam; Blanquart, Guillaume
2011-11-01
Static and dynamic models for the subfilter scalar variance in homogeneous isotropic turbulence are investigated using direct numerical simulations (DNS) of a linearly forced passive scalar field. First, we introduce a new scalar forcing technique conditioned only on the scalar field which allows the fluctuating scalar field to reach a statistically stationary state. Statistical properties, including second and third statistical moments, spectra, and probability density functions of the scalar field have been analyzed. Using this technique, we performed constant density and variable density DNS of scalar mixing in isotropic turbulence. The results are used in an a priori study of scalar variance models. Emphasis is placed on further studying the dynamic model introduced by G. Balarac, H. Pitsch and V. Raman [Phys. Fluids 20, (2008)]. Scalar variance models based on Bedford and Yeo's expansion are accurate for small filter widths, but errors arise in the inertial subrange. Results suggest that a constant coefficient computed from an assumed Kolmogorov spectrum is often sufficient to predict the subfilter scalar variance.
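For orientation, the exact subfilter scalar variance that such models target can be computed in an a priori test directly from a resolved field as the filtered square minus the squared filtered field; a minimal 1D sketch with a synthetic field and a box filter (both assumptions for illustration, not the paper's DNS):

    import numpy as np
    from scipy.ndimage import uniform_filter1d

    rng = np.random.default_rng(2)

    # Synthetic 1D periodic field standing in for a DNS scalar snapshot.
    n = 4096
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    phi = np.sin(x) + 0.3 * np.sin(17 * x) + 0.1 * rng.standard_normal(n)

    width = 64  # box-filter width in grid points

    phi_bar = uniform_filter1d(phi, size=width, mode="wrap")
    phi2_bar = uniform_filter1d(phi**2, size=width, mode="wrap")

    # Exact subfilter scalar variance: filtered square minus squared filtered field.
    subfilter_variance = phi2_bar - phi_bar**2
    print(subfilter_variance.mean())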
Yura, Harold T; Fields, Renny A
2011-06-20
Level crossing statistics is applied to the complex problem of atmospheric turbulence-induced beam wander for laser propagation from ground to space. A comprehensive estimate of the single-axis wander angle temporal autocorrelation function and the corresponding power spectrum is used to develop, for the first time to our knowledge, analytic expressions for the mean angular level crossing rate and the mean duration of such crossings. These results are based on an extension and generalization of a previous seminal analysis of the beam wander variance by Klyatskin and Kon. In the geometrical optics limit, we obtain an expression for the beam wander variance that is valid for both an arbitrarily shaped initial beam profile and transmitting aperture. It is shown that beam wander can disrupt bidirectional ground-to-space laser communication systems whose small apertures do not require adaptive optics to deliver uniform beams at their intended target receivers in space. The magnitude and rate of beam wander are estimated for turbulence profiles enveloping some practical laser communication deployment options, suggesting what level of beam wander effects must be mitigated to demonstrate effective bidirectional laser communication systems.
Crespo, Patricia; Jiménez, Juan E; Rodríguez, Cristina; Baker, Doris; Park, Yonghan
2018-03-09
The present study compares the patterns of growth of beginning reading skills (i.e., phonemic awareness, phonics, fluency, vocabulary and comprehension) of Spanish-speaking monolingual students who received a Tier 2 reading intervention with those of students who did not receive the intervention. All students in grades K-2 were screened at the beginning of the year to confirm their risk status. A quasi-experimental longitudinal design was used: the treatment group received a supplemental program in small groups of 3 to 5 students, for 30 minutes daily from November to June; the control group did not. All students were assessed three times during the academic year. Hierarchical linear growth modeling was conducted and differences in growth rate were found in vocabulary in kindergarten (p < .001; variance explained = 77.0%) and phonemic awareness in kindergarten (p < .001; variance explained = 43.7%) and first grade (p < .01; variance explained = 15.2%); significant growth differences were also found for second grade in oral reading fluency (p < .05; variance explained = 15.1%) and a retell task (p < .05; variance explained = 14.5%). Children at risk for reading disabilities in Spanish can improve their skills when they receive explicit instruction in the context of Response to Intervention (RtI). Findings are discussed for each skill in the context of implementing a Tier 2 small-group intervention within an RtI approach. Implications for practice in the Spanish educational context are also discussed for children who are struggling with reading.
Estimation of Additive, Dominance, and Imprinting Genetic Variance Using Genomic Data
Lopes, Marcos S.; Bastiaansen, John W. M.; Janss, Luc; Knol, Egbert F.; Bovenhuis, Henk
2015-01-01
Traditionally, exploration of genetic variance in humans, plants, and livestock species has been limited mostly to the use of additive effects estimated using pedigree data. However, with the development of dense panels of single-nucleotide polymorphisms (SNPs), the exploration of genetic variation of complex traits is moving from quantifying the resemblance between family members to the dissection of genetic variation at individual loci. With SNPs, we were able to quantify the contribution of additive, dominance, and imprinting variance to the total genetic variance by using a SNP regression method. The method was validated in simulated data and applied to three traits (number of teats, backfat, and lifetime daily gain) in three purebred pig populations. In simulated data, the estimates of additive, dominance, and imprinting variance were very close to the simulated values. In real data, dominance effects account for a substantial proportion of the total genetic variance (up to 44%) for these traits in these populations. The contribution of imprinting to the total phenotypic variance of the evaluated traits was relatively small (1–3%). Our results indicate a strong relationship between additive variance explained per chromosome and chromosome length, which has been described previously for other traits in other species. We also show that a similar linear relationship exists for dominance and imprinting variance. These novel results improve our understanding of the genetic architecture of the evaluated traits and show promise for applying the SNP regression method to other traits and species, including human diseases. PMID:26438289
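A toy version of the SNP regression idea, with assumed effect sizes and a single simulated SNP (the imprinting component is omitted because it requires parent-of-origin phasing; this sketch does not reproduce the authors' full method):

    import numpy as np

    rng = np.random.default_rng(3)
    n = 5000

    # Simulated genotypes at one SNP: 0, 1 or 2 copies of the minor allele.
    g = rng.binomial(2, 0.3, size=n)

    x_add = g.astype(float)         # additive coding: allele count
    x_dom = (g == 1).astype(float)  # dominance coding: heterozygote indicator

    # Phenotype with known additive (0.5) and dominance (0.8) effects plus noise.
    y = 0.5 * x_add + 0.8 * x_dom + rng.standard_normal(n)

    X = np.column_stack([np.ones(n), x_add, x_dom])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(beta)  # recovers roughly [0, 0.5, 0.8]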
Automatic Bayes Factors for Testing Equality- and Inequality-Constrained Hypotheses on Variances.
Böing-Messing, Florian; Mulder, Joris
2018-05-03
In comparing characteristics of independent populations, researchers frequently expect a certain structure of the population variances. These expectations can be formulated as hypotheses with equality and/or inequality constraints on the variances. In this article, we consider the Bayes factor for testing such (in)equality-constrained hypotheses on variances. Application of Bayes factors requires specification of a prior under every hypothesis to be tested. However, specifying subjective priors for variances based on prior information is a difficult task. We therefore consider so-called automatic or default Bayes factors. These methods avoid the need for the user to specify priors by using information from the sample data. We present three automatic Bayes factors for testing variances. The first is a Bayes factor with equal priors on all variances, where the priors are specified automatically using a small share of the information in the sample data. The second is the fractional Bayes factor, where a fraction of the likelihood is used for automatic prior specification. The third is an adjustment of the fractional Bayes factor such that the parsimony of inequality-constrained hypotheses is properly taken into account. The Bayes factors are evaluated by investigating different properties such as information consistency and large sample consistency. Based on this evaluation, it is concluded that the adjusted fractional Bayes factor is generally recommendable for testing equality- and inequality-constrained hypotheses on variances.
Turbulence characteristics of velocity and scalars in an internal boundary-layer above a lake
NASA Astrophysics Data System (ADS)
Sahlee, E.; Rutgersson, A.; Podgrajsek, E.
2012-12-01
We analyze turbulence measurements, including methane, from a small island in a Swedish lake. The turbulence structure was found to be highly influenced by the surrounding land during daytime. Variance spectra of both horizontal velocity and scalars during both unstable and stable stratification displayed a low-frequency peak. The energy at lower frequencies displayed a daily variation, increasing in the morning and decreasing in the afternoon. We interpret this behavior as a sign of spectral lag, where the low-frequency energy (large eddies) originates from the convective boundary layer above the surrounding land. When the air is advected over the lake, the small eddies rapidly equilibrate with the new surface forcing; however, the larger eddies remain for an appreciable distance and influence the turbulence in the developing lake boundary layer. The variance of the horizontal velocity is increased by these large eddies; momentum fluxes and scalar variances and fluxes, however, appear unaffected. The drag coefficient, Stanton number and Dalton number, used to parameterize the momentum flux, heat flux and latent heat flux respectively, all compare very well with parameterizations developed for open ocean conditions.
Risk management with substitution options: Valuing flexibility in small-scale energy systems
NASA Astrophysics Data System (ADS)
Knapp, Karl Eric
Several features of small-scale energy systems make them more easily adapted to a changing operating environment than large centralized designs. This flexibility is often manifested as the ability to substitute inputs. This research explores the value of this substitution flexibility and the marginal value of becoming a "little more flexible" in the context of real project investment in developing countries. The elasticity of substitution is proposed as a stylized measure of flexibility and a choice variable. A flexible alternative (elasticity > 0) can be thought of as holding a fixed-proportions "inflexible" asset plus a sequence of exchange options: the option to move to another feasible "recipe" each period. Substitutability derives value from following a contour of anticipated variations and from responding to new information. Substitutability value, a "cost savings option", increases with elasticity and price risk. However, the required premium to incrementally increase flexibility can in some cases decrease with an increase in risk. Variance is not always a measure of risk. Tools from stochastic dominance are newly applied to real options with convex payoffs to correct some misperceptions and clarify many common modeling situations that meet the criteria for increased variance to imply increased risk. The behavior of the cost savings option is explored subject to a stochastic input price process. At the point where costs are identical for all alternatives, the stochastic process for cost savings becomes deterministic, with savings directly proportional to elasticity of substitution and price variance. The option is also formulated as a derivative security via dynamic programming. The partial differential equation is solved for the special case of Cobb-Douglas (elasticity = 1) (also shown are the linear (infinite elasticity) and Leontief (elasticity = 0) cases). Risk aversion is insufficient to prefer a more flexible alternative with the same expected value. Intertemporal links convert the sequence of independent options to a single compound option and require an expansion of the flexibility concept. Additional options increase the value of the project but generally decrease flexibility value. The framework is applied to a case study in India: an urban industry electricity strategy decision with reliability risk.
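The "sequence of exchange options" view has a standard single-period closed form in Margrabe's formula for the option to exchange one asset for another; the sketch below prices such an option under assumed, purely illustrative parameters and is not the dissertation's model:

    import numpy as np
    from scipy.stats import norm

    def margrabe(s1, s2, sigma1, sigma2, rho, t):
        # Value of the option to exchange asset 2 for asset 1 at horizon t.
        sigma = np.sqrt(sigma1**2 + sigma2**2 - 2 * rho * sigma1 * sigma2)
        d1 = (np.log(s1 / s2) + 0.5 * sigma**2 * t) / (sigma * np.sqrt(t))
        d2 = d1 - sigma * np.sqrt(t)
        return s1 * norm.cdf(d1) - s2 * norm.cdf(d2)

    # Illustrative numbers only: two input "recipes" of equal current cost,
    # with volatile, imperfectly correlated prices.
    print(margrabe(s1=100.0, s2=100.0, sigma1=0.3, sigma2=0.2, rho=0.4, t=1.0))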
Landsat-TM identification of Amblyomma variegatum (Acari: Ixodidae) habitats in Guadeloupe
NASA Technical Reports Server (NTRS)
Hugh-Jones, M.; Barre, N.; Nelson, G.; Wehnes, K.; Warner, J.; Garvin, J.; Garris, G.
1992-01-01
The feasibility of identifying specific habitats of the African bont tick, Amblyomma variegatum, from Landsat-TM images was investigated by comparing remotely sensed images of visible farms in Grande Terre (Guadeloupe) with field observations made during the same period (1986-1987). The different tick habitats could be separated using principal component analysis. The analysis clustered the sites by large and small variance of band values, and by vegetation and moisture indexes. It was found that herds in heterogeneous sites with large variances had more ticks than those in homogeneous or low-variance sites. Within the heterogeneous sites, those with high vegetation and moisture indexes had more ticks than those with low values.
Analytical approximations for effective relative permeability in the capillary limit
NASA Astrophysics Data System (ADS)
Rabinovich, Avinoam; Li, Boxiao; Durlofsky, Louis J.
2016-10-01
We present an analytical method for calculating two-phase effective relative permeability, k_rj^eff, where j designates phase (here CO2 and water), under steady state and capillary-limit assumptions. These effective relative permeabilities may be applied in experimental settings and for upscaling in the context of numerical flow simulations, e.g., for CO2 storage. An exact solution for effective absolute permeability, k^eff, in two-dimensional log-normally distributed isotropic permeability (k) fields is the geometric mean. We show that this does not hold for k_rj^eff since log normality is not maintained in the capillary-limit phase permeability field (K_j = k·k_rj) when capillary pressure, and thus the saturation field, is varied. Nevertheless, the geometric mean is still shown to be suitable for approximating k_rj^eff when the variance of ln k is low. For high-variance cases, we apply a correction to the geometric average gas effective relative permeability using a Winsorized mean, which neglects large and small K_j values symmetrically. The analytical method is extended to anisotropically correlated log-normal permeability fields using power law averaging. In these cases, the Winsorized mean treatment is applied to the gas curves for cases described by negative power law exponents (flow across incomplete layers). The accuracy of our analytical expressions for k_rj^eff is demonstrated through extensive numerical tests, using low-variance and high-variance permeability realizations with a range of correlation structures. We also present integral expressions for geometric-mean and power law average k_rj^eff for the systems considered, which enable derivation of closed-form series solutions for k_rj^eff without generating permeability realizations.
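A numerical sketch of the two averages discussed above, on a synthetic log-normal permeability field; the 10%-per-tail Winsorization limit is an assumed illustrative choice, not the paper's calibrated correction:

    import numpy as np
    from scipy.stats.mstats import winsorize

    rng = np.random.default_rng(4)

    # Log-normal permeability field with a high variance of ln k.
    ln_k = rng.normal(loc=0.0, scale=2.0, size=(256, 256))
    k = np.exp(ln_k)

    # 2D isotropic log-normal case: effective absolute permeability is the
    # geometric mean.
    k_eff_geometric = np.exp(ln_k.mean())

    # Winsorized mean: symmetrically clip the largest and smallest values
    # before averaging (10% per tail is an assumed, illustrative limit).
    k_winsorized = winsorize(k.ravel(), limits=(0.1, 0.1)).mean()

    print(k_eff_geometric, k_winsorized)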
SMALL COLOUR VISION VARIATIONS AND THEIR EFFECT IN VISUAL COLORIMETRY,
COLOR VISION, PERFORMANCE(HUMAN), TEST EQUIPMENT, CORRELATION TECHNIQUES, STATISTICAL PROCESSES, COLORS, ANALYSIS OF VARIANCE, AGING(MATERIALS), COLORIMETRY, BRIGHTNESS, ANOMALIES, PLASTICS, UNITED KINGDOM.
The relationship between observational scale and explained variance in benthic communities
Flood, Roger D.; Frisk, Michael G.; Garza, Corey D.; Lopez, Glenn R.; Maher, Nicole P.
2018-01-01
This study addresses the impact of spatial scale on explaining variance in benthic communities. In particular, the analysis estimated the fraction of community variation that occurred at a spatial scale smaller than the sampling interval (i.e., the geographic distance between samples). This estimate is important because it sets a limit on the amount of community variation that can be explained based on the spatial configuration of a study area and sampling design. Six benthic data sets were examined that consisted of faunal abundances, common environmental variables (water depth, grain size, and surficial percent cover), and sonar backscatter treated as a habitat proxy (categorical acoustic provinces). Redundancy analysis was coupled with spatial variograms generated by multiscale ordination to quantify the explained and residual variance at different spatial scales and within and between acoustic provinces. The amount of community variation below the sampling interval of the surveys (< 100 m) was estimated to be 36–59% of the total. Once adjusted for this small-scale variation, > 71% of the remaining variance was explained by the environmental and province variables. Furthermore, these variables effectively explained the spatial structure present in the infaunal community. Overall, no scale problems remained to compromise inferences, and unexplained infaunal community variation had no apparent spatial structure within the observational scale of the surveys (> 100 m), although small-scale gradients (< 100 m) below the observational scale may be present. PMID:29324746
Pearcy, Benjamin T D; McEvoy, Peter M; Roberts, Lynne D
2017-02-01
This study extends knowledge about the relationship of Internet Gaming Disorder (IGD) to other established mental disorders by exploring comorbidities with anxiety, depression, Attention Deficit Hyperactivity Disorder (ADHD), and obsessive compulsive disorder (OCD), and assessing whether IGD accounts for unique variance in distress and disability. An online survey was completed by a convenience sample that engages in Internet gaming (N = 404). Participants meeting criteria for IGD based on the Personal Internet Gaming Disorder Evaluation-9 (PIE-9) reported higher comorbidity with depression, OCD, ADHD, and anxiety compared with those who did not meet the IGD criteria. IGD explained a small proportion of unique variance in distress (1%) and disability (3%). IGD accounted for a larger proportion of unique variance in disability than anxiety and ADHD, and a similar proportion to depression. Replications with clinical samples using longitudinal designs and structured diagnostic interviews are required.
Greenbaum, Gili
2015-09-07
Evaluation of the time scale of the fixation of neutral mutations is crucial to the theoretical understanding of the role of neutral mutations in evolution. Diffusion approximations of the Wright-Fisher model are most often used to derive analytic formulations of genetic drift, as well as for the time scales of the fixation of neutral mutations. These approximations require a set of assumptions, most notably that genetic drift is a stochastic process in a continuous allele-frequency space, an assumption appropriate for large populations. Here equivalent approximations are derived using a coalescent theory approach, which relies on a different set of assumptions than the diffusion approach and adopts a discrete allele-frequency space. Solutions for the mean and variance of the time to fixation of a neutral mutation derived from the two approaches converge for large populations but differ slightly for small populations. A Markov chain analysis of the Wright-Fisher model for small populations is used to evaluate the solutions obtained, showing that both the mean and the variance are better approximated by the coalescent approach. The coalescent approximation represents a tighter upper bound for the mean time to fixation than the diffusion approximation, while the diffusion approximation and coalescent approximation form an upper and lower bound, respectively, for the variance. The converging solutions and the small deviations of the two approaches strongly validate the use of diffusion approximations, but suggest that coalescent theory can provide more accurate approximations for small populations. Copyright © 2015 Elsevier Ltd. All rights reserved.
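The Markov chain analysis mentioned above can be reproduced in a few lines: build the Wright-Fisher transition matrix for a small population, condition on fixation, and compare the exact conditional mean fixation time with the diffusion-limit value of roughly 4N generations (N = 20 is an arbitrary illustrative choice):

    import numpy as np
    from scipy.stats import binom

    N = 20      # diploid population size (illustrative)
    m = 2 * N   # number of gene copies

    # Wright-Fisher transition probabilities among transient counts 1..m-1.
    i = np.arange(1, m)
    p = i / m
    P = binom.pmf(np.arange(m + 1)[None, :], m, p[:, None])
    Q = P[:, 1:m]  # transient-to-transient block

    # Neutral fixation probability from count i is i/m; condition the chain
    # on eventual fixation, then solve for conditional mean absorption times.
    Qc = Q * (p[None, :] / p[:, None])
    t = np.linalg.solve(np.eye(m - 1) - Qc, np.ones(m - 1))

    print(t[0])    # exact conditional mean fixation time from one copy
    print(4 * N)   # diffusion-limit benchmark (~4N generations)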
Unimodular lattice triangulations as small-world and scale-free random graphs
NASA Astrophysics Data System (ADS)
Krüger, B.; Schmidt, E. M.; Mecke, K.
2015-02-01
Real-world networks, e.g., the social relations or world-wide-web graphs, exhibit both small-world and scale-free behaviour. We interpret lattice triangulations as planar graphs by identifying triangulation vertices with graph nodes and one-dimensional simplices with edges. Since these triangulations are ergodic with respect to a certain Pachner flip, applying different Monte Carlo simulations enables us to calculate average properties of random triangulations, as well as canonical ensemble averages, using an energy functional that is approximately the variance of the degree distribution. All considered triangulations have clustering coefficients comparable with real-world graphs; for the canonical ensemble there are inverse temperatures with small shortest path length independent of system size. Tuning the inverse temperature to a quasi-critical value leads to an indication of scale-free behaviour for degrees k ≥ 5. Using triangulations as a random graph model can improve the understanding of real-world networks, especially if the actual distance of the embedded nodes becomes important.
Code of Federal Regulations, 2013 CFR
2013-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260.33 Section 260.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions § 260.33 Procedures for variances...
Code of Federal Regulations, 2014 CFR
2014-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260.33 Section 260.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions § 260.33 Procedures for variances...
Code of Federal Regulations, 2012 CFR
2012-07-01
... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260.33 Section 260.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions § 260.33 Procedures for variances...
Mnemonic function in small vessel disease and associations with white matter tract microstructure.
Metoki, Athanasia; Brookes, Rebecca L; Zeestraten, Eva; Lawrence, Andrew J; Morris, Robin G; Barrick, Thomas R; Markus, Hugh S; Charlton, Rebecca A
2017-09-01
Cerebral small vessel disease (SVD) is associated with deficits in working memory, with a relative sparing of long-term memory; function may be influenced by white matter microstructure. Working and long-term memory were examined in 106 patients with SVD and 35 healthy controls. Microstructure was measured in the uncinate fasciculi and cingula. Working memory was more impaired than long-term memory in SVD, but both abilities were reduced compared to controls. Regression analyses found that having SVD explained the variance in memory functions, with additional variance explained by the cingula (working memory) and uncinate (long-term memory). Performance can be explained in terms of integrity loss in specific white matter tracts associated with mnemonic functions. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Yokoyama, Yoshie; Jelenkovic, Aline; Hur, Yoon-Mi; Sund, Reijo; Fagnani, Corrado; Stazi, Maria A; Brescianini, Sonia; Ji, Fuling; Ning, Feng; Pang, Zengchang; Knafo-Noam, Ariel; Mankuta, David; Abramson, Lior; Rebato, Esther; Hopper, John L; Cutler, Tessa L; Saudino, Kimberly J; Nelson, Tracy L; Whitfield, Keith E; Corley, Robin P; Huibregtse, Brooke M; Derom, Catherine A; Vlietinck, Robert F; Loos, Ruth J F; Llewellyn, Clare H; Fisher, Abigail; Bjerregaard-Andersen, Morten; Beck-Nielsen, Henning; Sodemann, Morten; Krueger, Robert F; McGue, Matt; Pahlen, Shandell; Bartels, Meike; van Beijsterveldt, Catharina E M; Willemsen, Gonneke; Harris, Jennifer R; Brandt, Ingunn; Nilsen, Thomas S; Craig, Jeffrey M; Saffery, Richard; Dubois, Lise; Boivin, Michel; Brendgen, Mara; Dionne, Ginette; Vitaro, Frank; Haworth, Claire M A; Plomin, Robert; Bayasgalan, Gombojav; Narandalai, Danshiitsoodol; Rasmussen, Finn; Tynelius, Per; Tarnoki, Adam D; Tarnoki, David L; Ooki, Syuichi; Rose, Richard J; Pietiläinen, Kirsi H; Sørensen, Thorkild I A; Boomsma, Dorret I; Kaprio, Jaakko; Silventoinen, Karri
2018-05-19
The genetic architecture of birth size may differ geographically and over time. We examined differences in the genetic and environmental contributions to birthweight, length and ponderal index (PI) across geographical-cultural regions (Europe, North America and Australia, and East Asia) and across birth cohorts, and how gestational age modifies these effects. Data from 26 twin cohorts in 16 countries including 57 613 monozygotic and dizygotic twin pairs were pooled. Genetic and environmental variations of birth size were estimated using genetic structural equation modelling. The variance of birthweight and length was predominantly explained by shared environmental factors, whereas the variance of PI was explained both by shared and unique environmental factors. Genetic variance contributing to birth size was small. Adjusting for gestational age decreased the proportions of shared environmental variance and increased the proportions of unique environmental variance. Genetic variance was similar in the geographical-cultural regions, but shared environmental variance was smaller in East Asia than in Europe and North America and Australia. The total variance and shared environmental variance of birth length and PI were greater from the birth cohort 1990-99 onwards compared with the birth cohorts from 1970-79 to 1980-89. The contribution of genetic factors to birth size is smaller than that of shared environmental factors, which is partly explained by gestational age. Shared environmental variances of birth length and PI were greater in the latest birth cohorts and differed also across geographical-cultural regions. Shared environmental factors are important when explaining differences in the variation of birth size globally and over time.
Applying the Expectancy-Value Model to understand health values.
Zhang, Xu-Hao; Xie, Feng; Wee, Hwee-Lin; Thumboo, Julian; Li, Shu-Chuen
2008-03-01
Expectancy-Value Model (EVM) is the most structured model in psychology for predicting attitudes by measuring attitudinal attributes (AAs) and relevant external variables. Because health value can be categorized as an attitude, we aimed to apply EVM to explore its usefulness in explaining variances in health values and to investigate underlying factors. A focus group discussion was carried out to identify the most common and significant AAs toward 5 different health states (coded as 11111, 11121, 21221, 32323, and 33333 in the EuroQol Five-Dimension (EQ-5D) descriptive system). AAs were measured as a sum of products of subjective probability (expectancy) and perceived value of each attribute on 7-point Likert scales. Health values were measured using visual analog scales (VAS, range 0-1). External variables (age, sex, ethnicity, education, housing, marital status, and concurrent chronic diseases) were also incorporated into the survey questionnaire, distributed by convenience sampling among eligible respondents. Univariate analyses were used to identify external variables causing significant differences in VAS. A multiple linear regression model (MLR) and a hierarchical regression model were used to investigate the explanatory power of AAs and possible significant external variable(s), separately or in combination, for each individual health state and a mixed scenario of the five states, respectively. Four AAs were identified, namely, "worsening your quality of life in terms of health" (WQoL), "adding a burden to your family" (BTF), "making you less independent" (MLI) and "unable to work or study" (UWS). Data were analyzed based on 232 respondents (mean [SD] age: 27.7 [15.07] years, 49.1% female). Health values varied significantly across the 5 health states, ranging from 0.12 (33333) to 0.97 (11111). With no significant external variables identified, EVM explained up to 62% of the variance in health values across the 5 health states. The explanatory power of the 4 AAs was found to be between 13% and 28% in separate MLR models (P < 0.05). When data were analyzed for each health state, variances in health values became small and the explanatory power of EVM was reduced to a range between 8% and 23%. EVM was useful in explaining variances in health values and predicting important factors. Its power to explain small variances might be restricted due to limitations of the 7-point Likert scale for measuring AAs accurately. With further improvement and validation of a compatible continuous scale for more accurate measurement, EVM is expected to explain health values to a larger extent.
Robust versus consistent variance estimators in marginal structural Cox models.
Enders, Dirk; Engel, Susanne; Linder, Roland; Pigeot, Iris
2018-06-11
In survival analyses, inverse-probability-of-treatment (IPT) and inverse-probability-of-censoring (IPC) weighted estimators of parameters in marginal structural Cox models are often used to estimate treatment effects in the presence of time-dependent confounding and censoring. In most applications, a robust variance estimator of the IPT and IPC weighted estimator is calculated leading to conservative confidence intervals. This estimator assumes that the weights are known rather than estimated from the data. Although a consistent estimator of the asymptotic variance of the IPT and IPC weighted estimator is generally available, applications and thus information on the performance of the consistent estimator are lacking. Reasons might be a cumbersome implementation in statistical software, which is further complicated by missing details on the variance formula. In this paper, we therefore provide a detailed derivation of the variance of the asymptotic distribution of the IPT and IPC weighted estimator and explicitly state the necessary terms to calculate a consistent estimator of this variance. We compare the performance of the robust and consistent variance estimators in an application based on routine health care data and in a simulation study. The simulation reveals no substantial differences between the 2 estimators in medium and large data sets with no unmeasured confounding, but the consistent variance estimator performs poorly in small samples or under unmeasured confounding, if the number of confounders is large. We thus conclude that the robust estimator is more appropriate for all practical purposes. Copyright © 2018 John Wiley & Sons, Ltd.
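For orientation, a weighted Cox fit with the robust (sandwich) variance discussed above can be requested in, e.g., the Python lifelines package via robust=True; the data and weights below are simulated placeholders (the weight-estimation step and the paper's consistent estimator are not implemented here):

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(5)
    n = 2000

    # Placeholder data: binary treatment, exponential survival, administrative
    # censoring at t = 15, and pre-computed stabilized IPT weights `w`.
    treated = rng.integers(0, 2, size=n)
    time = rng.exponential(scale=np.where(treated == 1, 12.0, 10.0))
    event = (time < 15.0).astype(int)
    time = np.minimum(time, 15.0)
    w = rng.uniform(0.5, 2.0, size=n)  # illustrative weights only

    df = pd.DataFrame({"T": time, "E": event, "treated": treated, "w": w})

    # robust=True requests the sandwich variance, which treats the weights as
    # known and therefore tends to give conservative confidence intervals.
    cph = CoxPHFitter()
    cph.fit(df, duration_col="T", event_col="E", weights_col="w", robust=True)
    cph.print_summary()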
Hubbs-Tait, Laura; Kennedy, Tay Seacord; Droke, Elizabeth A; Belanger, David M; Parker, Jill R
2007-01-01
The objective of this study was to conduct a preliminary investigation of lead, zinc, and iron levels in relation to child cognition and behavior in a small sample of Head Start children. The design was cross-sectional and correlational. Participants were 42 3- to 5-year-old children attending rural Head Start centers. Nonfasting blood samples of whole blood lead, plasma zinc, and ferritin were collected. Teachers rated children's behavior on the California Preschool Social Competency Scale, Howes' Sociability subscale, and the Preschool Behavior Questionnaire. Children were tested individually with the McCarthy Scales of Children's Abilities. Hierarchical regression analyses revealed that zinc and ferritin jointly explained 25% of the variance in McCarthy Scales of Children's Abilities verbal scores. Lead levels explained 25% of the variance in teacher ratings of girls' sociability and 20% of the variance in teacher ratings of girls' classroom competence. Zinc levels explained 39% of the variance in teacher ratings of boys' anxiety. Univariate analysis of variance revealed that the four children low in zinc and iron had significantly higher blood lead (median=0.23 micromol/L [4.73 microg/dL]) than the 31 children sufficient in zinc or iron (median=0.07 micromol/L [1.54 microg/dL]) or the 7 children sufficient in both (median=0.12 micromol/L [2.52 microg/dL]), suggesting an interaction among the three minerals. Within this small low-income sample, the results imply both separate and interacting effects of iron, zinc, and lead. They underscore the importance of studying these three minerals in larger samples of low-income preschool children to make more definitive conclusions.
Lebigre, Christophe; Arcese, Peter; Reid, Jane M
2013-07-01
Age-specific variances and covariances in reproductive success shape the total variance in lifetime reproductive success (LRS), age-specific opportunities for selection, and population demographic variance and effective size. Age-specific (co)variances in reproductive success achieved through different reproductive routes must therefore be quantified to predict population, phenotypic and evolutionary dynamics in age-structured populations. While numerous studies have quantified age-specific variation in mean reproductive success, age-specific variances and covariances in reproductive success, and the contributions of different reproductive routes to these (co)variances, have not been comprehensively quantified in natural populations. We applied 'additive' and 'independent' methods of variance decomposition to complete data describing apparent (social) and realised (genetic) age-specific reproductive success across 11 cohorts of socially monogamous but genetically polygynandrous song sparrows (Melospiza melodia). We thereby quantified age-specific (co)variances in male within-pair and extra-pair reproductive success (WPRS and EPRS) and the contributions of these (co)variances to the total variances in age-specific reproductive success and LRS. 'Additive' decomposition showed that within-age and among-age (co)variances in WPRS across males aged 2-4 years contributed most to the total variance in LRS. Age-specific (co)variances in EPRS contributed relatively little. However, extra-pair reproduction altered age-specific variances in reproductive success relative to the social mating system, and hence altered the relative contributions of age-specific reproductive success to the total variance in LRS. 'Independent' decomposition showed that the (co)variances in age-specific WPRS, EPRS and total reproductive success, and the resulting opportunities for selection, varied substantially across males that survived to each age. Furthermore, extra-pair reproduction increased the variance in age-specific reproductive success relative to the social mating system to a degree that increased across successive age classes. This comprehensive decomposition of the total variances in age-specific reproductive success and LRS into age-specific (co)variances attributable to two reproductive routes showed that within-age and among-age covariances contributed substantially to the total variance and that extra-pair reproduction can alter the (co)variance structure of age-specific reproductive success. Such covariances and impacts should consequently be integrated into theoretical assessments of demographic and evolutionary processes in age-structured populations. © 2013 The Authors. Journal of Animal Ecology © 2013 British Ecological Society.
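The 'additive' decomposition rests on the identity var(WPRS + EPRS) = var(WPRS) + var(EPRS) + 2 cov(WPRS, EPRS), so covariance between reproductive routes feeds directly into the total variance in LRS; a numerical check on simulated counts (all distributions are assumptions for illustration):

    import numpy as np

    rng = np.random.default_rng(6)

    # Simulated per-male within-pair and extra-pair reproductive success.
    wprs = rng.poisson(3.0, size=10_000).astype(float)
    eprs = 0.5 * wprs + rng.poisson(1.0, size=10_000)  # positively covarying routes
    lrs = wprs + eprs

    v_w, v_e = wprs.var(), eprs.var()
    cov_we = np.cov(wprs, eprs, bias=True)[0, 1]

    # var(LRS) = var(WPRS) + var(EPRS) + 2 cov(WPRS, EPRS): the two agree.
    print(lrs.var(), v_w + v_e + 2 * cov_we)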
On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models
NASA Astrophysics Data System (ADS)
Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.
2017-12-01
Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
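A rough sketch of how such weights could be estimated from an archive of data pairs: regress squared innovations on ensemble variance, so that the slope plays the role of the flow-dependent weight and the intercept the static part (the synthetic archive below is an assumption for illustration, not the authors' formula):

    import numpy as np

    rng = np.random.default_rng(7)
    n = 100_000

    # Synthetic archive: true error variance fluctuates about a climatological
    # value, the ensemble variance is a noisy estimate of it, and innovations
    # are drawn with the true variance (observation error ignored).
    clim_var = 1.0
    true_var = clim_var * rng.gamma(shape=4.0, scale=0.25, size=n)
    ens_var = true_var * rng.gamma(shape=8.0, scale=1 / 8, size=n)
    omf = rng.normal(0.0, np.sqrt(true_var))

    # Regress squared innovations on ensemble variance: slope ~ flow-dependent
    # weight, intercept ~ static (climatological) contribution.
    A = np.column_stack([np.ones(n), ens_var])
    (intercept, slope), *_ = np.linalg.lstsq(A, omf**2, rcond=None)
    print(intercept, slope)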
Spectral decomposition of internal gravity wave sea surface height in global models
NASA Astrophysics Data System (ADS)
Savage, Anna C.; Arbic, Brian K.; Alford, Matthew H.; Ansong, Joseph K.; Farrar, J. Thomas; Menemenlis, Dimitris; O'Rourke, Amanda K.; Richman, James G.; Shriver, Jay F.; Voet, Gunnar; Wallcraft, Alan J.; Zamudio, Luis
2017-10-01
Two global ocean models ranging in horizontal resolution from 1/12° to 1/48° are used to study the space and time scales of sea surface height (SSH) signals associated with internal gravity waves (IGWs). Frequency-horizontal wavenumber SSH spectral densities are computed over seven regions of the world ocean from two simulations of the HYbrid Coordinate Ocean Model (HYCOM) and three simulations of the Massachusetts Institute of Technology general circulation model (MITgcm). High wavenumber, high-frequency SSH variance follows the predicted IGW linear dispersion curves. The realism of high-frequency motions (>0.87 cpd) in the models is tested through comparison of the frequency spectral density of dynamic height variance computed from the highest-resolution runs of each model (1/25° HYCOM and 1/48° MITgcm) with dynamic height variance frequency spectral density computed from nine in situ profiling instruments. These high-frequency motions are of particular interest because of their contributions to the small-scale SSH variability that will be observed on a global scale in the upcoming Surface Water and Ocean Topography (SWOT) satellite altimetry mission. The variance at supertidal frequencies can be comparable to the tidal and low-frequency variance for high wavenumbers (length scales smaller than ˜50 km), especially in the higher-resolution simulations. In the highest-resolution simulations, the high-frequency variance can be greater than the low-frequency variance at these scales.
Social capital and health - purely a question of context?
Giordano, Giuseppe Nicola; Ohlsson, Henrik; Lindström, Martin
2011-07-01
Debate still surrounds which level of analysis (individual vs. contextual) is most appropriate to investigate the effects of social capital on health. Applying multilevel ecometric analyses to British Household Panel Survey data, we estimated fixed and random effects between five individual-, household- and small area-level social capital indicators and general health. We further compared the variance in health attributable to each level using intraclass correlations. Our results demonstrate that association between social capital and health depends on indicator type and level investigated, with one quarter of total individual-level health variance found at the household level. However, individual-level social capital variables and other health determinants appear to influence contextual-level variance the most. Copyright © 2011 Elsevier Ltd. All rights reserved.
Continuous variation caused by genes with graduated effects.
Matthysse, S; Lange, K; Wagener, D K
1979-01-01
The classical polygenic theory of inheritance postulates a large number of genes with small, and essentially similar, effects. We propose instead a model with genes of gradually decreasing effects. The resulting phenotypic distribution is not normal; if the gene effects are geometrically decreasing, it can be triangular. The joint distribution of parent and offspring genic value is calculated. The most readily testable difference between the two models is that, in the decreasing-effect model, the variance of the offspring distribution from given parents depends on the parents' genic values. The more the parents deviate from the mean, the smaller the variance of the offspring should be. In the equal-effect model the offspring variance is independent of the parents' genic values. PMID:288073
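A simulation sketch of the decreasing-effect model's key testable prediction, that offspring variance depends on the parents' genic values (geometric effect sizes, free recombination, and all-locus homozygous or heterozygous parents are assumptions for illustration):

    import numpy as np

    rng = np.random.default_rng(8)
    n_loci = 12
    effects = 0.5 ** np.arange(n_loci)  # geometrically decreasing allelic effects

    def offspring_values(mother, father, n=20_000):
        # Each parent transmits one allele per locus; free recombination is
        # assumed here for simplicity (the model allows arbitrary linkage).
        m = np.where(rng.random((n, n_loci)) < 0.5, mother[0], mother[1])
        f = np.where(rng.random((n, n_loci)) < 0.5, father[0], father[1])
        return ((m + f) * effects).sum(axis=1)

    het = (np.zeros(n_loci), np.ones(n_loci))  # heterozygous at every locus
    hom = (np.ones(n_loci), np.ones(n_loci))   # extreme homozygote

    print(offspring_values(het, het).var())  # mid-range parents: large variance
    print(offspring_values(hom, hom).var())  # extreme parents: zero variance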
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y.; Gunasekaran, Raghul; Ma, Xiaosong
2016-01-01
Inter-application I/O contention and performance interference have been recognized as severe problems. In this work, we demonstrate, through measurement from Titan (the world's No. 3 supercomputer), that high I/O variance co-exists with the fact that individual storage units remain under-utilized for the majority of the time. This motivates us to propose AID, a system that performs automatic application I/O characterization and I/O-aware job scheduling. AID analyzes existing I/O traffic and batch job history logs, without any prior knowledge of applications or user/developer involvement. It identifies the small set of I/O-intensive candidates among all applications running on a supercomputer and subsequently mines their I/O patterns, using more detailed per-I/O-node traffic logs. Based on such auto-extracted information, AID provides online I/O-aware scheduling recommendations to steer I/O-intensive applications away from heavy ongoing I/O activities. We evaluate AID on Titan, using both real applications (with extracted I/O patterns validated by contacting users) and our own pseudo-applications. Our results confirm that AID is able to (1) identify I/O-intensive applications and their detailed I/O characteristics, and (2) significantly reduce these applications' I/O performance degradation/variance by jointly evaluating outstanding applications' I/O patterns and the real-time system I/O load.
Parallel Computation of Flow in Heterogeneous Media Modelled by Mixed Finite Elements
NASA Astrophysics Data System (ADS)
Cliffe, K. A.; Graham, I. G.; Scheichl, R.; Stals, L.
2000-11-01
In this paper we describe a fast parallel method for solving highly ill-conditioned saddle-point systems arising from mixed finite element simulations of stochastic partial differential equations (PDEs) modelling flow in heterogeneous media. Each realisation of these stochastic PDEs requires the solution of the linear first-order velocity-pressure system comprising Darcy's law coupled with an incompressibility constraint. The chief difficulty is that the permeability may be highly variable, especially when the statistical model has a large variance and a small correlation length. For reasonable accuracy, the discretisation has to be extremely fine. We solve these problems by first reducing the saddle-point formulation to a symmetric positive definite (SPD) problem using a suitable basis for the space of divergence-free velocities. The reduced problem is solved using parallel conjugate gradients preconditioned with an algebraically determined additive Schwarz domain decomposition preconditioner. The result is a solver which exhibits a good degree of robustness with respect to the mesh size as well as to the variance and to physically relevant values of the correlation length of the underlying permeability field. Numerical experiments exhibit almost optimal levels of parallel efficiency. The domain decomposition solver (DOUG, http://www.maths.bath.ac.uk/~parsoft) used here not only is applicable to this problem but can be used to solve general unstructured finite element systems on a wide range of parallel architectures.
Quantitative PET Imaging in Drug Development: Estimation of Target Occupancy.
Naganawa, Mika; Gallezot, Jean-Dominique; Rossano, Samantha; Carson, Richard E
2017-12-11
Positron emission tomography, an imaging tool using radiolabeled tracers in humans and preclinical species, has been widely used in recent years in drug development, particularly in the central nervous system. One important goal of PET in drug development is assessing the occupancy of various molecular targets (e.g., receptors, transporters, enzymes) by exogenous drugs. The current linear mathematical approaches used to determine occupancy using PET imaging experiments are presented. These algorithms use results from multiple regions with different target content in two scans, a baseline (pre-drug) scan and a post-drug scan. New mathematical estimation approaches to determine target occupancy, using maximum likelihood, are presented. A major challenge in these methods is the proper definition of the covariance matrix of the regional binding measures, accounting for different variance of the individual regional measures and their nonzero covariance, factors that have been ignored by conventional methods. The novel methods are compared to standard methods using simulation and real human occupancy data. The simulation data showed the expected reduction in variance and bias using the proper maximum likelihood methods, when the assumptions of the estimation method matched those in simulation. Between-method differences for data from human occupancy studies were less obvious, in part due to small dataset sizes. These maximum likelihood methods form the basis for development of improved PET covariance models, in order to minimize bias and variance in PET occupancy studies.
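One widely used linear approach of the kind referred to above is the Lassen (occupancy) plot, which regresses the baseline-minus-post-drug change in regional binding on the baseline value; the sketch below uses invented regional values and does not implement the paper's maximum likelihood refinements:

    import numpy as np

    rng = np.random.default_rng(9)

    # Invented regional total distribution volumes (V_T): shared
    # non-displaceable part plus region-specific binding.
    v_nd = 2.0
    v_s = np.array([0.5, 1.0, 2.0, 4.0, 6.0])
    occupancy_true = 0.6

    vt_base = v_nd + v_s + rng.normal(0, 0.05, v_s.size)
    vt_drug = v_nd + (1 - occupancy_true) * v_s + rng.normal(0, 0.05, v_s.size)

    # Lassen plot: regress the baseline-minus-post-drug difference on the
    # baseline V_T; slope estimates occupancy, x-intercept estimates V_ND.
    slope, intercept = np.polyfit(vt_base, vt_base - vt_drug, 1)
    print(slope, -intercept / slope)  # ~0.6 and ~2.0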
Rodríguez, Sara M; Valdivia, Nelson
2017-01-01
Parasites are essential components of natural communities, but the factors that generate skewed distributions of parasite occurrences and abundances across host populations are not well understood. Here, we analyse at a seascape scale the spatiotemporal relationships of parasite exposure and host body-size with the proportion of infected hosts (i.e., prevalence) and aggregation of parasite burden across ca. 150 km of the coast and over 22 months. We predicted that the effects of parasite exposure on prevalence and aggregation are dependent on host body-sizes. We used an indirect host-parasite interaction in which migratory seagulls, sandy-shore molecrabs, and an acanthocephalan worm constitute the definitive hosts, intermediate hosts, and endoparasite, respectively. In such complex systems, increments in the abundance of definitive hosts imply increments in intermediate hosts' exposure to the parasite's dispersive stages. Linear mixed-effects models showed a significant, albeit highly variable, positive relationship between seagull density and prevalence. This relationship was stronger for small (cephalothorax length <15 mm) than large molecrabs (>15 mm). Independently of seagull density, large molecrabs carried significantly more parasites than small molecrabs. The analysis of the variance-to-mean ratio of per capita parasite burden showed no relationship between seagull density and mean parasite aggregation across host populations. However, the amount of unexplained variability in aggregation was strikingly higher in larger than smaller intermediate hosts. This unexplained variability was driven by a decrease in the mean-variance scaling in heavily infected large molecrabs. These results show complex interdependencies between extrinsic and intrinsic population attributes on the structure of host-parasite interactions. We suggest that parasite accumulation (a characteristic of indirect host-parasite interactions) and subsequent increasing mortality rates over ontogeny underpin size-dependent host-parasite dynamics.
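For readers unfamiliar with the two summary statistics used above, prevalence and the variance-to-mean ratio of parasite burden can be computed from raw per-host counts as follows; this is a generic sketch, not the authors' analysis code.

```python
import numpy as np

def burden_summary(counts):
    """Prevalence and variance-to-mean ratio (aggregation index) of
    per-capita parasite burdens from raw per-host counts."""
    counts = np.asarray(counts, dtype=float)
    prevalence = np.mean(counts > 0)          # proportion of infected hosts
    mean = counts.mean()
    vmr = counts.var(ddof=1) / mean if mean > 0 else float("nan")
    return prevalence, vmr                    # vmr > 1 indicates aggregation
```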
NASA Astrophysics Data System (ADS)
Shore, R. M.; Freeman, M. P.; Gjerloev, J. W.
2017-12-01
We apply the meteorological analysis method of Empirical Orthogonal Functions (EOF) to ground magnetometer measurements, and subsequently use graph theory to classify the results. The EOF method is used to characterise and separate contributions to the variability of the Earth's external magnetic field (EMF) in the northern polar region. EOFs decompose the noisy EMF data into a small number of independent spatio-temporal basis functions, which collectively describe the majority of the magnetic field variance. We use these basis functions (computed monthly) to infill where data are missing, providing a self-consistent description of the EMF at 5-minute resolution spanning 1997-2009 (solar cycle 23). The EOF basis functions are calculated independently for each of the 144 months (i.e. 1997-2009) analysed. Since (by definition) the basis vectors are ranked by their contribution to the total variance, their rank will change from month to month. We use graph theory to find clusters of quantifiably-similar spatial basis functions, and thereby track similar patterns throughout the span of 144 months. We find that the discovered clusters can be associated with well-known individual Disturbance Polar (DP)-type equivalent current systems (e.g. DP2, DP1, DPY, NBZ), or with the motion of these systems. Via this method, we thus describe the varying behaviour of these current systems over solar cycle 23. We present their seasonal and solar cycle variations and examine the response of each current system to solar wind driving.
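A minimal version of the EOF step described above can be written as an SVD of the mean-removed time-by-station data matrix; the graph-theoretic clustering of spatial patterns is not shown, and the interface below is our own.

```python
import numpy as np

def eof_decompose(field, n_modes=5):
    """EOF analysis of a (time x station) data matrix via SVD.

    Returns the leading spatial basis functions, their temporal
    amplitudes, and the fraction of variance each mode explains.
    """
    anomalies = field - field.mean(axis=0)           # remove temporal mean
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    explained = s**2 / np.sum(s**2)                  # variance fractions
    pcs = u[:, :n_modes] * s[:n_modes]               # temporal amplitudes
    eofs = vt[:n_modes]                              # spatial patterns
    return eofs, pcs, explained[:n_modes]
```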
An Evolutionary Perspective on Epistasis and the Missing Heritability
Hemani, Gibran; Knott, Sara; Haley, Chris
2013-01-01
The relative importance between additive and non-additive genetic variance has been widely argued in quantitative genetics. By approaching this question from an evolutionary perspective we show that, while additive variance can be maintained under selection at a low level for some patterns of epistasis, the majority of the genetic variance that will persist is actually non-additive. We propose that one reason that the problem of the “missing heritability” arises is because the additive genetic variation that is estimated to be contributing to the variance of a trait will most likely be an artefact of the non-additive variance that can be maintained over evolutionary time. In addition, it can be shown that even a small reduction in linkage disequilibrium between causal variants and observed SNPs rapidly erodes estimates of epistatic variance, leading to an inflation in the perceived importance of additive effects. We demonstrate that the perception of independent additive effects comprising the majority of the genetic architecture of complex traits is biased upwards and that the search for causal variants in complex traits under selection is potentially underpowered by parameterising for additive effects alone. Given dense SNP panels the detection of causal variants through genome-wide association studies may be improved by searching for epistatic effects explicitly. PMID:23509438
Quantum resonances in a single plaquette of Josephson junctions: excitations of Rabi oscillations
NASA Astrophysics Data System (ADS)
Fistul, M. V.
2002-03-01
We present a theoretical study of the quantum regime of the resistive (whirling) state of a dc-driven anisotropic single plaquette containing small Josephson junctions. The current-voltage characteristics of such systems display resonant steps that are due to the resonant interaction between the time-dependent Josephson current and the excited electromagnetic oscillations (EOs). The voltage positions of the resonances are determined by the quantum interband transitions of EOs. We show that in the quantum regime, as the system is driven on resonance, coherent Rabi oscillations between the quantum levels of EOs occur. At variance with the classical regime, the magnitude and the width of the resonances are determined by the frequency of the Rabi oscillations, which in turn depends in a peculiar manner on an externally applied magnetic field and the parameters of the system.
Fuel cell stack monitoring and system control
Keskula, Donald H.; Doan, Tien M.; Clingerman, Bruce J.
2004-02-17
A control method for monitoring a fuel cell stack in a fuel cell system in which the actual voltage and actual current from the fuel cell stack are monitored. A relationship between voltage and current over the operating range of the fuel cell is preestablished. A variance value between the actual measured voltage and the expected voltage magnitude for a given actual measured current is calculated and compared with a predetermined allowable variance. An output is generated if the calculated variance value exceeds the predetermined variance. The predetermined voltage-current relationship for the fuel cell is represented as a polarization curve at given operating conditions of the fuel cell.
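The control logic described in the patent reduces to comparing a measured voltage against a preestablished polarization curve; a minimal sketch, where the curve, tolerance, and diagnostic output are all left as assumptions:

```python
def stack_fault(voltage, current, polarization_curve, allowed_variance):
    """Flag a fuel cell stack fault when the squared deviation between the
    measured voltage and the expected voltage from a preestablished
    polarization curve exceeds a predetermined allowance (sketch only)."""
    expected = polarization_curve(current)     # expected voltage at this current
    variance = (voltage - expected) ** 2       # deviation from the curve
    return variance > allowed_variance         # True -> generate output/alarm
```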
Application of Non-Equilibrium Thermo Field Dynamics to quantum teleportation under the environment
NASA Astrophysics Data System (ADS)
Kitajima, S.; Arimitsu, T.; Obinata, M.; Yoshida, K.
2014-06-01
Quantum teleportation for continuous variables is treated by Non-Equilibrium Thermo Field Dynamics (NETFD), a canonical operator formalism for dissipative quantum systems, in order to study the effect of imperfect quantum entanglement on quantum communication. We used an entangled state constructed from two squeezed states. The entangled state is imperfect for two reasons: one is the finiteness of the squeezing parameter r, and the other comes from the fact that the squeezed states are created under dissipative interaction with the environment. We derive the expressions for the one-shot fidelity (OSF), the probability density function (PDF) associated with the OSF and the (averaged) fidelity by making full use of the algebraic manipulation of operator algebra within NETFD. We found that the OSF and PDF are given by Gaussian forms with their peaks at the original information α to be teleported, and that for r≫1 the variances of these quantities blow up to infinity for κ/χ≤1, while they approach finite values for κ/χ>1. Here, χ represents the intensity of a degenerate parametric process, and κ the relaxation rate due to the interaction with the environment. The blow-up of the variances of the OSF and PDF guarantees higher security against eavesdropping. With the blow-up of the variances, the height of the PDF becomes small because of the normalization of probability, while the height of the OSF approaches 1, indicating a higher performance of the quantum teleportation. We also found that in the limit κ/χ≫1 the variances of both the OSF and PDF for any value of r (>0) reduce to 1, which is the same value as in the case r=0, i.e., no entanglement.
ERIC Educational Resources Information Center
Gray, G. Susan; Grajko, Philip F.
Responses from 230 New York State school districts were analyzed to determine the impact of the new State handicapped regulations with regard to financial impact, meeting the 30-day time period between initial referral of a handicapped child and board action, variances, and programming and placement according to 4 criteria. In general, small,…
NASA Astrophysics Data System (ADS)
Castanier, Eric; Paterne, Loic; Louis, Céline
2017-09-01
In nuclear engineering, both time and precision must be managed. In shielding design especially, greater accuracy and efficiency are needed to reduce cost (shielding thickness optimization), and for this 3D codes are used. In this paper, we examine whether the CADIS method can easily be applied to the shielding design of small pipes that pass through large concrete walls. We assess the impact of the weight windows (WW) generated by the 3D deterministic code ATTILA versus WW generated directly by MCNP (an iterative, manual process). The comparison is based on the quality of the convergence (estimated relative error (σ), Variance of Variance (VOV) and Figure of Merit (FOM)), on time (computing time plus modelling) and on the effort required of the engineer.
Moghaddar, N; van der Werf, J H J
2017-12-01
The objectives of this study were to estimate the additive and dominance variance components of several weight and ultrasound-scanned body composition traits in purebred and combined cross-bred sheep populations based on single nucleotide polymorphism (SNP) marker genotypes, and then to investigate the effect of fitting additive and dominance effects on the accuracy of genomic evaluation. Additive and dominance variance components were estimated in a mixed model equation based on "average information restricted maximum likelihood" using additive and dominance (co)variances between animals calculated from 48,599 SNP marker genotypes. Genomic prediction was based on genomic best linear unbiased prediction (GBLUP), and the accuracy of prediction was assessed based on a random 10-fold cross-validation. Across different weight and scanned body composition traits, dominance variance ranged from 0.0% to 7.3% of the phenotypic variance in the purebred population and from 7.1% to 19.2% in the combined cross-bred population. In the combined cross-bred population, the range of dominance variance decreased to 3.1% to 9.9% after accounting for heterosis effects. Accounting for dominance effects significantly improved the likelihood of the fitted model in the combined cross-bred population. This study showed a substantial dominance genetic variance for weight and ultrasound-scanned body composition traits, particularly in the cross-bred population; however, the improvement in the accuracy of genomic breeding values was small and statistically not significant. Dominance variance estimates in the combined cross-bred population could be overestimated if heterosis is not fitted in the model. © 2017 Blackwell Verlag GmbH.
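One common way to set up such an additive-plus-dominance genomic model is through two marker-based relationship matrices. The sketch below uses a VanRaden-type additive matrix and a heterozygosity-based dominance matrix; the exact parameterization the authors fitted may differ in detail.

```python
import numpy as np

def grm_additive_dominance(genotypes):
    """Additive and dominance genomic relationship matrices from an
    (individuals x SNPs) matrix of 0/1/2 genotype codes (sketch)."""
    geno = np.asarray(genotypes, dtype=float)
    p = geno.mean(axis=0) / 2.0                        # allele frequencies
    Z = geno - 2.0 * p                                 # centred additive codes
    G_a = Z @ Z.T / np.sum(2.0 * p * (1.0 - p))
    het = (geno == 1).astype(float)                    # heterozygote indicator
    W = het - 2.0 * p * (1.0 - p)                      # dominance codes
    G_d = W @ W.T / np.sum((2.0 * p * (1.0 - p)) ** 2)
    return G_a, G_d
```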
Vowel category dependence of the relationship between palate height, tongue height, and oral area.
Hasegawa-Johnson, Mark; Pizza, Shamala; Alwan, Abeer; Cha, Jul Setsu; Haker, Katherine
2003-06-01
This article evaluates intertalker variance of oral area, logarithm of the oral area, tongue height, and formant frequencies as a function of vowel category. The data consist of coronal magnetic resonance imaging (MRI) sequences and acoustic recordings of 5 talkers, each producing 11 different vowels. Tongue height (left, right, and midsagittal), palate height, and oral area were measured in 3 coronal sections anterior to the oropharyngeal bend and were subjected to multivariate analysis of variance, variance ratio analysis, and regression analysis. The primary finding of this article is that oral area (between palate and tongue) showed less intertalker variance during production of vowels with an oral place of articulation (palatal and velar vowels) than during production of vowels with a uvular or pharyngeal place of articulation. Although oral area variance is place dependent, percentage variance (log area variance) is not place dependent. Midsagittal tongue height in the molar region was positively correlated with palate height during production of palatal vowels, but not during production of nonpalatal vowels. Taken together, these results suggest that small oral areas are characterized by relatively talker-independent vowel targets and that meeting these talker-independent targets is important enough that each talker adjusts his or her own tongue height to compensate for talker-dependent differences in constriction anatomy. Computer simulation results are presented to demonstrate that these results may be explained by an acoustic control strategy: When talkers with very different anatomical characteristics try to match talker-independent formant targets, the resulting area variances are minimized near the primary vocal tract constriction.
Statistical aspects of quantitative real-time PCR experiment design.
Kitchen, Robert R; Kubista, Mikael; Tichopad, Ales
2010-04-01
Experiments using quantitative real-time PCR to test hypotheses are limited by technical and biological variability; we seek to minimise sources of confounding variability through optimum use of biological and technical replicates. The quality of an experiment design is commonly assessed by calculating its prospective power. Such calculations rely on knowledge of the expected variances of the measurements of each group of samples and the magnitude of the treatment effect; the estimation of which is often uninformed and unreliable. Here we introduce a method that exploits a small pilot study to estimate the biological and technical variances in order to improve the design of a subsequent large experiment. We measure the variance contributions at several 'levels' of the experiment design and provide a means of using this information to predict both the total variance and the prospective power of the assay. A validation of the method is provided through a variance analysis of representative genes in several bovine tissue-types. We also discuss the effect of normalisation to a reference gene in terms of the measured variance components of the gene of interest. Finally, we describe a software implementation of these methods, powerNest, that gives the user the opportunity to input data from a pilot study and interactively modify the design of the assay. The software automatically calculates expected variances, statistical power, and optimal design of the larger experiment. powerNest enables the researcher to minimise the total confounding variance and maximise prospective power for a specified maximum cost for the large study. Copyright 2010 Elsevier Inc. All rights reserved.
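The power calculation sketched below illustrates the idea for a simplified two-level (biological x technical replicate) design: pilot estimates of the variance components give the variance of a group mean, and a noncentral-t computation gives prospective power. powerNest itself handles more levels and cost constraints; the function and argument names here are ours.

```python
import numpy as np
from scipy import stats

def prospective_power(var_bio, var_tech, n_bio, n_tech, effect, alpha=0.05):
    """Prospective power of a two-group comparison with n_bio biological
    replicates, each measured n_tech times (simplified two-level sketch)."""
    var_mean = var_bio / n_bio + var_tech / (n_bio * n_tech)
    se = np.sqrt(2.0 * var_mean)              # standard error of the difference
    df = 2 * (n_bio - 1)
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, df)
    ncp = effect / se                         # noncentrality parameter
    return 1.0 - stats.nct.cdf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)
```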
Identifying students with dyslexia in higher education.
Tops, Wim; Callens, Maaike; Lammertyn, Jan; Van Hees, Valérie; Brysbaert, Marc
2012-10-01
An increasing number of students with dyslexia enter higher education. As a result, there is a growing need for standardized diagnosis. Previous research has suggested that a small number of tests may suffice to reliably assess students with dyslexia, but these studies were based on post hoc discriminant analysis, which tends to overestimate the percentage of systematic variance, and were limited to the English language (and the Anglo-Saxon education system). Therefore, we repeated the research in a non-English language (Dutch) and we selected variables on the basis of a prediction analysis. The results of our study confirm that it is not necessary to administer a wide range of tests to diagnose dyslexia in (young) adults. Three tests sufficed: word reading, word spelling and phonological awareness, in line with the proposal that higher education students with dyslexia continue to have specific problems with reading and writing. We also show that a traditional postdiction analysis selects more variables of importance than the prediction analysis. However, these extra variables explain study-specific variance and do not result in more predictive power of the model.
NASA Astrophysics Data System (ADS)
Kudryavtsev, O.; Rodochenko, V.
2018-03-01
We propose a new general numerical method aimed at solving integro-differential equations with variable coefficients. The problem under consideration arises in finance, in the context of pricing barrier options in a wide class of stochastic volatility models with jumps. To handle the effect of the correlation between the price and the variance, we use a suitable substitution for the processes. Then we construct a Markov-chain approximation for the variance process on small time intervals and apply a maturity randomization technique. The result is a system of boundary problems for integro-differential equations with constant coefficients on the line in each vertex of the chain. We solve the arising problems using a numerical Wiener-Hopf factorization method. The approximate formulae for the factors are efficiently implemented by means of the Fast Fourier Transform. Finally, we use a recurrent procedure that moves backwards in time on the variance tree. We demonstrate the convergence of the method using Monte-Carlo simulations and compare our results with the results obtained by the Wiener-Hopf method with closed-form expressions for the factors.
Simple display system of mechanical properties of cells and their dispersion.
Shimizu, Yuji; Kihara, Takanori; Haghparast, Seyed Mohammad Ali; Yuba, Shunsuke; Miyake, Jun
2012-01-01
The mechanical properties of cells are unique indicators of their states and functions. However, it is difficult to recognize the degrees of these mechanical properties, due to the small size of cells and the broad distribution of their mechanical properties. Here, we developed a simple virtual reality system for presenting the mechanical properties of cells and their dispersion using a haptic device and a PC. This system simulates atomic force microscopy (AFM) nanoindentation experiments for floating cells in virtual environments. An operator can virtually position the AFM spherical probe over a round cell with the haptic handle on the PC monitor and feel the force interaction. The Young's modulus of mesenchymal stem cells and HEK293 cells in the floating state was measured by AFM. The distribution of the Young's modulus of these cells was broad, and the distribution complied with a log-normal pattern. To represent the mechanical properties together with the cell-to-cell variance, we used a log-normally distributed random number determined by the mode and variance values of the Young's modulus of these cells. The represented Young's modulus was determined for each touching event of the probe surface and the cell object, and the haptic device-generated force was calculated using a Hertz model corresponding to the indentation depth and the fixed Young's modulus value. Using this system, we can feel the mechanical properties and their dispersion in each cell type in real time. This system will help us not only recognize the degrees of mechanical properties of diverse cells but also share them with others.
Simple Display System of Mechanical Properties of Cells and Their Dispersion
Shimizu, Yuji; Kihara, Takanori; Haghparast, Seyed Mohammad Ali; Yuba, Shunsuke; Miyake, Jun
2012-01-01
The mechanical properties of cells are unique indicators of their states and functions. However, it is difficult to recognize the degrees of these mechanical properties, due to the small size of cells and the broad distribution of their mechanical properties. Here, we developed a simple virtual reality system for presenting the mechanical properties of cells and their dispersion using a haptic device and a PC. This system simulates atomic force microscopy (AFM) nanoindentation experiments for floating cells in virtual environments. An operator can virtually position the AFM spherical probe over a round cell with the haptic handle on the PC monitor and feel the force interaction. The Young's modulus of mesenchymal stem cells and HEK293 cells in the floating state was measured by AFM. The distribution of the Young's modulus of these cells was broad, and the distribution complied with a log-normal pattern. To represent the mechanical properties together with the cell-to-cell variance, we used a log-normally distributed random number determined by the mode and variance values of the Young's modulus of these cells. The represented Young's modulus was determined for each touching event of the probe surface and the cell object, and the haptic device-generated force was calculated using a Hertz model corresponding to the indentation depth and the fixed Young's modulus value. Using this system, we can feel the mechanical properties and their dispersion in each cell type in real time. This system will help us not only recognize the degrees of mechanical properties of diverse cells but also share them with others. PMID:22479595
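The per-touch force computation described above can be sketched as a Hertz contact model with a log-normally drawn Young's modulus. The log-normal parameters here are taken directly as the mean and standard deviation of the log; the paper derives them from the measured mode and variance, so this approximates the display logic rather than reproducing its code.

```python
import numpy as np

def haptic_touch_force(depth, mu_log, sigma_log, probe_radius, nu=0.5, rng=None):
    """Hertz-model force for a spherical probe indenting a cell whose
    Young's modulus is drawn from a log-normal distribution (sketch)."""
    rng = rng or np.random.default_rng()
    E = rng.lognormal(mu_log, sigma_log)       # Young's modulus for this touch
    return (4.0 / 3.0) * (E / (1.0 - nu**2)) * np.sqrt(probe_radius) * depth**1.5
```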
Space-Time Smoothing of Complex Survey Data: Small Area Estimation for Child Mortality.
Mercer, Laina D; Wakefield, Jon; Pantazis, Athena; Lutambi, Angelina M; Masanja, Honorati; Clark, Samuel
2015-12-01
Many people living in low and middle-income countries are not covered by civil registration and vital statistics systems. Consequently, a wide variety of other types of data including many household sample surveys are used to estimate health and population indicators. In this paper we combine data from sample surveys and demographic surveillance systems to produce small area estimates of child mortality through time. Small area estimates are necessary to understand geographical heterogeneity in health indicators when full-coverage vital statistics are not available. For this endeavor, spatio-temporal smoothing is beneficial to alleviate problems of data sparsity. The use of conventional hierarchical models requires careful thought since the survey weights may need to be considered to alleviate bias due to non-random sampling and non-response. The application that motivated this work is estimation of child mortality rates in five-year time intervals in regions of Tanzania. Data come from Demographic and Health Surveys conducted over the period 1991-2010 and two demographic surveillance system sites. We derive a variance estimator of under-five child mortality that accounts for the complex survey weighting. For our application, the hierarchical models we consider include random effects for area, time and survey, and we compare models using a variety of measures including the conditional predictive ordinate (CPO). The method we propose is implemented via the fast and accurate integrated nested Laplace approximation (INLA).
Surfzone vorticity in the presence of extreme bathymetric variability
NASA Astrophysics Data System (ADS)
Clark, D.; Elgar, S.; Raubenheimer, B.
2014-12-01
Surfzone vorticity was measured at Duck, NC, using a novel 5-m diameter vorticity sensor deployed in 1.75 m water depth. During the 4-week deployment the initially alongshore-uniform bathymetry developed 200-m long mega-cusps with alongshore vertical changes of 1.5 m or more. When waves were small and the vorticity sensor was seaward of the surfzone, vorticity variance and mean vorticity varied with the tidally modulated water depth, consistent with a net seaward flux of surfzone-generated vorticity. Vorticity variance increased with incident wave heights up to 2 m. However, vorticity variance remained relatively constant for incident wave heights above 2 m, suggesting that eddy energy may become saturated in the inner surfzone during large wave events. In the presence of mega-cusps the mean vorticity (shear) is often large and generated by bathymetrically controlled rip currents, while vorticity variance remains strongly correlated with the incident wave height. Funded by NSF, ASD(R&E), and WHOI Coastal Ocean Institute.
The variance of the locally measured Hubble parameter explained with different estimators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odderskov, Io; Hannestad, Steen; Brandbyge, Jacob, E-mail: isho07@phys.au.dk, E-mail: sth@phys.au.dk, E-mail: jacobb@phys.au.dk
We study the expected variance of measurements of the Hubble constant, H_0, as calculated in either linear perturbation theory or using non-linear velocity power spectra derived from N-body simulations. We compare the variance with that obtained by carrying out mock observations in the N-body simulations, and show that the estimator typically used for the local Hubble constant in studies based on perturbation theory is different from the one used in studies based on N-body simulations. The latter gives larger weight to distant sources, which explains why studies based on N-body simulations tend to obtain a smaller variance than that found from studies based on the power spectrum. Although both approaches result in a variance too small to explain the discrepancy between the value of H_0 from CMB measurements and the value measured in the local universe, these considerations are important in light of the percent-level determination of the Hubble constant in the local universe.
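The estimator difference discussed above can be made concrete: a perturbation-theory-style analysis typically averages v/d over sources, while the N-body-style fit of a slope gives distant sources more weight. A toy sketch under these assumptions, not the paper's mock-observation pipeline:

```python
import numpy as np

def local_h0_estimators(v, d, sigma_v):
    """Two toy estimators of the local Hubble constant from radial
    velocities v and distances d (illustrative of the weighting
    difference only)."""
    w = 1.0 / np.asarray(sigma_v) ** 2
    h0_mean_ratio = np.sum(w * v / d) / np.sum(w)   # weights nearby sources more
    h0_slope = np.sum(v * d) / np.sum(d * d)        # least-squares slope of v on d
    return h0_mean_ratio, h0_slope
```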
Overlap between treatment and control distributions as an effect size measure in experiments.
Hedges, Larry V; Olkin, Ingram
2016-03-01
The proportion π of treatment group observations that exceed the control group mean has been proposed as an effect size measure for experiments that randomly assign independent units into 2 groups. We give the exact distribution of a simple estimator of π based on the standardized mean difference and use it to study the small sample bias of this estimator. We also give the minimum variance unbiased estimator of π under 2 models, one in which the variance of the mean difference is known and one in which the variance is unknown. We show how to use the relation between the standardized mean difference and the overlap measure to compute confidence intervals for π and show that these results can be used to obtain unbiased estimators, large sample variances, and confidence intervals for 3 related effect size measures based on the overlap. Finally, we show how the effect size π can be used in a meta-analysis. (c) 2016 APA, all rights reserved.
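The simple plug-in estimator of π follows from normality: if the standardized mean difference is d, the proportion of treatment observations above the control mean is estimated by Φ(d). A minimal sketch (the paper's exact and minimum variance unbiased estimators refine this):

```python
from statistics import NormalDist

def overlap_pi(mean_treat, mean_ctrl, sd_pooled):
    """Plug-in estimate of pi, the proportion of treatment observations
    exceeding the control mean, via the standardized mean difference."""
    d = (mean_treat - mean_ctrl) / sd_pooled
    return NormalDist().cdf(d)
```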
NASA Astrophysics Data System (ADS)
Reis, D. S.; Stedinger, J. R.; Martins, E. S.
2005-10-01
This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), which is sensitive to the likely values of the model error variance when it is small relative to the sampling error in the at-site estimator.
Fukayama, Osamu; Taniguchi, Noriyuki; Suzuki, Takafumi; Mabuchi, Kunihiko
2008-01-01
An online brain-machine interface (BMI) in the form of a small vehicle, the 'RatCar,' has been developed. A rat had neural electrodes implanted in its primary motor cortex and basal ganglia regions to continuously record neural signals. A linear state-space model then represents the correlation between the recorded neural signals and the locomotion states (i.e., moving velocity and azimuthal variances) of the rat. The model parameters were set so as to minimize estimation errors, and the locomotion states were estimated from neural firing rates using a Kalman filter algorithm. The results showed only small oscillations, achieving smooth control of the vehicle in spite of fluctuating firing rates and the noise applied to the model. The major variation of the model variables converged within the first 30 seconds of the experiments and held for the entire one-hour session.
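A generic Kalman-filter decoding loop of the kind described, mapping firing-rate observations to locomotion-state estimates through a fitted linear state-space model; the matrix names and shapes are assumptions, not the RatCar implementation:

```python
import numpy as np

def kalman_decode(z_seq, A, C, Q, R, x0, P0):
    """Decode locomotion states from neural firing rates with a standard
    Kalman filter (x: state vector, z: firing-rate observation vector)."""
    x, P = np.asarray(x0, float), np.asarray(P0, float)
    estimates = []
    for z in z_seq:
        x, P = A @ x, A @ P @ A.T + Q                  # predict
        S = C @ P @ C.T + R                            # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)                 # Kalman gain
        x = x + K @ (z - C @ x)                        # update with firing rates
        P = (np.eye(len(x)) - K @ C) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```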
The q-dependent detrended cross-correlation analysis of stock market
NASA Astrophysics Data System (ADS)
Zhao, Longfeng; Li, Wei; Fenu, Andrea; Podobnik, Boris; Wang, Yougui; Stanley, H. Eugene
2018-02-01
Properties of the q-dependent cross-correlation matrices of the stock market have been analyzed by using random matrix theory and complex networks. The correlation structures of the fluctuations at different magnitudes have unique properties. The cross-correlations among small fluctuations are much stronger than those among large fluctuations. The large and small fluctuations are dominated by different groups of stocks. We use complex network representation to study these q-dependent matrices and discover some new identities. By utilizing those q-dependent correlation-based networks, we are able to construct portfolios of more independent stocks which consistently perform better. The optimal multifractal order for portfolio optimization is around q = 2 under the mean-variance portfolio framework, and q ∈ [2, 6] under the expected shortfall criterion. These results have deepened our understanding of the collective behavior of the complex financial system.
The synchronicity between the stock and the stock index via information in market
NASA Astrophysics Data System (ADS)
Gao, Hai-Ling; Li, Jiang-Cheng; Guo, Wei; Mei, Dong-Cheng
2018-02-01
The synchronicity between the stock and the stock-index in a market system is investigated. The results show that: (i) the synchronicity between the stock and the stock-index increases with the rising degree of market information capitalized into stock prices within a certain range; (ii) the synchronicity decreases for large firm-specific information; (iii) the stock return synchronicity is small when noise trading is large; however, the noise variance facilitates synchronization within certain regimes. These findings may be helpful in understanding the effect of market information on synchronicity, especially the responses of synchronicity to firm-specific information and noise trading.
Noise and drift analysis of non-equally spaced timing data
NASA Technical Reports Server (NTRS)
Vernotte, F.; Zalamansky, G.; Lantz, E.
1994-01-01
Generally, it is possible to obtain equally spaced timing data from oscillators. The measurement of the drifts and noises affecting oscillators is then performed by using a variance (Allan variance, modified Allan variance, or time variance) or a system of several variances (multivariance method). However, in some cases, several samples, or even several sets of samples, are missing. In the case of millisecond pulsar timing data, for instance, observations are quite irregularly spaced in time. Nevertheless, since some observations are very close together (one minute) and since the timing data sequence is very long (more than ten years), information on both short-term and long-term stability is available. Unfortunately, a direct variance analysis is not possible without interpolating missing data. Different interpolation algorithms (linear interpolation, cubic spline) are used to calculate variances in order to verify that they neither lose information nor add erroneous information. A comparison of the results of the different algorithms is given. Finally, the multivariance method was adapted to the measurement sequence of the millisecond pulsar timing data: the responses of each variance of the system are calculated for each type of noise and drift, with the same missing samples as in the pulsar timing sequence. An estimation of precision, dynamics, and separability of this method is given.
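The simplest of the infill strategies compared above, linear interpolation onto a regular grid followed by a standard non-overlapping Allan variance, can be sketched as follows; the function and argument names are ours.

```python
import numpy as np

def allan_variance_interpolated(t, x, tau, step):
    """Allan variance at averaging time tau for irregularly sampled data
    (t, x), after linear interpolation onto a grid of spacing `step`."""
    t_grid = np.arange(t[0], t[-1], step)
    x_grid = np.interp(t_grid, t, x)                  # linear-interpolation infill
    m = max(1, int(round(tau / step)))                # samples per window
    n = len(x_grid) // m * m
    means = x_grid[:n].reshape(-1, m).mean(axis=1)    # adjacent window averages
    return 0.5 * np.mean(np.diff(means) ** 2)
```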
The Conduct System and Its Influence on Student Learning
ERIC Educational Resources Information Center
Stimpson, Matthew T.; Janosik, Steven M.
2015-01-01
In this study, 7 items were used to define a composite variable that measures the perceived effectiveness of student conduct systems. Multivariate Analysis of Variance (MANOVA) was used to test the relationship between perceived level of system effectiveness and self-reported student learning. In the analyses, 49% of the variance in reported…
Brown, David M; Juarez, Juan C; Brown, Andrea M
2013-12-01
A laser differential image-motion monitor (DIMM) system was designed and constructed as part of a turbulence characterization suite during the DARPA free-space optical experimental network experiment (FOENEX) program. The developed link measurement system measures the atmospheric coherence length (r0), atmospheric scintillation, and power in the bucket for the 1550 nm band. DIMM measurements are made with two separate apertures coupled to a single InGaAs camera. The angle of arrival (AoA) for the wavefront at each aperture can be calculated based on focal spot movements imaged by the camera. By utilizing a single camera for the simultaneous measurement of the focal spots, the correlation of the variance in the AoA allows a straightforward computation of r0, as in traditional DIMM systems. Standard measurements of scintillation and power in the bucket are made with the same apertures by redirecting a percentage of the incoming signals to InGaAs detectors integrated with logarithmic amplifiers for high sensitivity and high dynamic range. By leveraging two small apertures, the instrument forms a small-size, low-weight configuration for mounting to actively tracking laser communication terminals for characterizing link performance.
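For reference, the standard conversion from the longitudinal differential image-motion variance of a two-aperture DIMM to r0 uses the Sarazin and Roddier scaling; the coefficients below are the commonly quoted ones, and the FOENEX instrument's calibration may differ.

```python
def r0_from_dimm(var_long, wavelength, aperture, baseline):
    """Fried parameter r0 (m) from the longitudinal AoA difference variance
    (rad^2) of a DIMM with aperture diameter D and baseline d, using
    sigma^2 = 2 lam^2 r0^(-5/3) (0.179 D^(-1/3) - 0.0968 d^(-1/3))."""
    k = 2.0 * wavelength**2 * (0.179 * aperture ** (-1.0 / 3.0)
                               - 0.0968 * baseline ** (-1.0 / 3.0))
    return (var_long / k) ** (-3.0 / 5.0)
```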
Promiscuous mating in the harem-roosting fruit bat, Cynopterus sphinx.
Garg, Kritika M; Chattopadhyay, Balaji; Doss D, Paramanatha Swami; A K, Vinoth Kumar; Kandula, Sripathi; Ramakrishnan, Uma
2012-08-01
Observations on mating behaviours and strategies guide our understanding of mating systems and variance in reproductive success. However, the presence of cryptic strategies often results in situations where social mating system is not reflective of genetic mating system. We present such a study of the genetic mating system of a harem-forming bat Cynopterus sphinx where harems may not be true indicators of male reproductive success. This temporal study using data from six seasons on paternity reveals that social harem assemblages do not play a role in the mating system, and variance in male reproductive success is lower than expected assuming polygynous mating. Further, simulations reveal that the genetic mating system is statistically indistinguishable from promiscuity. Our results are in contrast to an earlier study that demonstrated high variance in male reproductive success. Although an outcome of behavioural mating patterns, standardized variance in male reproductive success (I(m)) affects the opportunity for sexual selection. To gain a better understanding of the evolutionary implications of promiscuity for mammals in general, we compared our estimates of I(m) and total opportunity for sexual selection (I(m) /I(f), where I(f) is standardized variance in female reproductive success) with those of other known promiscuous species. We observed a broad range of I(m) /I(f) values across known promiscuous species, indicating our poor understanding of the evolutionary implications of promiscuous mating. © 2012 Blackwell Publishing Ltd.
Ensemble Kalman filters for dynamical systems with unresolved turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grooms, Ian, E-mail: grooms@cims.nyu.edu; Lee, Yoonsang; Majda, Andrew J.
Ensemble Kalman filters are developed for turbulent dynamical systems where the forecast model does not resolve all the active scales of motion. Coarse-resolution models are intended to predict the large-scale part of the true dynamics, but observations invariably include contributions from both the resolved large scales and the unresolved small scales. The error due to the contribution of unresolved scales to the observations, called ‘representation’ or ‘representativeness’ error, is often included as part of the observation error, in addition to the raw measurement error, when estimating the large-scale part of the system. It is here shown how stochastic superparameterization (a multiscale method for subgridscale parameterization) can be used to provide estimates of the statistics of the unresolved scales. In addition, a new framework is developed wherein small-scale statistics can be used to estimate both the resolved and unresolved components of the solution. The one-dimensional test problem from dispersive wave turbulence used here is computationally tractable yet is particularly difficult for filtering because of the non-Gaussian extreme event statistics and substantial small scale turbulence: a shallow energy spectrum proportional to k^(−5/6) (where k is the wavenumber) results in two-thirds of the climatological variance being carried by the unresolved small scales. Because the unresolved scales contain so much energy, filters that ignore the representation error fail utterly to provide meaningful estimates of the system state. Inclusion of a time-independent climatological estimate of the representation error in a standard framework leads to inaccurate estimates of the large-scale part of the signal; accurate estimates of the large scales are only achieved by using stochastic superparameterization to provide evolving, large-scale dependent predictions of the small-scale statistics. Again, because the unresolved scales contain so much energy, even an accurate estimate of the large-scale part of the system does not provide an accurate estimate of the true state. By providing simultaneous estimates of both the large- and small-scale parts of the solution, the new framework is able to provide accurate estimates of the true system state.
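The role of representation error in the analysis step can be seen in a scalar-observation, perturbed-observation EnKF sketch, where the observation-error variance is the sum of instrument error and a (here fixed) estimate of unresolved-scale variance; the paper's framework instead lets stochastic superparameterization supply an evolving estimate.

```python
import numpy as np

def enkf_update(ensemble, y_obs, H, r_instr, r_repr, rng=None):
    """Perturbed-observation EnKF analysis for one scalar observation.

    ensemble: (n_members, n_state); H: (n_state,) observation vector;
    r_instr + r_repr: instrument plus representation error variances.
    """
    rng = rng or np.random.default_rng()
    X = np.asarray(ensemble, float)
    Hx = X @ H                                      # predicted observations
    R = r_instr + r_repr                            # total observation error
    P_hh = np.var(Hx, ddof=1) + R                   # innovation variance
    P_xh = (X - X.mean(0)).T @ (Hx - Hx.mean()) / (len(Hx) - 1)
    K = P_xh / P_hh                                 # Kalman gain
    y_pert = y_obs + rng.normal(0.0, np.sqrt(R), size=len(Hx))
    return X + np.outer(y_pert - Hx, K)             # updated ensemble
```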
TM and Rehabilitation: Another View.
ERIC Educational Resources Information Center
Rahav, Giora
1980-01-01
In a secondary analysis of the Abrams and Siegel evaluation of the Transcendental Meditation program at Folsom prison, experimental groups fared better than control groups, although the treatment explains only a small proportion of the variance. (Author)
Variance adaptation in navigational decision making
NASA Astrophysics Data System (ADS)
Gershow, Marc; Gepner, Ruben; Wolk, Jason; Wadekar, Digvijay
Drosophila larvae navigate their environments using a biased random walk strategy. A key component of this strategy is the decision to initiate a turn (change direction) in response to declining conditions. We modeled this decision as the output of a Linear-Nonlinear-Poisson cascade and used reverse correlation with visual and fictive olfactory stimuli to find the parameters of this model. Because the larva responds to changes in stimulus intensity, we used stimuli with uncorrelated normally distributed intensity derivatives, i.e. Brownian processes, and took the stimulus derivative as the input to our LNP cascade. In this way, we were able to present stimuli with 0 mean and controlled variance. We found that the nonlinear rate function depended on the variance in the stimulus input, allowing larvae to respond more strongly to small changes in low-noise compared to high-noise environments. We measured the rate at which the larva adapted its behavior following changes in stimulus variance, and found that larvae adapted more quickly to increases in variance than to decreases, consistent with the behavior of an optimal Bayes estimator. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.
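A minimal forward model of the LNP cascade used in the reverse-correlation analysis above: the stimulus derivative is filtered, passed through a sigmoidal rate function (whose gain could be rescaled by stimulus variance to mimic the adaptation described), and turned into Poisson turn events. All parameter names are illustrative.

```python
import numpy as np

def lnp_turn_events(stim_deriv, kernel, gain, threshold, dt=0.05, rng=None):
    """Simulate turn events from a Linear-Nonlinear-Poisson cascade driven
    by the stimulus derivative (sketch of the model class, not fitted values)."""
    rng = rng or np.random.default_rng()
    drive = np.convolve(stim_deriv, kernel, mode="same")   # linear filter stage
    rate = gain / (1.0 + np.exp(-(drive - threshold)))     # sigmoidal nonlinearity
    return rng.poisson(rate * dt)                          # Poisson event counts
```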
Maximally Informative Stimuli and Tuning Curves for Sigmoidal Rate-Coding Neurons and Populations
NASA Astrophysics Data System (ADS)
McDonnell, Mark D.; Stocks, Nigel G.
2008-08-01
A general method for deriving maximally informative sigmoidal tuning curves for neural systems with small normalized variability is presented. The optimal tuning curve is a nonlinear function of the cumulative distribution function of the stimulus and depends on the mean-variance relationship of the neural system. The derivation is based on a known relationship between Shannon's mutual information and Fisher information, and the optimality of Jeffreys' prior. It relies on the existence of closed-form solutions to the converse problem of optimizing the stimulus distribution for a given tuning curve. It is shown that maximum mutual information corresponds to constant Fisher information only if the stimulus is uniformly distributed. As an example, the case of sub-Poisson binomial firing statistics is analyzed in detail.
Mud crab ecology encourages site-specific approaches to fishery management
NASA Astrophysics Data System (ADS)
Dumas, P.; Léopold, M.; Frotté, L.; Peignon, C.
2012-01-01
Little is known about the effects of mud crab population patterns on their exploitation. We used complementary approaches (experimental, fisher-based) to investigate how small-scale variations in density, size and sex-ratio related to the ecology of S. serrata may impact fishing practices in New Caledonia. Crabs were measured/sexed across 9 stations in contrasting mangrove systems between 2007 and 2009. Stations were described and classified into different kinds of mangrove forests (coastal, riverine, and estuarine); vegetation cover was qualitatively described at the station scale. Annual catch was used as an indicator of fishing pressure. Middle-scale environmental factors (oceanic influence, vegetation cover) contributed significantly to crab density (GLM, 84.8% of variance) and, to a lesser extent, to crab size and sex-ratio (<30%). While small-scale natural factors contributed significantly to population structure, current fishing levels had no impact on mud crabs. The observed, ecologically driven heterogeneity of the crab resource has strong social implications in the Pacific area, where land tenure systems and traditional access rights prevent most fishers from freely selecting their harvest zones. This offers a great opportunity to encourage site-specific management of mud crab fisheries.
ERIC Educational Resources Information Center
Tipton, Elizabeth; Pustejovsky, James E.
2015-01-01
Randomized experiments are commonly used to evaluate the effectiveness of educational interventions. The goal of the present investigation is to develop small-sample corrections for multiple contrast hypothesis tests (i.e., F-tests) such as the omnibus test of meta-regression fit or a test for equality of three or more levels of a categorical…
Systems Engineering Programmatic Estimation Using Technology Variance
NASA Technical Reports Server (NTRS)
Mog, Robert A.
2000-01-01
Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "return" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.
A comparison between different error modeling of MEMS applied to GPS/INS integrated systems.
Quinchia, Alex G; Falco, Gianluca; Falletti, Emanuela; Dovis, Fabio; Ferrer, Carles
2013-07-24
Advances in the development of micro-electromechanical systems (MEMS) have made possible the fabrication of cheap and small dimension accelerometers and gyroscopes, which are being used in many applications where the global positioning system (GPS) and the inertial navigation system (INS) integration is carried out, i.e., identifying track defects, terrestrial and pedestrian navigation, unmanned aerial vehicles (UAVs), stabilization of many platforms, etc. Although these MEMS sensors are low-cost, they present different errors, which degrade the accuracy of the navigation systems in a short period of time. Therefore, a suitable modeling of these errors is necessary in order to minimize them and, consequently, improve the system performance. In this work, the most used techniques currently to analyze the stochastic errors that affect these sensors are shown and compared: we examine in detail the autocorrelation, the Allan variance (AV) and the power spectral density (PSD) techniques. Subsequently, an analysis and modeling of the inertial sensors, which combines autoregressive (AR) filters and wavelet de-noising, is also achieved. Since a low-cost INS (MEMS grade) presents error sources with short-term (high-frequency) and long-term (low-frequency) components, we introduce a method that compensates for these error terms by doing a complete analysis of Allan variance, wavelet de-noising and the selection of the level of decomposition for a suitable combination between these techniques. Finally, in order to assess the stochastic models obtained with these techniques, the Extended Kalman Filter (EKF) of a loosely-coupled GPS/INS integration strategy is augmented with different states. Results show a comparison between the proposed method and the traditional sensor error models under GPS signal blockages using real data collected in urban roadways.
A Comparison between Different Error Modeling of MEMS Applied to GPS/INS Integrated Systems
Quinchia, Alex G.; Falco, Gianluca; Falletti, Emanuela; Dovis, Fabio; Ferrer, Carles
2013-01-01
Advances in the development of micro-electromechanical systems (MEMS) have made possible the fabrication of cheap and small dimension accelerometers and gyroscopes, which are being used in many applications where the global positioning system (GPS) and the inertial navigation system (INS) integration is carried out, i.e., identifying track defects, terrestrial and pedestrian navigation, unmanned aerial vehicles (UAVs), stabilization of many platforms, etc. Although these MEMS sensors are low-cost, they present different errors, which degrade the accuracy of the navigation systems in a short period of time. Therefore, a suitable modeling of these errors is necessary in order to minimize them and, consequently, improve the system performance. In this work, the most used techniques currently to analyze the stochastic errors that affect these sensors are shown and compared: we examine in detail the autocorrelation, the Allan variance (AV) and the power spectral density (PSD) techniques. Subsequently, an analysis and modeling of the inertial sensors, which combines autoregressive (AR) filters and wavelet de-noising, is also achieved. Since a low-cost INS (MEMS grade) presents error sources with short-term (high-frequency) and long-term (low-frequency) components, we introduce a method that compensates for these error terms by doing a complete analysis of Allan variance, wavelet de-noising and the selection of the level of decomposition for a suitable combination between these techniques. Finally, in order to assess the stochastic models obtained with these techniques, the Extended Kalman Filter (EKF) of a loosely-coupled GPS/INS integration strategy is augmented with different states. Results show a comparison between the proposed method and the traditional sensor error models under GPS signal blockages using real data collected in urban roadways. PMID:23887084
Body Composition and Somatotype of Male and Female Nordic Skiers
ERIC Educational Resources Information Center
Sinning, Wayne E.; And Others
1977-01-01
Anthropometric measurements (body composition and somatotype characteristics) for male and female Nordic skiers showed small values for measures of variance, suggesting that the subjects represented a select body type for the sport. (Author/MJB)
Strategies for Selecting Crosses Using Genomic Prediction in Two Wheat Breeding Programs.
Lado, Bettina; Battenfield, Sarah; Guzmán, Carlos; Quincke, Martín; Singh, Ravi P; Dreisigacker, Susanne; Peña, R Javier; Fritz, Allan; Silva, Paula; Poland, Jesse; Gutiérrez, Lucía
2017-07-01
The single most important decision in plant breeding programs is the selection of appropriate crosses. The ideal cross would provide superior predicted progeny performance and enough diversity to maintain genetic gain. The aim of this study was to compare the best crosses predicted using combinations of mid-parent value and variance prediction accounting for linkage disequilibrium (V_LD) or assuming linkage equilibrium (V_LE). After predicting the mean and the variance of each cross, we selected crosses based on mid-parent value, the top 10% of the progeny, and weighted mean and variance within progenies for grain yield, grain protein content, mixing time, and loaf volume in two applied wheat (Triticum aestivum L.) breeding programs: Instituto Nacional de Investigación Agropecuaria (INIA) Uruguay and CIMMYT Mexico. Although the variance of the progeny is important to increase the chances of finding superior individuals from transgressive segregation, we observed that the mid-parent values of the crosses drove the genetic gain while the variance of the progeny had a small impact on genetic gain for grain yield. However, the relative importance of the variance of the progeny was larger for quality traits. Overall, the genomic resources and the statistical models are now available to plant breeders to predict both the performance of breeding lines per se as well as the value of progeny from any potential crosses. Copyright © 2017 Crop Science Society of America.
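The weighting of progeny mean and variance described above is often expressed as a usefulness-type criterion; a one-line sketch, where the selection intensity for roughly the top 10% of the progeny is about 1.755 (this is our simplification of the paper's cross-selection rules, not its code):

```python
import numpy as np

def usefulness(mid_parent_value, progeny_variance, intensity=1.755):
    """Usefulness-type score for ranking crosses: predicted mid-parent value
    plus intensity-weighted progeny standard deviation (sketch)."""
    return mid_parent_value + intensity * np.sqrt(progeny_variance)
```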
Microstructure of the IMF turbulences at 2.5 AU
NASA Technical Reports Server (NTRS)
Mavromichalaki, H.; Vassilaki, A.; Marmatsouri, L.; Moussas, X.; Quenby, J. J.; Smith, E. J.
1995-01-01
A detailed analysis of small-period (15-900 sec) magnetohydrodynamic (MHD) turbulences of the interplanetary magnetic field (IMF) has been made using Pioneer-11 high time resolution data (0.75 sec) inside a Corotating Interaction Region (CIR) at a heliocentric distance of 2.5 AU in 1973. The methods used are hodogram analysis, minimum variance matrix analysis and coherence analysis. The minimum variance analysis gives evidence of linearly polarized wave modes. Coherence analysis has shown that the field fluctuations are dominated by fast magnetosonic modes with periods of 15 sec to 15 min. However, it is also shown that some small-amplitude Alfven waves are present in the trailing edge of this region, with characteristic periods of 15-200 sec. The observed wave modes are locally generated and possibly attributed to the scattering of Alfven wave energy into random magnetosonic waves.
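Minimum variance analysis itself is a small computation: the eigenvectors of the 3x3 field covariance (variance) matrix give the maximum-, intermediate- and minimum-variance directions. A generic sketch:

```python
import numpy as np

def minimum_variance_analysis(B):
    """Eigen-decomposition of the magnetic variance matrix of an (N x 3)
    field series; vecs[:, 0] is the minimum-variance direction."""
    M = np.cov(np.asarray(B, float).T)      # 3x3 covariance of components
    vals, vecs = np.linalg.eigh(M)          # eigenvalues in ascending order
    return vals, vecs
```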
Mean-Variance Hedging on Uncertain Time Horizon in a Market with a Jump
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kharroubi, Idris, E-mail: kharroubi@ceremade.dauphine.fr; Lim, Thomas, E-mail: lim@ensiie.fr; Ngoupeyou, Armand, E-mail: armand.ngoupeyou@univ-paris-diderot.fr
2013-12-15
In this work, we study the problem of mean-variance hedging with a random horizon T∧τ, where T is a deterministic constant and τ is a jump time of the underlying asset price process. We first formulate this problem as a stochastic control problem and relate it to a system of BSDEs with a jump. We then provide a verification theorem which gives the optimal strategy for the mean-variance hedging using the solution of the previous system of BSDEs. Finally, we prove that this system of BSDEs admits a solution via a decomposition approach coming from filtration enlargement theory.
Optimal design criteria - prediction vs. parameter estimation
NASA Astrophysics Data System (ADS)
Waldl, Helmut
2014-05-01
G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. However, the computation of the kriging variance, and even more so of the empirical kriging variance, is computationally very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that in practice the G-optimal design cannot really be found with currently available computer equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield fundamentally different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on the above Pareto frontier yields almost as good results as searching for the G-optimal design in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
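As a concrete illustration of why the G-criterion is expensive, the sketch below evaluates the maximum simple-kriging variance of one candidate design over a prediction grid; a G-optimal search repeats this inner maximization for every candidate design. Simple kriging with a known covariance function is assumed, which is cheaper than the empirical kriging variance discussed above.

```python
import numpy as np

def max_kriging_variance(design, grid, cov):
    """Maximum simple-kriging variance over `grid` for a given `design`;
    `cov(a, b)` must return the covariance matrix between point sets."""
    K = cov(design, design)                         # design covariance
    k = cov(design, grid)                           # design-to-grid covariances
    prior_var = np.diag(cov(grid, grid))
    reduction = np.einsum("ij,ij->j", k, np.linalg.solve(K, k))
    return np.max(prior_var - reduction)            # worst predictive variance
```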
The effects of small amounts of H2O on partial melting of model spinel lherzolite in the system CMAS
NASA Astrophysics Data System (ADS)
Liu, X.; St. C. Oneill, H.
2003-04-01
Water (H2O) is so effective at lowering the solidus temperatures of silicate systems that even small amounts of H2O are suspected to be important in the genesis of basaltic magmas. The realization that petrologically significant amounts of H2O can be stored in nominally anhydrous mantle minerals (olivine and pyroxenes) has fundamental implications for the understanding of partial melting in the mantle, for it implies that the role that H2O plays in mantle melting may not be appropriately described by models in which the melting is controlled by hydrous phases such as amphibole. Although the effect of water in suppressing the liquidus during crystallization is quite well understood, such observations do not provide direct quantitative information on the solidus. This is because liquidus crystallization occurs at constant major-element composition of the system, but at unbuffered component activities (high thermodynamic variance). By contrast, for partial melting at the solidus the major-element component activities are buffered by the coexisting crystalline phases (low variance), but the major-element composition of the melt can change as a function of added H2O. Accordingly, we have determined both the solidus temperature and the melt composition in the system CMAS with small additions of H2O, to 4 wt%, in equilibrium with the four-phase lherzolite assemblage of fo+opx+cpx+sp. Experiments were conducted at 1.1 GPa and temperatures from 1473 K to the dry solidus at 1593 K in a piston-cylinder apparatus. Starting materials were a pre-synthesised assemblage of fo+opx+cpx+sp, plus an oxide/hydroxide mix of approximately the anticipated melt composition. H2O was added as either Mg(OH)2 or Al(OH)3. The crystalline assemblage and melt starting mix were added as separate layers inside sealed Pt capsules, to ensure large volumes of crystal-free melt. After the run, doubly polished sections were prepared in order to analyse the quenched melt by FTIR spectroscopy, to quantify the amounts of H2O. This is necessary, as Pt capsules are to some extent open to H2 diffusion. All melts were found to contain CO2 (<0.7 wt%), which appears to come mainly from the hydroxide starting materials but also from C diffusion through the Pt capsule. Since CO2 is experimentally correlated with H2O, its presence significantly affects the interpretation of the results. Ignoring this complication, we find that 1 wt% H2O decreases the solidus by ~40 K; melt compositions do not change greatly, the main effect being a small decrease in MgO.
Genomic Prediction Accounting for Residual Heteroskedasticity
Ou, Zhining; Tempelman, Robert J.; Steibel, Juan P.; Ernst, Catherine W.; Bates, Ronald O.; Bello, Nora M.
2015-01-01
Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation, was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models, although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially for individuals of extreme genetic merit. PMID:26564950
The use of spatio-temporal correlation to forecast critical transitions
NASA Astrophysics Data System (ADS)
Karssenberg, Derek; Bierkens, Marc F. P.
2010-05-01
Complex dynamical systems may have critical thresholds at which the system shifts abruptly from one state to another. Such critical transitions have been observed in systems ranging from the human body to financial markets and the Earth system. Forecasting the timing of critical transitions before they are reached is of paramount importance because critical transitions are associated with a large shift in the dynamical regime of the system under consideration. However, it is hard to forecast critical transitions, because the state of the system shows relatively little change before the threshold is reached. Recently, it was shown that increased spatio-temporal autocorrelation and variance can serve as alternative early warning signals for critical transitions. However, thus far these second-order statistics have not been used for forecasting in a data assimilation framework. Here we show that the use of spatio-temporal autocorrelation and variance in the state of the system reduces the uncertainty in the predicted timing of critical transitions compared to classical approaches that use the value of the system state only. This is shown by assimilating observed spatio-temporal autocorrelation and variance into a dynamical system model using a Particle Filter. We adapt a well-studied distributed model of a logistically growing resource with a fixed grazing rate. The model describes the transition from an underexploited system with high resource biomass to overexploitation as grazing pressure crosses the critical threshold, which is a fold bifurcation. To represent limited prior information, we use a large variance in the prior probability distributions of model parameters and the system driver (grazing rate). First, we show that the rate of increase in spatio-temporal autocorrelation and variance prior to reaching the critical threshold is relatively consistent across the uncertainty range of the driver and parameter values used. This indicates that increases in spatio-temporal autocorrelation and variance are consistent predictors of a critical transition, even under the condition of a poorly defined system. Second, we perform data assimilation experiments using an artificial exhaustive data set generated by one realization of the model. To mimic real-world sampling, an observational data set is created from this exhaustive data set. This is done by sampling on a regular spatio-temporal grid, supplemented by sampling locations at a short distance. Spatial and temporal autocorrelation in this observational data set is calculated for different spatial and temporal separation (lag) distances. To assign appropriate weights to observations (here, autocorrelation values and variance) in the Particle Filter, the covariance matrix of the error in these observations is required. This covariance matrix is estimated using Monte Carlo sampling, selecting a different random position of the sampling network relative to the exhaustive data set for each realization. At each update moment in the Particle Filter, observed autocorrelation values are assimilated into the model and the state of the model is updated. Using this approach, it is shown that the use of autocorrelation reduces the uncertainty in the forecasted timing of a critical transition compared to runs without data assimilation. The performance of spatial autocorrelation versus temporal autocorrelation depends on the timing and number of observational data. This study is restricted to a single model only. However, it is becoming increasingly clear that spatio-temporal autocorrelation and variance can be used as early warning signals for a large number of systems. Thus, it is expected that spatio-temporal autocorrelation and variance will be valuable in data assimilation frameworks for a large number of dynamical systems.
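A compact sketch of the two early-warning statistics on a toy grazed-resource model (logistic growth with a slowly increasing grazing rate; parameters are illustrative, not those of the paper):

    import numpy as np

    def early_warning_indicators(series, window=200):
        # Rolling variance and lag-1 autocorrelation of the system state.
        var, ac1 = [], []
        for t in range(window, len(series)):
            w = series[t - window:t]
            w = w - w.mean()
            var.append(w.var())
            ac1.append((w[:-1] * w[1:]).sum() / (w ** 2).sum())
        return np.array(var), np.array(ac1)

    rng = np.random.default_rng(0)
    x, c, xs = 8.0, 1.0, []
    for _ in range(4000):
        c += 0.0005                      # slowly increasing grazing pressure
        x += (0.05 * (x * (1 - x / 10) - c * x**2 / (x**2 + 1))
              + 0.05 * rng.standard_normal())
        xs.append(x)
    var, ac1 = early_warning_indicators(np.array(xs[:3000]))   # pre-transition part
    print('variance rise: %.1fx' % (var[-1] / var[0]))
    print('lag-1 autocorrelation: %.2f -> %.2f' % (ac1[0], ac1[-1]))

In a data assimilation setting these indicator series, rather than the raw state alone, would be compared with model-predicted indicators inside the Particle Filter update.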
The effect of noise-induced variance on parameter recovery from reaction times.
Vadillo, Miguel A; Garaizar, Pablo
2016-03-31
Technical noise can compromise the precision and accuracy of the reaction times collected in psychological experiments, especially in the case of Internet-based studies. Although this noise seems to have only a small impact on traditional statistical analyses, its effects on model fit to reaction-time distributions remain unexplored. Across four simulations we study the impact of technical noise on parameter recovery from data generated from an ex-Gaussian distribution and from a Ratcliff Diffusion Model. Our results suggest that the impact of noise-induced variance tends to be limited to specific parameters and conditions. Although we encourage researchers to adopt all measures to reduce the impact of noise on reaction-time experiments, we conclude that the typical amount of noise-induced variance found in these experiments does not pose substantial problems for statistical analyses based on model fitting.
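A sketch of the kind of simulation involved, assuming ex-Gaussian reaction times and additive uniform timing noise, with a simple method-of-moments recovery (the original work used likelihood-based and diffusion-model fits):

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_exgauss(n, mu=400.0, sigma=40.0, tau=100.0):
        # Ex-Gaussian RTs (ms): Gaussian(mu, sigma) plus Exponential(tau).
        return rng.normal(mu, sigma, n) + rng.exponential(tau, n)

    def fit_exgauss_moments(rt):
        # Method of moments: the third central moment of an ex-Gaussian is 2*tau^3.
        m, v = rt.mean(), rt.var()
        s = ((rt - m) ** 3).mean()
        tau = np.sign(s) * abs(s / 2.0) ** (1.0 / 3.0)
        return m - tau, np.sqrt(max(v - tau**2, 0.0)), tau   # mu, sigma, tau

    clean = simulate_exgauss(5000)
    noisy = clean + rng.uniform(0, 30, clean.size)   # uniform 'technical' noise
    for label, rt in [('clean', clean), ('noisy', noisy)]:
        print(label, [round(p, 1) for p in fit_exgauss_moments(rt)])

Comparing the recovered (mu, sigma, tau) across the two conditions shows which parameters absorb the added noise.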
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, Paul J.; Pineda Flores, Sergio D.; Neuscamman, Eric
In the regime where traditional approaches to electronic structure cannot afford to achieve accurate energy differences via exhaustive wave function flexibility, rigorous approaches to balancing different states’ accuracies become desirable. As a direct measure of a wave function’s accuracy, the energy variance offers one route to achieving such a balance. Here, we develop and test a variance matching approach for predicting excitation energies within the context of variational Monte Carlo and selective configuration interaction. In a series of tests on small but difficult molecules, we demonstrate that the approach is effective at delivering accurate excitation energies when the wave function is far from the exhaustive flexibility limit. Results in C3, where we combine this approach with variational Monte Carlo orbital optimization, are especially encouraging.
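The core idea can be illustrated on a toy Hamiltonian: give the excited state just enough variational freedom that its energy variance matches the ground state's, then take the energy difference. A sketch with a random symmetric matrix standing in for the Hamiltonian (not the authors' VMC/selective-CI machinery):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 200
    H = rng.standard_normal((n, n)); H = (H + H.T) / 2      # toy Hamiltonian
    evals, evecs = np.linalg.eigh(H)

    def energy_and_variance(psi):
        psi = psi / np.linalg.norm(psi)
        e = psi @ H @ psi
        return e, psi @ H @ (H @ psi) - e**2                # <H^2> - <H>^2

    def truncated_state(k, ncomp):
        # Crude 'limited flexibility' ansatz: keep only the ncomp largest
        # components of the exact eigenvector.
        v = evecs[:, k].copy()
        out = np.zeros_like(v)
        keep = np.argsort(np.abs(v))[-ncomp:]
        out[keep] = v[keep]
        return out

    e0, var0 = energy_and_variance(truncated_state(0, 50))
    # Variance matching: choose the excited-state truncation whose energy
    # variance is closest to the ground state's.
    best = min(range(10, n), key=lambda m:
               abs(energy_and_variance(truncated_state(1, m))[1] - var0))
    e1, var1 = energy_and_variance(truncated_state(1, best))
    print('matched variances: %.3f vs %.3f' % (var0, var1))
    print('estimated gap: %.3f   exact gap: %.3f' % (e1 - e0, evals[1] - evals[0]))

Matching variances rather than basis sizes is what balances the two states' accuracies.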
NASA Astrophysics Data System (ADS)
Iida, S.
1991-03-01
Using statistical scattering theory, we calculate the average and the variance of the conductance coefficients at zero temperature for a small disordered metallic wire composed of three arms. Each arm is coupled at the end to a perfectly conducting lead. The disorder is modeled by a microscopic random Hamiltonian belonging to the Gaussian orthogonal ensemble. As the coupling strength of the third arm (voltage probe) is increased, the variance of the conductance coefficient of the main track changes from the universal value of the two-lead geometry to that of the three-lead geometry. The variance of the resistance coefficient is strongly affected by the coupling strength of the arm whose resistance is being measured and has a relatively weak dependence on those of the other two arms.
Robinson, Paul J.; Pineda Flores, Sergio D.; Neuscamman, Eric
2017-10-28
In the regime where traditional approaches to electronic structure cannot afford to achieve accurate energy differences via exhaustive wave function flexibility, rigorous approaches to balancing different states’ accuracies become desirable. As a direct measure of a wave function’s accuracy, the energy variance offers one route to achieving such a balance. Here, we develop and test a variance matching approach for predicting excitation energies within the context of variational Monte Carlo and selective configuration interaction. In a series of tests on small but difficult molecules, we demonstrate that the approach is effective at delivering accurate excitation energies when the wave function is far from the exhaustive flexibility limit. Results in C3, where we combine this approach with variational Monte Carlo orbital optimization, are especially encouraging.
Betzig, Laura
2014-03-01
For more than 100,000 years, H. sapiens lived as foragers, in small family groups with low reproductive variance. A minority of men were able to father children by two or three women; and a majority of men and women were able to breed. But after the origin of farming around 10,000 years ago, reproductive variance increased. In civilizations which began in Mesopotamia, Egypt, India, and China, and then moved on to Greece and Rome, kings collected thousands of women, whose children were supported and guarded by thousands of eunuchs. Just a few hundred years ago, that trend reversed. Obligate sterility ended, and reproductive variance declined. For H. sapiens, as for other organisms, eusociality seems to be an effect of ecological constraints. Civilizations rose up in lake and river valleys, hemmed in by mountains and deserts. Egalitarianism became an option after empty habitats opened up.
NASA Astrophysics Data System (ADS)
Laube, G.; Schmidt, C.; Fleckenstein, J. H.
2014-12-01
The hyporheic zone (HZ) contributes significantly to whole-stream biogeochemical cycling. Biogeochemical reactions within the HZ are often transport limited; thus, understanding these reactions requires knowledge of the magnitude of hyporheic fluxes (HF) and the residence time (RT) of these fluxes within the HZ. While the hydraulics of HF are relatively well understood, studies addressing the influence of permeability heterogeneity lack systematic analysis and have even produced contradictory results (e.g. [1] vs. [2]). In order to close this gap, this study uses a statistical numerical approach to elucidate the influence of permeability heterogeneity on HF and RT. We simulated and evaluated 3750 2D scenarios of sediment heterogeneity by means of Gaussian random fields, with a focus on total HF and the RT distribution. The scenarios were based on ten realizations of each of all possible combinations of 15 different correlation lengths, 5 dipping angles and 5 permeability variances. Roughly 500 hyporheic stream traces were analyzed per simulation, for a total of almost two million stream traces analyzed for correlations between permeability heterogeneity, HF, and RT. Total HF and the RT variance correlated positively with permeability variance, while the mean RT correlated negatively with permeability variance. In contrast, changes in correlation lengths and dipping angles had little effect on the examined properties RT and HF. These results provide a possible explanation of the seemingly contradictory conclusions of recent studies, given that the permeability variances in these studies differ by several orders of magnitude. [1] Bardini, L., Boano, F., Cardenas, M.B, Sawyer, A.H, Revelli, R. and Ridolfi, L. "Small-Scale Permeability Heterogeneity Has Negligible Effects on Nutrient Cycling in Streambeds." Geophysical Research Letters, 2013. doi:10.1002/grl.50224. [2] Zhou, Y., Ritzi, R. W., Soltanian, M. R. and Dominic, D. F. "The Influence of Streambed Heterogeneity on Hyporheic Flow in Gravelly Rivers." Groundwater, 2013. doi:10.1111/gwat.12048.
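A sketch of the kind of input field involved: a log-permeability Gaussian random field generated by spectral filtering of white noise, with the point variance rescaled to a target value (the parameterization is assumed for illustration, not the study's generator):

    import numpy as np

    def log_perm_field(n=128, corr_len=8.0, variance=1.0, seed=0):
        # Filter white noise in Fourier space, then rescale to the target variance.
        rng = np.random.default_rng(seed)
        white = rng.standard_normal((n, n))
        k = np.sqrt(np.add.outer(np.fft.fftfreq(n) ** 2, np.fft.fftfreq(n) ** 2))
        filt = np.exp(-(np.pi * corr_len * k) ** 2 / 2.0)   # Gaussian-shaped kernel
        f = np.real(np.fft.ifft2(np.fft.fft2(white) * filt))
        f = (f - f.mean()) / f.std()
        return np.sqrt(variance) * f

    for var in (0.1, 1.0, 3.0):
        K = np.exp(log_perm_field(variance=var))            # permeability field
        print('log-K variance %.1f -> K max/min ratio %.1f' % (var, K.max() / K.min()))

Raising the log-permeability variance widens the permeability contrast by orders of magnitude, which is the quantity the study found to control hyporheic flux and residence times.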
Zhang, Ge; Karns, Rebekah; Sun, Guangyun; Indugula, Subba Rao; Cheng, Hong; Havas-Augustin, Dubravka; Novokmet, Natalija; Durakovic, Zijad; Missoni, Sasa; Chakraborty, Ranajit; Rudan, Pavao; Deka, Ranjan
2012-01-01
Genome-wide association studies (GWAS) have identified many common variants associated with complex traits in human populations. Thus far, most reported variants have relatively small effects and explain only a small proportion of phenotypic variance, leading to the issues of 'missing' heritability and its explanation. Using height as an example, we examined two possible sources of missing heritability: first, variants with smaller effects whose associations with height failed to reach genome-wide significance and second, allelic heterogeneity due to the effects of multiple variants at a single locus. Using a novel analytical approach we examined allelic heterogeneity of height-associated loci selected from SNPs of different significance levels based on the summary data of the GIANT (stage 1) studies. In a sample of 1,304 individuals collected from an island population of the Adriatic coast of Croatia, we assessed the extent of height variance explained by incorporating the effects of less significant height loci and multiple effective SNPs at the same loci. Our results indicate that approximately half of the 118 loci that achieved stringent genome-wide significance (p-value < 5×10^-8) showed evidence of allelic heterogeneity. Additionally, including less significant loci (i.e., p-value < 5×10^-4) and accounting for effects of allelic heterogeneity substantially improved the variance explained in height.
Small Area Variance Estimation for the Siuslaw NF in Oregon and Some Results
S. Lin; D. Boes; H.T. Schreuder
2006-01-01
The results of a small area prediction study for the Siuslaw National Forest in Oregon are presented. Predictions were made for total basal area, number of trees and mortality per ha on a 0.85 mile grid using data on a 1.7 mile grid and additional ancillary information from TM. A reliable method of estimating prediction errors for individual plot predictions called the...
Smooth empirical Bayes estimation of observation error variances in linear systems
NASA Technical Reports Server (NTRS)
Martz, H. F., Jr.; Lian, M. W.
1972-01-01
A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.
The distribution of genetic variance across phenotypic space and the response to selection.
Blows, Mark W; McGuigan, Katrina
2015-05-01
The role of adaptation in biological invasions will depend on the availability of genetic variation for traits under selection in the new environment. Although genetic variation is present for most traits in most populations, selection is expected to act on combinations of traits, not individual traits in isolation. The distribution of genetic variance across trait combinations can be characterized by the empirical spectral distribution of the genetic variance-covariance (G) matrix. Empirical spectral distributions of G from a range of trait types and taxa all exhibit a characteristic shape; some trait combinations have large levels of genetic variance, while others have very little genetic variance. In this study, we review what is known about the empirical spectral distribution of G and show how it predicts the response to selection across phenotypic space. In particular, trait combinations that form a nearly null genetic subspace with little genetic variance respond only inconsistently to selection. We go on to set out a framework for understanding how the empirical spectral distribution of G may differ from the random expectations that have been developed under random matrix theory (RMT). Using a data set containing a large number of gene expression traits, we illustrate how hypotheses concerning the distribution of multivariate genetic variance can be tested using RMT methods. We suggest that the relative alignment between novel selection pressures during invasion and the nearly null genetic subspace is likely to be an important component of the success or failure of invasion, and for the likelihood of rapid adaptation in small populations in general. © 2014 John Wiley & Sons Ltd.
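A toy illustration of how the spectral distribution of G shapes the response to selection through the multivariate breeder's equation, response = G * beta (numbers are invented, not from the review):

    import numpy as np

    rng = np.random.default_rng(3)
    # A G matrix with a few large eigenvalues and a nearly null subspace.
    eigvals = np.array([2.0, 1.0, 0.4, 0.1, 0.02, 0.002])
    Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))        # random orthonormal axes
    G = Q @ np.diag(eigvals) @ Q.T

    for label, beta in [('selection along gmax', Q[:, 0]),
                        ('selection in nearly null subspace', Q[:, 5])]:
        print(label, '-> response length %.4f' % np.linalg.norm(G @ beta))

Selection gradients falling in the nearly null subspace produce almost no response, which is the review's point about inconsistent responses in that subspace.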
One-shot estimate of MRMC variance: AUC.
Gallas, Brandon D
2006-03-01
One popular study design for estimating the area under the receiver operating characteristic curve (AUC) is the one in which a set of readers reads a set of cases: a fully crossed design in which every reader reads every case. The variability of the subsequent reader-averaged AUC has two sources: the multiple readers and the multiple cases (MRMC). In this article, we present a nonparametric estimate for the variance of the reader-averaged AUC that is unbiased and does not use resampling tools. The one-shot estimate is based on the MRMC variance derived by the mechanistic approach of Barrett et al. (2005), as well as the nonparametric variance of a single-reader AUC derived in the literature on U statistics. We investigate the bias and variance properties of the one-shot estimate through a set of Monte Carlo simulations with simulated model observers and images. The different simulation configurations vary numbers of readers and cases, amounts of image noise and internal noise, as well as how the readers are constructed. We compare the one-shot estimate to a method that uses the jackknife resampling technique with an analysis of variance model at its foundation (Dorfman et al. 1992). The name one-shot highlights that resampling is not used. The one-shot and jackknife estimators behave similarly, with the one-shot being marginally more efficient when the number of cases is small. We have derived a one-shot estimate of the MRMC variance of AUC that is based on a probabilistic foundation with limited assumptions, is unbiased, and compares favorably to an established estimate.
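The single-reader U-statistic ingredient of such estimators can be sketched directly: the nonparametric AUC and its variance from the per-case structural components (this is the building block, not the full MRMC one-shot estimator):

    import numpy as np

    def auc_and_variance(pos, neg):
        # Two-sample U-statistic AUC; ties count one half.
        pos, neg = np.asarray(pos, float), np.asarray(neg, float)
        S = (pos[:, None] > neg[None, :]) + 0.5 * (pos[:, None] == neg[None, :])
        auc = S.mean()
        v_pos = S.mean(axis=1).var(ddof=1)   # variability over signal-present cases
        v_neg = S.mean(axis=0).var(ddof=1)   # variability over signal-absent cases
        return auc, v_pos / len(pos) + v_neg / len(neg)

    rng = np.random.default_rng(4)
    pos = rng.normal(1.0, 1.0, 60)           # reader scores, signal-present cases
    neg = rng.normal(0.0, 1.0, 80)           # reader scores, signal-absent cases
    auc, var = auc_and_variance(pos, neg)
    print('AUC = %.3f, SE = %.3f' % (auc, np.sqrt(var)))

The MRMC estimator additionally decomposes variability over readers and the reader-case interaction.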
ERIC Educational Resources Information Center
Torre, Kjerstin; Balasubramaniam, Ramesh
2011-01-01
We address the complex relationship between the stability, variability, and adaptability of psychological systems by decomposing the global variance of serial performance into two independent parts: the local variance (LV) and the serial correlation structure. For two time series with equal LV, the presence of persistent long-range correlations…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-15
... Public Service Company; Notice of Application for Temporary Variance of License Article 403 and...: Extension of temporary variance of license article 403. b. Project No: 12514-056. c. Date Filed: November 28... submit brief comments up to 6,000 characters, without prior registration, using the eComment system at...
Fritts, Andrea; Knights, Brent C.; Lafrancois, Toben D.; Bartsch, Lynn; Vallazza, Jon; Bartsch, Michelle; Richardson, William B.; Karns, Byron N.; Bailey, Sean; Kreiling, Rebecca
2018-01-01
Fatty acid and stable isotope signatures allow researchers to better understand food webs, food sources, and trophic relationships. Research in marine and lentic systems has indicated that the variance of these biomarkers can exhibit substantial differences across spatial and temporal scales, but this type of analysis has not been completed for large river systems. Our objectives were to evaluate variance structures for fatty acids and stable isotopes (i.e. δ13C and δ15N) of seston, threeridge mussels, hydropsychid caddisflies, gizzard shad, and bluegill across spatial scales (10s-100s km) in large rivers of the Upper Mississippi River Basin, USA that were sampled annually for two years, and to evaluate the implications of this variance on the design and interpretation of trophic studies. The highest variance for both isotopes was present at the largest spatial scale for all taxa (except seston δ15N) indicating that these isotopic signatures are responding to factors at a larger geographic level rather than being influenced by local-scale alterations. Conversely, the highest variance for fatty acids was present at the smallest spatial scale (i.e. among individuals) for all taxa except caddisflies, indicating that the physiological and metabolic processes that influence fatty acid profiles can differ substantially between individuals at a given site. Our results highlight the need to consider the spatial partitioning of variance during sample design and analysis, as some taxa may not be suitable to assess ecological questions at larger spatial scales.
Khan, Adil Mehmood; Siddiqi, Muhammad Hameed; Lee, Seok-Won
2013-09-27
Smartphone-based activity recognition (SP-AR) recognizes users' activities using the embedded accelerometer sensor. Only a small number of previous works can be classified as online systems, i.e., the whole process (pre-processing, feature extraction, and classification) is performed on the device. Most of these online systems use either a high sampling rate (SR) or long data-window (DW) to achieve high accuracy, resulting in short battery life or delayed system response, respectively. This paper introduces a real-time/online SP-AR system that solves this problem. Exploratory data analysis was performed on acceleration signals of 6 activities, collected from 30 subjects, to show that these signals are generated by an autoregressive (AR) process, and an accurate AR-model in this case can be built using a low SR (20 Hz) and a small DW (3 s). The high within class variance resulting from placing the phone at different positions was reduced using kernel discriminant analysis to achieve position-independent recognition. Neural networks were used as classifiers. Unlike previous works, true subject-independent evaluation was performed, where 10 new subjects evaluated the system at their homes for 1 week. The results show that our features outperformed three commonly used features by 40% in terms of accuracy for the given SR and DW.
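A sketch of the AR-feature idea for one accelerometer axis, using a hand-rolled Yule-Walker fit at the paper's 20 Hz sampling rate and 3 s window (the signals and model order below are invented):

    import numpy as np

    def ar_features(window, order=3):
        # Yule-Walker AR coefficients used as activity features.
        x = window - window.mean()
        r = np.array([(x[:len(x) - k] * x[k:]).mean() for k in range(order + 1)])
        R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
        return np.linalg.solve(R, r[1:])

    rng = np.random.default_rng(5)
    fs, seconds = 20, 3
    t = np.arange(fs * seconds) / fs
    walking = np.sin(2 * np.pi * 2.0 * t) + 0.2 * rng.standard_normal(t.size)
    standing = 0.05 * rng.standard_normal(t.size)
    print('walking  AR:', np.round(ar_features(walking), 2))
    print('standing AR:', np.round(ar_features(standing), 2))

In the full system these features would then pass through kernel discriminant analysis before the neural-network classifier.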
NASA Astrophysics Data System (ADS)
Webster, S.; Hardi, J.; Oschwald, M.
2015-03-01
The influence of injection conditions on rocket engine combustion stability is investigated for a sub-scale combustion chamber with shear coaxial injection elements and the propellant combination hydrogen-oxygen. The experimental results presented are from a series of tests conducted at subcritical and supercritical pressures for oxygen and for both ambient and cryogenic temperature hydrogen. The stability of the system is characterised by the root mean squared amplitude of dynamic combustion chamber pressure in the upper part of the acoustic spectrum relevant for high frequency combustion instabilities. Results are presented for both unforced and externally forced combustion chamber configurations. It was found that, for both the unforced and externally forced configurations, the injection velocity had the strongest influence on combustion chamber stability. In multivariate linear regression, hydrogen injection temperature and hydrogen injection mass flow rate best explained the variance in stability, consistent with a dependence on injection velocity ratio. For unforced tests, turbulent jet noise from injection was found to dominate the energy content of the signal. For the externally forced configuration, a non-linear regression model was better able to predict the variance, suggesting the influence of non-linear behaviour. The response of the system to variation of injection conditions was found to be small, suggesting that the combustion chamber investigated in the experiment is highly stable.
RR-Interval variance of electrocardiogram for atrial fibrillation detection
NASA Astrophysics Data System (ADS)
Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.
2016-11-01
Atrial fibrillation is a serious heart problem originating in the upper chambers of the heart. The common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, known as the RR interval for short. The irregularity can be represented by the variance, or spread, of RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data from patients with atrial fibrillation episodes, we show that the variance of electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Using a simple detection technique based on RR-interval variances, we obtain good atrial fibrillation detection performance.
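A minimal sketch of a variance-based detector of this kind; the window length and threshold are illustrative, not values from the paper:

    import numpy as np

    def af_flags(rr, window=20, threshold=0.015):
        # Flag windows whose RR-interval variance (s^2) exceeds a threshold.
        rr = np.asarray(rr, float)
        return np.array([rr[i:i + window].var() > threshold
                         for i in range(len(rr) - window + 1)])

    rng = np.random.default_rng(6)
    normal = 0.80 + 0.02 * rng.standard_normal(100)   # regular rhythm, RR in seconds
    af = 0.80 + 0.20 * rng.standard_normal(100)       # irregular AF-like intervals
    print('normal windows flagged: %.2f' % af_flags(normal).mean())
    print('AF windows flagged:     %.2f' % af_flags(af).mean())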
Population sexual behavior and HIV prevalence in Sub-Saharan Africa: missing links?
Omori, Ryosuke; Abu-Raddad, Laith J
2016-03-01
Patterns of sexual partnering should shape HIV transmission in human populations. The objective of this study was to assess empirical associations between population casual sex behavior and HIV prevalence, and between different measures of casual sex behavior. An ecological study design was applied to nationally representative data, those of the Demographic and Health Surveys, in 25 countries of Sub-Saharan Africa. Spearman rank correlation was used to assess different correlations for males and females and their statistical significance. Correlations between HIV prevalence and means and variances of the number of casual sex partners were positive, but small and statistically insignificant. The majority of correlations across means and variances of the number of casual sex partners were positive, large, and statistically significant. However, all correlations between the means, as well as variances, and the variance of unmarried females were weak and statistically insignificant. Population sexual behavior was not predictive of HIV prevalence across these countries. Nevertheless, the strong correlations across means and variances of sexual behavior suggest that self-reported sexual data are self-consistent and convey valid information content. Unmarried female behavior seemed puzzling, but could be playing an influential role in HIV transmission patterns. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Analysis of stimulus-related activity in rat auditory cortex using complex spectral coefficients
Krause, Bryan M.
2013-01-01
The neural mechanisms of sensory responses recorded from the scalp or cortical surface remain controversial. Evoked vs. induced response components (i.e., changes in mean vs. variance) are associated with bottom-up vs. top-down processing, but trial-by-trial response variability can confound this interpretation. Phase reset of ongoing oscillations has also been postulated to contribute to sensory responses. In this article, we present evidence that responses under passive listening conditions are dominated by variable evoked response components. We measured the mean, variance, and phase of complex time-frequency coefficients of epidurally recorded responses to acoustic stimuli in rats. During the stimulus, changes in mean, variance, and phase tended to co-occur. After the stimulus, there was a small, low-frequency offset response in the mean and modest, prolonged desynchronization in the alpha band. Simulations showed that trial-by-trial variability in the mean can account for most of the variance and phase changes observed during the stimulus. This variability was state dependent, with smallest variability during periods of greatest arousal. Our data suggest that cortical responses to auditory stimuli reflect variable inputs to the cortical network. These analyses suggest that caution should be exercised when interpreting variance and phase changes in terms of top-down cortical processing. PMID:23657279
Fuel cell stack monitoring and system control
Keskula, Donald H.; Doan, Tien M.; Clingerman, Bruce J.
2005-01-25
A control method for monitoring a fuel cell stack in a fuel cell system in which the actual voltage and actual current from the fuel cell stack are monitored. A preestablished relationship between voltage and current over the operating range of the fuel cell is established. A variance value between the actual measured voltage and the expected voltage magnitude for a given actual measured current is calculated and compared with a predetermined allowable variance. An output is generated if the calculated variance value exceeds the predetermined variance. The predetermined voltage-current relationship for the fuel cell is represented as a polarization curve at given operating conditions of the fuel cell. Other polarization curves may be generated and used for fuel cell stack monitoring based on different operating pressures, temperatures, and hydrogen quantities.
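A sketch of the monitoring logic, assuming a stored polarization curve and a fixed allowable deviation (the values below are invented for illustration):

    import numpy as np

    def check_stack(i_meas, v_meas, curve, allowed=0.05):
        # Compare measured voltage with the polarization-curve expectation.
        v_expected = np.interp(i_meas, curve['i'], curve['v'])
        return abs(v_meas - v_expected) > allowed, v_expected

    # Illustrative polarization curve at one pressure/temperature operating point.
    curve = {'i': np.array([0.0, 50.0, 100.0, 150.0, 200.0]),
             'v': np.array([1.00, 0.85, 0.75, 0.65, 0.50])}
    for i, v in [(120.0, 0.70), (120.0, 0.55)]:
        tripped, v_exp = check_stack(i, v, curve)
        print('I=%.0f A, V=%.2f V (expected %.2f V) -> alarm: %s'
              % (i, v, v_exp, tripped))

Switching between stored curves for different pressures and temperatures extends the same check across operating conditions.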
Statistics of Lyapunov exponents of quasi-one-dimensional disordered systems
NASA Astrophysics Data System (ADS)
Zhang, Yan-Yang; Xiong, Shi-Jie
2005-10-01
Statistical properties of Lyapunov exponents (LE) are numerically calculated in a quasi-one-dimensional (1D) Anderson model, which is a 2D or 3D lattice with a finite cross section. The single-parameter scaling (SPS) variable τ, relating the Lyapunov exponents γ and their variances σ² by τ ≡ σ²L/⟨γ⟩, is calculated for different lateral couplings t and disorder strengths W. In a wide range of t, τ is approximately independent of W, but it has different values for LEs in different channels. For small t, the distribution of the smallest LE is non-Gaussian and τ strongly depends on W, remarkably different from the 1D SPS hypothesis.
Areal Control Using Generalized Least Squares As An Alternative to Stratification
Raymond L. Czaplewski
2001-01-01
Stratification for both variance reduction and areal control proliferates the number of strata, which causes small sample sizes in many strata. This might compromise statistical efficiency. Generalized least squares can, in principle, replace stratification for areal control.
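A minimal sketch of the GLS estimator itself, beta = (X' V^-1 X)^-1 X' V^-1 y, with a known diagonal covariance standing in for the survey structure (toy data, not the Forest Service application):

    import numpy as np

    def gls(X, y, V):
        # Generalized least squares with known covariance V.
        Vi = np.linalg.inv(V)
        return np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)

    rng = np.random.default_rng(7)
    n = 50
    X = np.column_stack([np.ones(n), rng.uniform(0, 1, n)])  # intercept + covariate
    V = np.diag(rng.uniform(0.5, 2.0, n))                    # known heteroskedastic cov
    y = X @ np.array([2.0, 3.0]) + rng.multivariate_normal(np.zeros(n), V)
    print('GLS estimate:', np.round(gls(X, y, V), 2))

Areal constraints enter through the covariate structure rather than through strata, which is what lets GLS avoid the proliferation of small strata.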
Space-Time Smoothing of Complex Survey Data: Small Area Estimation for Child Mortality
Mercer, Laina D; Wakefield, Jon; Pantazis, Athena; Lutambi, Angelina M; Masanja, Honorati; Clark, Samuel
2016-01-01
Many people living in low and middle-income countries are not covered by civil registration and vital statistics systems. Consequently, a wide variety of other types of data including many household sample surveys are used to estimate health and population indicators. In this paper we combine data from sample surveys and demographic surveillance systems to produce small area estimates of child mortality through time. Small area estimates are necessary to understand geographical heterogeneity in health indicators when full-coverage vital statistics are not available. For this endeavor spatio-temporal smoothing is beneficial to alleviate problems of data sparsity. The use of conventional hierarchical models requires careful thought since the survey weights may need to be considered to alleviate bias due to non-random sampling and non-response. The application that motivated this work is estimation of child mortality rates in five-year time intervals in regions of Tanzania. Data come from Demographic and Health Surveys conducted over the period 1991–2010 and two demographic surveillance system sites. We derive a variance estimator of under five years child mortality that accounts for the complex survey weighting. For our application, the hierarchical models we consider include random effects for area, time and survey and we compare models using a variety of measures including the conditional predictive ordinate (CPO). The method we propose is implemented via the fast and accurate integrated nested Laplace approximation (INLA). PMID:27468328
Direct and indirect genetic and fine-scale location effects on breeding date in song sparrows.
Germain, Ryan R; Wolak, Matthew E; Arcese, Peter; Losdat, Sylvain; Reid, Jane M
2016-11-01
Quantifying direct and indirect genetic effects of interacting females and males on variation in jointly expressed life-history traits is central to predicting microevolutionary dynamics. However, accurately estimating sex-specific additive genetic variances in such traits remains difficult in wild populations, especially if related individuals inhabit similar fine-scale environments. Breeding date is a key life-history trait that responds to environmental phenology and mediates individual and population responses to environmental change. However, no studies have estimated female (direct) and male (indirect) additive genetic and inbreeding effects on breeding date, and estimated the cross-sex genetic correlation, while simultaneously accounting for fine-scale environmental effects of breeding locations, impeding prediction of microevolutionary dynamics. We fitted animal models to 38 years of song sparrow (Melospiza melodia) phenology and pedigree data to estimate sex-specific additive genetic variances in breeding date, and the cross-sex genetic correlation, thereby estimating the total additive genetic variance while simultaneously estimating sex-specific inbreeding depression. We further fitted three forms of spatial animal model to explicitly estimate variance in breeding date attributable to breeding location, overlap among breeding locations and spatial autocorrelation. We thereby quantified fine-scale location variances in breeding date and quantified the degree to which estimating such variances affected the estimated additive genetic variances. The non-spatial animal model estimated nonzero female and male additive genetic variances in breeding date (sex-specific heritabilities: 0·07 and 0·02, respectively) and a strong, positive cross-sex genetic correlation (0·99), creating substantial total additive genetic variance (0·18). Breeding date varied with female, but not male inbreeding coefficient, revealing direct, but not indirect, inbreeding depression. All three spatial animal models estimated small location variance in breeding date, but because relatedness and breeding location were virtually uncorrelated, modelling location variance did not alter the estimated additive genetic variances. Our results show that sex-specific additive genetic effects on breeding date can be strongly positively correlated, which would affect any predicted rates of microevolutionary change in response to sexually antagonistic or congruent selection. Further, we show that inbreeding effects on breeding date can also be sex specific and that genetic effects can exceed phenotypic variation stemming from fine-scale location-based variation within a wild population. © 2016 The Authors. Journal of Animal Ecology © 2016 British Ecological Society.
Determining the bias and variance of a deterministic finger-tracking algorithm.
Morash, Valerie S; van der Velden, Bas H M
2016-06-01
Finger tracking has the potential to expand haptic research and applications, as eye tracking has done in vision research. In research applications, it is desirable to know the bias and variance associated with a finger-tracking method. However, assessing the bias and variance of a deterministic method is not straightforward. Multiple measurements of the same finger position data will not produce different results, implying zero variance. Here, we present a method of assessing deterministic finger-tracking variance and bias through comparison to a non-deterministic measure. A proof-of-concept is presented using a video-based finger-tracking algorithm developed for the specific purpose of tracking participant fingers during a psychological research study. The algorithm uses ridge detection on videos of the participant's hand, and estimates the location of the right index fingertip. The algorithm was evaluated using data from four participants, who explored tactile maps using only their right index finger and all right-hand fingers. The algorithm identified the index fingertip in 99.78% of one-finger video frames and 97.55% of five-finger video frames. Although the algorithm produced slightly biased and more dispersed estimates relative to a human coder, these differences (x = 0.08 cm, y = 0.04 cm) and standard deviations (σ_x = 0.16 cm, σ_y = 0.21 cm) were small compared to the size of a fingertip (1.5-2.0 cm). Some example finger-tracking results are provided where corrections are made using the bias and variance estimates.
Seidel, Clemens; Lautenschläger, Christine; Dunst, Jürgen; Müller, Arndt-Christian
2012-04-20
We investigated whether different conditions of DNA structure and radiation treatment could modify heterogeneity of response, and additionally studied variance as a potential parameter of heterogeneity for radiosensitivity testing. Two hundred leukocytes per sample from healthy donors were split into four groups. I: intact chromatin structure; II: nucleoids of histone-depleted DNA; III: nucleoids of histone-depleted DNA with 90 mM DMSO as antioxidant. Responses to single (I-III) and double (IV) irradiation with 4 Gy, and repair kinetics, were evaluated using %Tail-DNA. Heterogeneity of DNA damage was determined by calculating the variance of DNA damage (V) and mean variance (Mvar); mutual comparisons were done by one-way analysis of variance (ANOVA). Heterogeneity of initial DNA damage (I, 0 min repair) increased without histones (II). The absence of histones was balanced by the addition of antioxidants (III). Repair reduced the heterogeneity of all samples (with and without irradiation). However, double irradiation plus repair led to a higher level of heterogeneity, distinguishable from single irradiation and repair in intact cells. An increase of mean DNA damage was associated with a similarly elevated variance of DNA damage (r = +0.88). Heterogeneity of DNA damage can be modified by histone level, antioxidant concentration, repair and radiation dose, and was positively correlated with DNA damage. Experimental conditions might be optimized by reducing the scatter of comet assay data through repair and antioxidants, potentially allowing better discrimination of small differences. The amount of heterogeneity measured by variance might be an additional useful parameter to characterize radiosensitivity.
LeDell, Erin; Petersen, Maya; van der Laan, Mark
In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC.
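A sketch of an influence-curve variance for the AUC (one plausible reading of the approach, not the authors' exact estimator): each observation contributes an influence value, and the estimator's variance is estimated by the empirical variance of those values divided by n.

    import numpy as np

    def auc_ic_variance(scores, labels):
        pos, neg = scores[labels == 1], scores[labels == 0]
        n1, n0, n = len(pos), len(neg), len(scores)
        S = (pos[:, None] > neg[None, :]) + 0.5 * (pos[:, None] == neg[None, :])
        auc = S.mean()
        ic = np.empty(n)
        ic[labels == 1] = (S.mean(axis=1) - auc) / (n1 / n)   # influence of positives
        ic[labels == 0] = (S.mean(axis=0) - auc) / (n0 / n)   # influence of negatives
        return auc, ic.var() / n

    rng = np.random.default_rng(8)
    labels = rng.integers(0, 2, 500)
    scores = rng.normal(labels.astype(float), 1.0)   # stand-in for CV predictions
    auc, var = auc_ic_variance(scores, labels)
    print('AUC = %.3f, SE = %.3f' % (auc, np.sqrt(var)))

Unlike the bootstrap, this requires a single pass over the predictions, which is what makes it attractive for massive data sets.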
Importance sampling variance reduction for the Fokker–Planck rarefied gas particle method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collyer, B.S., E-mail: benjamin.collyer@gmail.com; London Mathematical Laboratory, 14 Buckingham Street, London WC2N 6DF; Connaughton, C.
The Fokker–Planck approximation to the Boltzmann equation, solved numerically by stochastic particle schemes, is used to provide estimates for rarefied gas flows. This paper presents a variance reduction technique for a stochastic particle method that is able to greatly reduce the uncertainty of the estimated flow fields when the characteristic speed of the flow is small in comparison to the thermal velocity of the gas. The method relies on importance sampling, requiring minimal changes to the basic stochastic particle scheme. We test the importance sampling scheme on a homogeneous relaxation, planar Couette flow and a lid-driven-cavity flow, and find that our method is able to greatly reduce the noise of estimated quantities. Significantly, we find that as the characteristic speed of the flow decreases, the variance of the noisy estimators becomes independent of the characteristic speed.
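The generic importance-sampling mechanism can be shown on a toy integral: when the integrand is concentrated where the nominal density rarely samples, reweighted draws from a shifted proposal cut the estimator variance dramatically (a generic illustration, not the Fokker-Planck particle scheme itself):

    import numpy as np

    rng = np.random.default_rng(9)
    f = lambda x: np.exp(-(x - 4.0) ** 2)      # concentrated far from the mode of p

    n = 100_000
    x_p = rng.standard_normal(n)               # plain Monte Carlo, x ~ p = N(0,1)
    plain = f(x_p)

    x_q = rng.normal(4.0, 1.0, n)              # proposal q = N(4,1) near the peak
    w = np.exp(-0.5 * x_q**2 + 0.5 * (x_q - 4.0)**2)   # weights p(x)/q(x)
    weighted = f(x_q) * w

    print('plain MC:   mean %.3e  est. var %.1e' % (plain.mean(), plain.var() / n))
    print('importance: mean %.3e  est. var %.1e' % (weighted.mean(), weighted.var() / n))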
Genetic interactions contribute less than additive effects to quantitative trait variation in yeast
Bloom, Joshua S.; Kotenko, Iulia; Sadhu, Meru J.; Treusch, Sebastian; Albert, Frank W.; Kruglyak, Leonid
2015-01-01
Genetic mapping studies of quantitative traits typically focus on detecting loci that contribute additively to trait variation. Genetic interactions are often proposed as a contributing factor to trait variation, but the relative contribution of interactions to trait variation is a subject of debate. Here we use a very large cross between two yeast strains to accurately estimate the fraction of phenotypic variance due to pairwise QTL–QTL interactions for 20 quantitative traits. We find that this fraction is 9% on average, substantially less than the contribution of additive QTL (43%). Statistically significant QTL–QTL pairs typically have small individual effect sizes, but collectively explain 40% of the pairwise interaction variance. We show that pairwise interaction variance is largely explained by pairs of loci at least one of which has a significant additive effect. These results refine our understanding of the genetic architecture of quantitative traits and help guide future mapping studies. PMID:26537231
Petersen, Maya; van der Laan, Mark
2015-01-01
In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC. PMID:26279737
[Theory, method and application of method R on estimation of (co)variance components].
Liu, Wen-Zhong
2004-07-01
Theory, method and application of Method R for the estimation of (co)variance components were reviewed so that the method may be used appropriately. Estimation requires R values, which are regressions of predicted random effects calculated from the complete dataset on predicted random effects calculated from random subsets of the same data. By using a multivariate iteration algorithm based on a transformation matrix, combined with the preconditioned conjugate gradient method for solving the mixed model equations, the computational efficiency of Method R is much improved. Method R is computationally inexpensive, and the sampling errors and approximate credible intervals of estimates can be obtained. Disadvantages of Method R include a larger sampling variance than other methods for the same data, and biased estimates in small datasets. As an alternative method, Method R can be used on larger datasets. It is necessary to study its theoretical properties and broaden its application range further.
Blackbourn, Luke A K; Tran, Chuong V
2014-08-01
We study inertial-range dynamics and scaling laws in unforced two-dimensional magnetohydrodynamic turbulence in the regime of moderately small and small initial magnetic-to-kinetic-energy ratio r_0, with an emphasis on the latter. The regime of small r_0 corresponds to a relatively weak field and strong magnetic stretching, whereby the turbulence is characterized by an intense conversion of kinetic into magnetic energy (dynamo action in the three-dimensional context). This conversion is an inertial-range phenomenon and, upon becoming quasisaturated, deposits the converted energy within the inertial range rather than transferring it to the small scales. As a result, the magnetic-energy spectrum E_b(k) in the inertial range can become quite shallow and may not be adequately explained or understood in terms of conventional cascade theories. It is demonstrated by numerical simulations at high Reynolds numbers (and unity magnetic Prandtl number) that the energetics and inertial-range scaling depend strongly on r_0. In particular, for fully developed turbulence with r_0 in the range [1/4,1/4096], E_b(k) is found to scale as k^α, where α ≳ -1, including α > 0. The extent of such a shallow spectrum is limited, becoming broader as r_0 is decreased. The slope α increases as r_0 is decreased, appearing to tend to +1 in the limit of small r_0. This implies equipartition of magnetic energy among the Fourier modes of the inertial range and the scaling k^-1 of the magnetic potential variance, whose flux is direct rather than inverse. This behavior of the potential resembles that of a passive scalar. However, unlike a passive scalar whose variance dissipation rate slowly vanishes in the diffusionless limit, the dissipation rate of the magnetic potential variance scales linearly with the diffusivity in that limit. Meanwhile, the kinetic-energy spectrum is relatively steep, followed by a much shallower tail due to strong antidynamo excitation. This gives rise to a total-energy spectrum poorly obeying a power-law scaling.
Histogram contrast analysis and the visual segregation of IID textures.
Chubb, C; Econopouly, J; Landy, M S
1994-09-01
A new psychophysical methodology is introduced, histogram contrast analysis, that allows one to measure stimulus transformations, f, used by the visual system to draw distinctions between different image regions. The method involves the discrimination of images constructed by selecting texture micropatterns randomly and independently (across locations) on the basis of a given micropattern histogram. Different components of f are measured by use of different component functions to modulate the micropattern histogram until the resulting textures are discriminable. When no discrimination threshold can be obtained for a given modulating component function, a second titration technique may be used to measure the contribution of that component to f. The method includes several strong tests of its own assumptions. An example is given of the method applied to visual textures composed of small, uniform squares with randomly chosen gray levels. In particular, for a fixed mean gray level μ and a fixed gray-level variance σ², histogram contrast analysis is used to establish that the class S of all textures composed of small squares with jointly independent, identically distributed gray levels with mean μ and variance σ² is perceptually elementary in the following sense: there exists a single, real-valued function f_S of gray level, such that two textures I and J in S are discriminable only if the average value of f_S applied to the gray levels in I is significantly different from the average value of f_S applied to the gray levels in J. Finally, histogram contrast analysis is used to obtain a seventh-order polynomial approximation of f_S.
Biesbroek, J Matthijs; Weaver, Nick A; Hilal, Saima; Kuijf, Hugo J; Ikram, Mohammad Kamran; Xu, Xin; Tan, Boon Yeow; Venketasubramanian, Narayanaswamy; Postma, Albert; Biessels, Geert Jan; Chen, Christopher P L H
2016-01-01
Studies on the impact of small vessel disease (SVD) on cognition generally focus on white matter hyperintensity (WMH) volume. The extent to which WMH location relates to cognitive performance has received less attention, but is likely to be functionally important. We examined the relation between WMH location and cognition in a memory clinic cohort of patients with sporadic SVD. A total of 167 patients with SVD were recruited from memory clinics. Assumption-free region of interest-based analyses based on major white matter tracts and voxel-wise analyses were used to determine the association between WMH location and executive functioning, visuomotor speed and memory. Region of interest-based analyses showed that WMHs located particularly within the anterior thalamic radiation and forceps minor were inversely associated with both executive functioning and visuomotor speed, independent of total WMH volume. Memory was significantly associated with WMH volume in the forceps minor, independent of total WMH volume. An independent assumption-free voxel-wise analysis identified strategic voxels in these same tracts. Region of interest-based analyses showed that WMH volume within the anterior thalamic radiation explained 6.8% of variance in executive functioning, compared to 3.9% for total WMH volume; WMH volume within the forceps minor explained 4.6% of variance in visuomotor speed and 4.2% of variance in memory, compared to 1.8% and 1.3% respectively for total WMH volume. Our findings identify the anterior thalamic radiation and forceps minor as strategic white matter tracts in which WMHs are most strongly associated with cognitive impairment in memory clinic patients with SVD. WMH volumes in individual tracts explained more variance in cognition than total WMH burden, emphasizing the importance of lesion location when addressing the functional consequences of WMHs.
The Cohesive Population Genetics of Molecular Drive
Ohta, Tomoko; Dover, Gabriel A.
1984-01-01
The long-term population genetics of multigene families is influenced by several biased and unbiased mechanisms of nonreciprocal exchanges (gene conversion, unequal exchanges, transposition) between member genes, often distributed on several chromosomes. These mechanisms cause fluctuations in the copy number of variant genes in an individual and lead to a gradual replacement of an original family of n genes (A) in N number of individuals by a variant gene (a). The process for spreading a variant gene through a family and through a population is called molecular drive. Consideration of the known slow rates of nonreciprocal exchanges predicts that the population variance in the copy number of gene a per individual is small at any given generation during molecular drive. Genotypes at a given generation are expected only to range over a small section of all possible genotypes from one extreme (n number of A) to the other (n number of a). A theory is developed for estimating the size of the population variance by using the concept of identity coefficients. In particular, the variance in the course of spreading of a single mutant gene of a multigene family was investigated in detail, and the theory of identity coefficients at the state of steady decay of genetic variability proved to be useful. Monte Carlo simulations and numerical analysis based on realistic rates of exchange in families of known size reveal the correctness of the theoretical prediction and also assess the effect of bias in turnover. The population dynamics of molecular drive in gradually increasing the mean copy number of a variant gene without the generation of a large variance (population cohesion) is of significance regarding potential interactions between natural selection and molecular drive. PMID:6500260
The cohesive population genetics of molecular drive.
Ohta, T; Dover, G A
1984-10-01
The long-term population genetics of multigene families is influenced by several biased and unbiased mechanisms of nonreciprocal exchanges (gene conversion, unequal exchanges, transposition) between member genes, often distributed on several chromosomes. These mechanisms cause fluctuations in the copy number of variant genes in an individual and lead to a gradual replacement of an original family of n genes (A) in N number of individuals by a variant gene (a). The process for spreading a variant gene through a family and through a population is called molecular drive. Consideration of the known slow rates of nonreciprocal exchanges predicts that the population variance in the copy number of gene a per individual is small at any given generation during molecular drive. Genotypes at a given generation are expected only to range over a small section of all possible genotypes from one extreme (n number of A) to the other (n number of a). A theory is developed for estimating the size of the population variance by using the concept of identity coefficients. In particular, the variance in the course of spreading of a single mutant gene of a multigene family was investigated in detail, and the theory of identity coefficients at the state of steady decay of genetic variability proved to be useful. Monte Carlo simulations and numerical analysis based on realistic rates of exchange in families of known size reveal the correctness of the theoretical prediction and also assess the effect of bias in turnover. The population dynamics of molecular drive in gradually increasing the mean copy number of a variant gene without the generation of a large variance (population cohesion) is of significance regarding potential interactions between natural selection and molecular drive.
Four decades of implicit Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaber, Allan B.
In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Here, we also consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.
Four decades of implicit Monte Carlo
Wollaber, Allan B.
2016-02-23
In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Here, we also consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.
Developing and Evaluating New Methods for Assessing Concurrent Environmental Exposures
Summary of purpose and scope (no longer than 200 words): One limitation to current environmental health research is the focus on single contaminant exposures. Each exposure estimated in epidemiologic models accounts for a relatively small proportion of observed variance in health...
Average properties of bidisperse bubbly flows
NASA Astrophysics Data System (ADS)
Serrano-García, J. C.; Mendez-Díaz, S.; Zenit, R.
2018-03-01
Experiments were performed in a vertical channel to study the properties of a bubbly flow composed of two distinct bubble size species. Bubbles were produced using a capillary bank with tubes of two distinct inner diameters; the flow through each capillary size was controlled such that the proportion of large and small bubbles could be varied. Using water and water-glycerin mixtures, a wide range of Reynolds and Weber numbers was investigated. The gas volume fraction ranged between 0.5% and 6%. Measurements of the mean bubble velocity of each species and of the liquid velocity variance were obtained and contrasted with monodisperse flows at equivalent gas volume fractions. We found that bidispersity can induce a reduction of the mean bubble velocity of the large species; for the small species, the bubble velocity can increase, decrease, or remain unaffected depending on the flow conditions. The liquid velocity variance of the bidisperse flows is, in general, bounded by the values of the small and large monodisperse cases; interestingly, in some cases, the liquid velocity fluctuations can be larger than in either monodisperse case. A simple model for the liquid agitation in bidisperse flows is proposed, with good agreement with the experimental measurements.
Multi-objective Optimization of Solar Irradiance and Variance at Pertinent Inclination Angles
NASA Astrophysics Data System (ADS)
Jain, Dhanesh; Lalwani, Mahendra
2018-05-01
The performance of a photovoltaic panel is highly affected by changes in atmospheric conditions and by the angle of inclination. This article evaluates the optimum tilt angle and orientation angle (surface azimuth angle) for a solar photovoltaic array in order to maximize solar irradiance and to reduce the variance of radiation over different sets or subsets of time periods. Non-linear regression and the adaptive neuro-fuzzy inference system (ANFIS) are used for predicting solar radiation, with ANFIS giving more accurate results than non-linear regression. These results are further used for evaluating the correlation and for estimating the optimum combination of tilt angle and orientation angle with the help of the general algebraic modelling system and a multi-objective genetic algorithm. The hourly average solar irradiation is calculated at different combinations of tilt angle and orientation angle using horizontal-surface radiation data from Jodhpur (Rajasthan, India). The hourly average solar irradiance is calculated for three cases: zero variance, actual variance, and double variance, at different time scenarios. It is concluded that monthly adjustment of the angles produces better results than bimonthly, seasonal, half-yearly, or yearly adjustment. The gain obtained for a monthly varying angle is 4.6% with zero variance and 3.8% with actual variance, relative to an annually fixed angle.
Optimal control of LQG problem with an explicit trade-off between mean and variance
NASA Astrophysics Data System (ADS)
Qian, Fucai; Xie, Guo; Liu, Ding; Xie, Wenfang
2011-12-01
For discrete-time linear-quadratic Gaussian (LQG) control problems, a utility function on the expectation and the variance of the conventional performance index is considered. The utility function is viewed as an overall objective of the system and can perform the optimal trade-off between the mean and the variance of the performance index. The nonlinear utility function is first converted into an auxiliary parameter optimisation problem over the expectation and the variance. Then an optimal closed-loop feedback controller for the nonseparable mean-variance minimisation problem is designed by nonlinear mathematical programming. Finally, simulation results are given to verify the effectiveness of the proposed algorithm.
NASA Astrophysics Data System (ADS)
Hilfinger, Andreas; Chen, Mark; Paulsson, Johan
2012-12-01
Studies of stochastic biological dynamics typically compare observed fluctuations to theoretically predicted variances, sometimes after separating the intrinsic randomness of the system from the enslaving influence of changing environments. But variances have been shown to discriminate surprisingly poorly between alternative mechanisms, while for other system properties no approaches exist that rigorously disentangle environmental influences from intrinsic effects. Here, we apply the theory of generalized random walks in random environments to derive exact rules for decomposing time series and higher statistics, rather than just variances. We show for which properties and for which classes of systems intrinsic fluctuations can be analyzed without accounting for extrinsic stochasticity and vice versa. We derive two independent experimental methods to measure the separate noise contributions and show how to use the additional information in temporal correlations to detect multiplicative effects in dynamical systems.
Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model
NASA Astrophysics Data System (ADS)
Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.
2017-09-01
The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.
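Spatial Simulated Annealing itself is a small generic loop; a hedged sketch is given below, with avg_kriging_variance standing in for the space-time averaged KED variance (the real objective would require the fitted geostatistical model) and perturb for a move that shifts one gauge by a small random step. Both callables are assumptions for illustration.

    import math, random

    def ssa(initial_design, avg_kriging_variance, perturb,
            n_iter=10_000, t0=1.0, cooling=0.999, seed=0):
        rng = random.Random(seed)
        design = list(initial_design)
        cur = best = avg_kriging_variance(design)
        temp = t0
        for _ in range(n_iter):
            cand = perturb(design, rng)
            val = avg_kriging_variance(cand)
            # accept improvements always, uphill moves with Metropolis probability
            if val < cur or rng.random() < math.exp((cur - val) / temp):
                design, cur = cand, val
                best = min(best, cur)
            temp *= cooling    # geometric cooling schedule
        return design, best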
Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation
2008-12-01
slight longitudinal variations, with secondary high-latitude peaks occurring over Greenland and Europe. As the QBO changes to the westerly phase, the...equatorial GW temperature variances from suborbital data (e.g., Eckermann et al. 1995). The extratropical wave variances are generally larger in the...emanating from tropopause altitudes, presumably radiated from tropospheric jet stream instabilities associated with baroclinic storm systems that
Luan, Sheng; Luo, Kun; Chai, Zhan; Cao, Baoxiang; Meng, Xianhong; Lu, Xia; Liu, Ning; Xu, Shengyu; Kong, Jie
2015-12-14
Our aim was to estimate the genetic parameters for the direct genetic effect (DGE) and indirect genetic effects (IGE) on adult body weight in the Pacific white shrimp. IGE is the heritable effect of an individual on the trait values of its group mates. To examine IGE on body weight, 4725 shrimp from 105 tagged families were tested in multiple small test groups (MSTG). Each family was separated into three groups (15 shrimp per group) that were randomly assigned to 105 concrete tanks with shrimp from two other families. To estimate breeding values, one large test group (OLTG) in a 300 m² circular concrete tank was used for the communal rearing of 8398 individuals from 105 families. Body weight was measured after a growth-test period of more than 200 days. Variance components for body weight in the MSTG programs were estimated using an animal model excluding or including IGE, whereas variance components in the OLTG programs were estimated using a conventional animal model that included only DGE. The correlation of DGE between MSTG and OLTG programs was estimated by a two-trait animal model that included or excluded IGE. Heritability estimates for body weight from the conventional animal model in MSTG and OLTG programs were 0.26 ± 0.13 and 0.40 ± 0.06, respectively. The log likelihood ratio test revealed significant IGE on body weight. Total heritable variance was the sum of direct genetic variance (43.5%), direct-indirect genetic covariance (2.1%), and indirect genetic variance (54.4%). It represented 73% of the phenotypic variance and was more than two-fold greater than that (32%) obtained by using a classical heritability model for body weight. Correlations of DGE on body weight between MSTG and OLTG programs were intermediate regardless of whether IGE were included or not in the model. Our results suggest that social interactions contributed to a large part of the heritable variation in body weight. Small and non-significant direct-indirect genetic correlations implied that neutral or slightly cooperative heritable interactions, rather than competition, were dominant in this population but this may be due to the low rearing density.
Estimating the Magnitude and Frequency of Floods in Small Urban Streams in South Carolina, 2001
Feaster, Toby D.; Guimaraes, Wladimir B.
2004-01-01
The magnitude and frequency of floods at 20 streamflow-gaging stations on small, unregulated urban streams in or near South Carolina were estimated by fitting the measured water-year peak flows to a log-Pearson Type-III distribution. The period of record (through September 30, 2001) for the measured water-year peak flows ranged from 11 to 25 years with a mean and median length of 16 years. The drainage areas of the streamflow-gaging stations ranged from 0.18 to 41 square miles. Based on the flood-frequency estimates from the 20 streamflow-gaging stations (13 in South Carolina; 4 in North Carolina; and 3 in Georgia), generalized least-squares regression was used to develop regional regression equations. These equations can be used to estimate the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows for small urban streams in the Piedmont, upper Coastal Plain, and lower Coastal Plain physiographic provinces of South Carolina. The most significant explanatory variables from this analysis were main-channel length, percent impervious area, and basin development factor. Mean standard errors of prediction for the regression equations ranged from -25 to 33 percent for the 10-year recurrence-interval flows and from -35 to 54 percent for the 100-year recurrence-interval flows. The U.S. Geological Survey has developed a Geographic Information System application called StreamStats that makes the process of computing streamflow statistics at ungaged sites faster and more consistent than manual methods. This application was developed in the Massachusetts District, and ongoing work is being done in other districts to develop similar applications using streamflow statistics relative to those respective States. Considering the future possibility of implementing StreamStats in South Carolina, an alternative set of regional regression equations was developed using only main-channel length and impervious area. This was done because no digital coverages are currently available for basin development factor and, therefore, it could not be included in the StreamStats application. The average mean standard error of prediction for the alternative equations was 2 to 5 percent larger than the standard errors for the equations that contained basin development factor. For the urban streamflow-gaging stations in South Carolina, measured water-year peak flows were compared with those from an earlier urban flood-frequency investigation. The peak flows from the earlier investigation were computed using a rainfall-runoff model. At many of the sites, graphical comparisons indicated that the variance of the measured data was much less than the variance of the simulated data. Several statistical tests were applied to compare the variances and the means of the measured and simulated data for each site. The results indicated that the variances were significantly different for 11 of the 13 South Carolina streamflow-gaging stations. For one streamflow-gaging station, the test for normality, which underlies the comparison of variances, indicated that neither the measured data nor the simulated data were distributed normally; therefore, the test for differences in the variances was not used for that streamflow-gaging station. Another statistical test was used to test for statistically significant differences in the means of the measured and simulated data.
The results indicated that for 5 of the 13 urban streamflow-gaging stations in South Carolina there was a statistically significant difference in the means of the two data sets. For comparison purposes, and to test the hypothesis that there may have been climatic differences between the period in which the peak-flow data were measured and the period for which historic rainfall data were used to compute the simulated peak flows, 16 rural streamflow-gaging stations with long-term records were reviewed using techniques similar to those used for the measured an
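The core log-Pearson Type III computation mentioned above is compact; here is a hedged method-of-moments sketch on log10 annual peaks. The full USGS procedure involves additional steps (e.g. regional skew weighting and low-outlier screening) that are omitted here.

    import numpy as np
    from scipy import stats

    def lp3_quantile(annual_peaks, return_period):
        """T-year peak flow from a log-Pearson Type III fit."""
        logs = np.log10(np.asarray(annual_peaks, dtype=float))
        mean, std = logs.mean(), logs.std(ddof=1)
        skew = stats.skew(logs, bias=False)
        # frequency factor K from the standardized Pearson III distribution
        k = stats.pearson3.ppf(1.0 - 1.0 / return_period, skew)
        return 10.0 ** (mean + k * std)

    # e.g. 100-year recurrence-interval flow: lp3_quantile(peaks_cfs, 100)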
Active learning of cortical connectivity from two-photon imaging data.
Bertrán, Martín A; Martínez, Natalia L; Wang, Ye; Dunson, David; Sapiro, Guillermo; Ringach, Dario
2018-01-01
Understanding how groups of neurons interact within a network is a fundamental question in systems neuroscience. Instead of passively observing the ongoing activity of a network, we can typically perturb its activity, either by external sensory stimulation or directly via techniques such as two-photon optogenetics. A natural question is how to use such perturbations to identify the connectivity of the network efficiently. Here we introduce a method to infer sparse connectivity graphs from in-vivo, two-photon imaging of population activity in response to external stimuli. A novel aspect of the work is the introduction of a recommended distribution, incrementally learned from the data, to optimally refine the inferred network. Unlike existing system identification techniques, this "active learning" method automatically focuses its attention on key undiscovered areas of the network, instead of targeting global uncertainty indicators like parameter variance. We show how active learning leads to faster inference while at the same time providing confidence intervals for the network parameters. We present simulations on artificial small-world networks to validate the method and apply it to real data. Analysis of the frequency of recovered motifs shows that cortical networks are consistent with a small-world topology model.
A new measure based on degree distribution that links information theory and network graph analysis
2012-01-01
Background Detailed connection maps of human and nonhuman brains are being generated with new technologies, and graph metrics have been instrumental in understanding the general organizational features of these structures. Neural networks appear to have small-world properties: they have clustered regions, while maintaining integrative features such as short average pathlengths. Results We captured the structural characteristics of clustered networks with short average pathlengths through our own variable, System Difference (SD), which is computationally simple and calculable for larger graph systems. SD is a Jaccardian measure generated by averaging all of the differences in the connection patterns between any two nodes of a system. We calculated SD over large random samples of matrices and found that high-SD matrices have a low average pathlength and a larger number of clustered structures. SD is a measure of degree distribution, with high-SD matrices maximizing entropic properties. Phi (Φ), an information theory metric that assesses a system's capacity to integrate information, correlated well with SD, with SD explaining over 90% of the variance in systems above 11 nodes (tested for 4 to 13 nodes). However, newer versions of Φ do not correlate well with the SD metric. Conclusions The new network measure, SD, provides a link between high entropic structures and degree distributions as related to small-world properties. PMID:22726594
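To illustrate the definition, here is a hedged Python sketch of an SD-style statistic: a Jaccard-type difference between the connection patterns of every node pair, averaged over all pairs. The exact normalization used in the paper may differ; this is one plausible reading.

    import numpy as np
    from itertools import combinations

    def system_difference(adj):
        """Average Jaccard-style difference over all node pairs."""
        a = np.asarray(adj, dtype=bool)
        diffs = []
        for i, j in combinations(range(a.shape[0]), 2):
            union = np.logical_or(a[i], a[j]).sum()
            sym_diff = np.logical_xor(a[i], a[j]).sum()
            diffs.append(sym_diff / union if union else 0.0)
        return float(np.mean(diffs))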
Headwater peatland channels in south-eastern Australia; the attainment of equilibrium
NASA Astrophysics Data System (ADS)
Nanson, R. A.; Cohen, T. J.
2014-05-01
Many small headwater catchments (< 50 km²) in temperate south-eastern Australia store sediment in valley fills. While accumulation in some of these systems commenced up to 30,000 years ago, most did not commence filling with peat or clastic material until at least the mid Holocene. In such headwater settings, many clastic valley fills develop cut-and-fill channels, which contrast with some peatland settings where sinuous equilibrium channels have evolved. Four peatland systems within this dataset demonstrate stable channel systems which span nearly the full spectrum of observed valley-floor slopes. We assess new and published longitudinal data from these four channels and demonstrate that each of these channels has achieved equilibrium profiles. New and published flow and survey data are synthesised to demonstrate how these peatland systems have attained equilibrium. Low rates of sediment supply and exceptionally high bank strengths have resulted in low width-to-depth ratios which accommodate rapid changes in flow velocity and depth with changes in discharge. In small peatland channels, planform adjustments have been sufficient to counter the energy provided by these hydraulically efficient cross-sections and have enabled the achievement of regime energy-slopes. In larger and higher energy peatland channels, large, armoured, stable bedforms have developed. These bedforms integrate with planform adjustments to maintain a condition of minimum variance in energy losses as represented by the slope profiles and, therefore, a uniform increase in downstream entropy.
Genomic Prediction Accounting for Residual Heteroskedasticity.
Ou, Zhining; Tempelman, Robert J; Steibel, Juan P; Ernst, Catherine W; Bates, Ronald O; Bello, Nora M
2015-11-12
Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation, was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially on individuals of extreme genetic merit.
NASA Technical Reports Server (NTRS)
Fuelberg, H. E.; Meyer, P. J.
1984-01-01
Structure and correlation functions are used to describe atmospheric variability during the 10-11 April period of AVE-SESAME 1979 that coincided with the Red River Valley tornado outbreak. The special mesoscale rawinsonde data are employed in calculations involving temperature, geopotential height, horizontal wind speed, and mixing ratio. Functional analyses are performed in both the lower and upper troposphere for the composite 24 h experiment period and at individual 3 h observation times. Results show that mesoscale features are prominent during the composite period. Fields of mixing ratio and horizontal wind speed exhibit the greatest amounts of small-scale variance, whereas temperature and geopotential height contain the least. Results for the nine individual times show that small-scale variance is greatest during the convective outbreak. The functions also are used to estimate random errors in the rawinsonde data. Finally, sensitivity analyses are presented to quantify confidence limits of the structure functions.
Engen, Steinar; Saether, Bernt-Erik
2017-01-01
In a stable environment, evolution maximizes growth rates in populations that are not density regulated and the carrying capacity in the case of density regulation. In a fluctuating environment, evolution maximizes a function of growth rate, carrying capacity and environmental variance, tending to r-selection and K-selection under large and small environmental noise, respectively. Here we analyze a model in which birth and death rates depend on density through the same function but with independent strength of density dependence. As a special case, both functions may be linear, corresponding to logistic dynamics. It is shown that evolution maximizes a function of the deterministic growth rate r₀ and the lifetime reproductive success (LRS) R₀, both defined at small densities, as well as the environmental variance. Under large noise this function is dominated by r₀ and average lifetimes are small, whereas R₀ dominates and lifetimes are larger under small noise. Thus, K-selection is closely linked to selection for large R₀, so that evolution tends to maximize LRS in a stable environment. Consequently, different quantities (r₀ and R₀) tend to be maximized at low and high densities, respectively, favoring density-dependent changes in the optimal life history.
Denoising Medical Images using Calculus of Variations
Kohan, Mahdi Nakhaie; Behnam, Hamid
2011-01-01
We propose a method for medical image denoising using calculus of variations and local variance estimation by shaped windows. This method reduces any additive noise and preserves small patterns and edges of images. A pyramid structure-texture decomposition of images is used to separate noise and texture components based on local variance measures. The experimental results show that the proposed method has visual improvement as well as a better SNR, RMSE and PSNR than common medical image denoising methods. Experimental results in denoising a sample Magnetic Resonance image show that SNR, PSNR and RMSE have been improved by 19, 9 and 21 percent, respectively. PMID:22606674
Discrimination in measures of knowledge monitoring accuracy
Was, Christopher A.
2014-01-01
Knowledge monitoring predicts academic outcomes in many contexts. However, measures of knowledge monitoring accuracy are often incomplete. In the current study, a measure of students’ ability to discriminate known from unknown information as a component of knowledge monitoring was considered. Undergraduate students’ knowledge monitoring accuracy was assessed and used to predict final exam scores in a specific course. It was found that gamma, a measure commonly used as the measure of knowledge monitoring accuracy, accounted for a small, but significant amount of variance in academic performance whereas the discrimination and bias indexes combined to account for a greater amount of variance in academic performance. PMID:25339979
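For readers unfamiliar with the two accuracy measures, the sketch below computes Goodman-Kruskal gamma from concordant and discordant (judgment, outcome) pairs, plus a simple discrimination index (hit rate minus false-alarm rate). Whether this exactly matches the study's scoring is an assumption; inputs are taken to be numeric judgments and binary 0/1 outcomes.

    def gamma_statistic(judgments, outcomes):
        """Goodman-Kruskal gamma over all item pairs."""
        concordant = discordant = 0
        n = len(judgments)
        for i in range(n):
            for j in range(i + 1, n):
                prod = (judgments[i] - judgments[j]) * (outcomes[i] - outcomes[j])
                if prod > 0:
                    concordant += 1
                elif prod < 0:
                    discordant += 1
        total = concordant + discordant
        return (concordant - discordant) / total if total else 0.0

    def discrimination_index(judgments, outcomes):
        """Hit rate minus false-alarm rate (binary 0/1 inputs assumed)."""
        hits = sum(1 for j, o in zip(judgments, outcomes) if j and o)
        fas = sum(1 for j, o in zip(judgments, outcomes) if j and not o)
        known = sum(outcomes)
        unknown = len(outcomes) - known
        return hits / known - fas / unknown if known and unknown else 0.0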
Zachary, Chase E; Jiao, Yang; Torquato, Salvatore
2011-05-01
Hyperuniform many-particle distributions possess a local number variance that grows more slowly than the volume of an observation window, implying that the local density is effectively homogeneous beyond a few characteristic length scales. Previous work on maximally random strictly jammed sphere packings in three dimensions has shown that these systems are hyperuniform and possess unusual quasi-long-range pair correlations decaying as r^(-4), resulting in anomalous logarithmic growth in the number variance. However, recent work on maximally random jammed sphere packings with a size distribution has suggested that such quasi-long-range correlations and hyperuniformity are not universal among jammed hard-particle systems. In this paper, we show that such systems are indeed hyperuniform with signature quasi-long-range correlations by characterizing the more general local-volume-fraction fluctuations. We argue that the regularity of the void space induced by the constraints of saturation and strict jamming overcomes the local inhomogeneity of the disk centers to induce hyperuniformity in the medium with a linear small-wave-number nonanalytic behavior in the spectral density, resulting in quasi-long-range spatial correlations scaling with r^(-(d+1)) in d Euclidean space dimensions. A numerical and analytical analysis of the pore-size distribution for a binary maximally random jammed system in addition to a local characterization of the n-particle loops governing the void space surrounding the inclusions is presented in support of our argument. This paper is the first part of a series of two papers considering the relationships among hyperuniformity, jamming, and regularity of the void space in hard-particle packings.
Probability theory for 3-layer remote sensing in ideal gas law environment.
Ben-David, Avishai; Davidson, Charles E
2013-08-26
We extend the probability model for 3-layer radiative transfer [Opt. Express 20, 10004 (2012)] to ideal gas conditions where a correlation exists between transmission and temperature of each of the 3 layers. The effect on the probability density function for the at-sensor radiances is surprisingly small, and thus the added complexity of addressing the correlation can be avoided. The small overall effect is due to (a) small perturbations by the correlation on variance population parameters and (b) cancellation of perturbation terms that appear with opposite signs in the model moment expressions.
Age-specific survival of male golden-cheeked warblers on the Fort Hood Military Reservation, Texas
Duarte, Adam; Hines, James E.; Nichols, James D.; Hatfield, Jeffrey S.; Weckerly, Floyd W.
2014-01-01
Population models are essential components of large-scale conservation and management plans for the federally endangered Golden-cheeked Warbler (Setophaga chrysoparia; hereafter GCWA). However, existing models are based on vital rate estimates calculated using relatively small data sets that are now more than a decade old. We estimated more current, precise adult and juvenile apparent survival (Φ) probabilities and their associated variances for male GCWAs. In addition to providing estimates for use in population modeling, we tested hypotheses about spatial and temporal variation in Φ. We assessed whether a linear trend in Φ or a change in the overall mean Φ corresponded to an observed increase in GCWA abundance during 1992-2000 and if Φ varied among study plots. To accomplish these objectives, we analyzed long-term GCWA capture-resight data from 1992 through 2011, collected across seven study plots on the Fort Hood Military Reservation using a Cormack-Jolly-Seber model structure within program MARK. We also estimated Φ process and sampling variances using a variance-components approach. Our results did not provide evidence of site-specific variation in adult Φ on the installation. Because of a lack of data, we could not assess whether juvenile Φ varied spatially. We did not detect a strong temporal association between GCWA abundance and Φ. Mean estimates of Φ for adult and juvenile male GCWAs for all years analyzed were 0.47 with a process variance of 0.0120 and a sampling variance of 0.0113 and 0.28 with a process variance of 0.0076 and a sampling variance of 0.0149, respectively. Although juvenile Φ did not differ greatly from previous estimates, our adult Φ estimate suggests previous GCWA population models were overly optimistic with respect to adult survival. These updated Φ probabilities and their associated variances will be incorporated into new population models to assist with GCWA conservation decision making.
Prediction-error variance in Bayesian model updating: a comparative study
NASA Astrophysics Data System (ADS)
Asadollahi, Parisa; Li, Jian; Huang, Yong
2017-04-01
In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for robustness in the updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. Different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results, especially when the number of measurements is small.
A de-noising method using the improved wavelet threshold function based on noise variance estimation
NASA Astrophysics Data System (ADS)
Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao
2018-01-01
The precise and efficient noise variance estimation is very important for the processing of all kinds of signals while using the wavelet transform to analyze signals and extract signal features. In view of the problem that the accuracy of traditional noise variance estimation is greatly affected by the fluctuation of noise values, this study puts forward the strategy of using the two-state Gaussian mixture model to classify the high-frequency wavelet coefficients in the minimum scale, which takes both the efficiency and accuracy into account. According to the noise variance estimation, a novel improved wavelet threshold function is proposed by combining the advantages of hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved wavelet threshold function, the research puts forth a novel wavelet threshold de-noising method. The method is tested and validated using random signals and bench test data of an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on the noise variance estimation shows preferable performance in processing the testing signals of the electro-mechanical transmission system: it can effectively eliminate the interference of transient signals including voltage, current, and oil pressure and maintain the dynamic characteristics of the signals favorably.
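A hedged sketch of the kind of hard/soft compromise described above: continuous at the threshold like soft thresholding, but approaching the identity (hard thresholding) for large coefficients. The paper's exact function and the decay parameter alpha are assumptions here.

    import numpy as np

    def hybrid_threshold(w, lam, alpha=2.0):
        """Soft-like near the threshold, hard-like for large |w|."""
        w = np.asarray(w, dtype=float)
        out = np.zeros_like(w)
        keep = np.abs(w) > lam
        shrink = lam * np.exp(-alpha * (np.abs(w[keep]) - lam))
        out[keep] = np.sign(w[keep]) * (np.abs(w[keep]) - shrink)
        return out

The threshold lam would come from the noise variance estimate, e.g. the classic sigma = MAD/0.6745 rule applied to the finest-scale coefficients.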
Kerner, Matthew S; Kurrant, Anthony B
2003-12-01
This study was designed to test the efficacy of the theory of planned behavior in predicting intention to engage in leisure-time physical activity and leisure-time physical activity behavior of high school girls. Rating scales were used for assessing attitude to leisure-time physical activity, subjective norm, perceived control, and intention to engage in leisure-time physical activity among 129 ninth through twelfth graders. Leisure-time physical activity was obtained from 3-wk. diaries. The first hierarchical multiple regression indicated that perceived control added (R² change = .033) to the contributions of attitude to leisure-time physical activity and subjective norm in accounting for 50.7% of the total variance of intention to engage in leisure-time physical activity. The second regression analysis indicated that almost 10% of the variance of leisure-time physical activity was explained by intention to engage in leisure-time physical activity and perceived control, with perceived control contributing 6.4%. From both academic and theoretical standpoints, our findings support the theory of planned behavior, although quantitatively the variance of leisure-time physical activity was not well accounted for. In addition, considering the small percentage increase in variance explained by the addition of perceived control in explaining variance of intention to engage in leisure-time physical activity, the pragmatism of implementing the measure of perceived control is questionable for this population.
Gramlich, A; Tandy, S; Andres, C; Chincheros Paniagua, J; Armengot, L; Schneider, M; Schulin, R
2017-02-15
Cadmium (Cd) uptake by cocoa has recently attracted attention, after the European Union (EU) decided to establish values for tolerable Cd concentrations in cocoa products. Bean Cd concentrations from some cocoa provenances, especially from Latin America, were found to exceed these values. Cadmium uptake by cocoa is expected not only to depend on a variety of soil factors, but also on plant and management factors. In this study, we investigated the influence of different production systems on Cd uptake by cocoa in a long-term field trial in the Alto Beni Region of Bolivia, where cocoa trees are grown in monocultures and in agroforestry systems, both under organic and conventional management. Leaves, fruits and roots of two cultivars were sampled from each production system along with soil samples collected around these trees. Leaf, pod husk and bean samples were analysed for Cd, iron (Fe) and zinc (Zn), the roots for mycorrhizal abundance and the soil samples for 'total' and 'available' Cd, Fe and Zn as well as DGT-available Cd and Zn, pH, organic matter, texture, 'available' phosphorus (P) and potassium (K). Only a small part of the variance in bean and pod husk Cd was explained by management, soil and plant factors. Furthermore, the production systems and cultivars alone had no significant influence on leaf Cd. However, we found lower leaf Cd contents in agroforestry systems than in monocultures when analysed in combination with DGT-available soil Cd, cocoa cultivar and soil organic matter. Overall, this model explained 60% of the variance of the leaf Cd concentrations. We explain lower leaf Cd concentrations in agroforestry systems by competition for Cd uptake with other plants. The cultivar effect may be explained by cultivar-specific uptake capacities or by a growth effect translating into different uptake rates, as the cultivars were of different size.
Impact of Damping Uncertainty on SEA Model Response Variance
NASA Technical Reports Server (NTRS)
Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand
2010-01-01
Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However, these techniques do not account for uncertainties in the system properties. In the present paper, uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.
Optimal distribution of integration time for intensity measurements in Stokes polarimetry.
Li, Xiaobo; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie; Hu, Haofeng
2015-10-19
We consider the typical Stokes polarimetry system, which performs four intensity measurements to estimate a Stokes vector. We show that if the total integration time of intensity measurements is fixed, the variance of the Stokes vector estimator depends on the distribution of the integration time among the four intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the Stokes vector estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time by employing the Lagrange multiplier method. According to the theoretical analysis and real-world experiment, it is shown that the total variance of the Stokes vector estimator can be significantly decreased, by about 40% in the case discussed in this paper. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetric system.
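The structure of the Lagrange-multiplier solution is easy to reproduce: if the estimator variance decomposes as a sum of w_i/t_i terms (shot-noise-like scaling), the optimum under a fixed total time T is t_i proportional to sqrt(w_i). The weights below are illustrative, not the paper's values.

    import numpy as np

    def optimal_times(weights, total_time):
        """Allocate integration time to minimize sum(w_i / t_i)."""
        s = np.sqrt(np.asarray(weights, dtype=float))
        return total_time * s / s.sum()

    w = np.array([1.0, 2.0, 2.0, 1.0])   # per-measurement weights (assumed)
    t = optimal_times(w, total_time=1.0)
    print(np.sum(w / t))                 # optimal variance: (sum sqrt(w_i))^2 / T
    print(np.sum(w / 0.25))              # uniform allocation, for comparison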
Li, Xiaobo; Hu, Haofeng; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie
2016-04-04
We consider the degree of linear polarization (DOLP) polarimetry system, which performs two intensity measurements at orthogonal polarization states to estimate DOLP. We show that if the total integration time of intensity measurements is fixed, the variance of the DOLP estimator depends on the distribution of integration time between the two intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the DOLP estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time in an approximate way by employing the Delta method and the Lagrange multiplier method. According to the theoretical analyses and real-world experiments, it is shown that the variance of the DOLP estimator can be decreased for any value of DOLP. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetry system.
Estimation variance bounds of importance sampling simulations in digital communication systems
NASA Technical Reports Server (NTRS)
Lu, D.; Yao, K.
1991-01-01
In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
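As a toy illustration of why the IS parameter matters, the sketch below estimates a Gaussian tail probability with a mean-shifted proposal and reports the empirical estimator variance; the shift theta is the IS parameter here, and theta = 4 is a common heuristic rather than the bound-based choice derived in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n, theta = 100_000, 4.0                  # estimate P(X > 4), X ~ N(0, 1)
    x = rng.normal(theta, 1.0, n)            # proposal N(theta, 1)
    w = np.exp(-theta * x + 0.5 * theta**2)  # likelihood ratio N(0,1)/N(theta,1)
    samples = (x > 4.0) * w
    print(samples.mean())                    # IS estimate of the tail probability
    print(samples.var(ddof=1) / n)           # estimation variance, far below direct MC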
NASA Astrophysics Data System (ADS)
Visser, Eric P.; Disselhorst, Jonathan A.; van Lier, Monique G. J. T. B.; Laverman, Peter; de Jong, Gabie M.; Oyen, Wim J. G.; Boerman, Otto C.
2011-02-01
The image reconstruction algorithms provided with the Siemens Inveon small-animal PET scanner are filtered backprojection (FBP), 3-dimensional reprojection (3DRP), ordered subset expectation maximization in 2 or 3 dimensions (OSEM2D/3D) and maximum a posteriori (MAP) reconstruction. This study aimed at optimizing the reconstruction parameter settings with regard to image quality (IQ) as defined by the NEMA NU 4-2008 standards. The NEMA NU 4-2008 image quality phantom was used to determine image noise, expressed as percentage standard deviation in the uniform phantom region (%STD_unif), activity recovery coefficients for the FDG-filled rods (RC_rod), and spill-over ratios for the non-radioactive water- and air-filled phantom compartments (SOR_wat and SOR_air). Although not required by NEMA NU 4, we also determined a contrast-to-noise ratio for each rod (CNR_rod), expressing the trade-off between activity recovery and image noise. For FBP and 3DRP the cut-off frequency of the applied filters, and for OSEM2D and OSEM3D the number of iterations, was varied. For MAP, the "smoothing parameter" β and the type of uniformity constraint (variance or resolution) were varied. Results of these analyses were demonstrated in images of an FDG-injected rat showing tumours in the liver, and of a mouse injected with an 18F-labeled peptide, showing a small subcutaneous tumour and the cortex structure of the kidneys. Optimum IQ in terms of CNR_rod for the small-diameter rods was obtained using MAP with uniform variance and β=0.4. This setting led to RC_rod,1mm=0.21, RC_rod,2mm=0.57, %STD_unif=1.38, SOR_wat=0.0011, and SOR_air=0.00086. However, the highest activity recovery for the smallest rods with still very small %STD_unif was obtained using β=0.075, for which these IQ parameters were 0.31, 0.74, 2.67, 0.0041, and 0.0030, respectively. The different settings of reconstruction parameters were clearly reflected in the rat and mouse images as the trade-off between the recovery of small structures (blood vessels, small tumours, kidney cortex structure) and image noise in homogeneous body parts (healthy liver background). Highest IQ for the Inveon PET scanner was obtained using MAP reconstruction with uniform variance. The setting of β depended on the specific imaging goals.
Chen, Hsing Hung; Shen, Tao; Xu, Xin-Long; Ma, Chao
2013-01-01
The characteristics of firm expansion by differentiated products and by diversified products are quite different. However, the use of absorptive capacity to examine the impacts of these different modes of expansion on the performance of small solar energy firms has not previously been discussed. A conceptual model to analyze the tension between strategies and corporate performance is therefore proposed to fill this gap. After empirical investigation, the results show that stronger organizational institutions help small solar energy firms that expand by differentiated products increase consistency between strategies and corporate performance; conversely, stronger working attitudes with weak management controls help small solar energy firms that expand by diversified products reduce variance between strategies and corporate performance.
2012-01-01
Background To investigate whether different conditions of DNA structure and radiation treatment could modify heterogeneity of response, and additionally to study variance as a potential parameter of heterogeneity for radiosensitivity testing. Methods Two-hundred leukocytes per sample of healthy donors were split into four groups. I: Intact chromatin structure; II: Nucleoids of histone-depleted DNA; III: Nucleoids of histone-depleted DNA with 90 mM DMSO as antioxidant. Response to single (I-III) and double (IV) irradiation with 4 Gy and repair kinetics were evaluated using %Tail-DNA. Heterogeneity of DNA damage was determined by calculating the variance of DNA damage (V) and mean variance (Mvar); mutual comparisons were done by one-way analysis of variance (ANOVA). Results Heterogeneity of initial DNA damage (I, 0 min repair) increased without histones (II). The absence of histones was balanced by the addition of antioxidants (III). Repair reduced heterogeneity of all samples (with and without irradiation). However, double irradiation plus repair led to a higher level of heterogeneity, distinguishable from single irradiation and repair in intact cells. Increase of mean DNA damage was associated with a similarly elevated variance of DNA damage (r = +0.88). Conclusions Heterogeneity of DNA damage can be modified by histone level, antioxidant concentration, repair and radiation dose, and was positively correlated with DNA damage. Experimental conditions might be optimized by reducing scatter of comet assay data through repair and antioxidants, potentially allowing better discrimination of small differences. The amount of heterogeneity measured by variance might be an additional useful parameter to characterize radiosensitivity. PMID:22520045
Tang, Yongqiang
2017-12-01
Control-based pattern mixture models (PMM) and delta-adjusted PMMs are commonly used as sensitivity analyses in clinical trials with non-ignorable dropout. These PMMs assume that the statistical behavior of outcomes varies by pattern in the experimental arm in the imputation procedure, but the imputed data are typically analyzed by a standard method such as the primary analysis model. In the multiple imputation (MI) inference, Rubin's variance estimator is generally biased when the imputation and analysis models are uncongenial. One objective of the article is to quantify the bias of Rubin's variance estimator in the control-based and delta-adjusted PMMs for longitudinal continuous outcomes. These PMMs assume the same observed data distribution as the mixed effects model for repeated measures (MMRM). We derive analytic expressions for the MI treatment effect estimator and the associated Rubin's variance in these PMMs and MMRM as functions of the maximum likelihood estimator from the MMRM analysis and the observed proportion of subjects in each dropout pattern when the number of imputations is infinite. The asymptotic bias is generally small or negligible in the delta-adjusted PMM, but can be sizable in the control-based PMM. This indicates that the inference based on Rubin's rule is approximately valid in the delta-adjusted PMM. A simple variance estimator is proposed to ensure asymptotically valid MI inferences in these PMMs, and compared with the bootstrap variance. The proposed method is illustrated by the analysis of an antidepressant trial, and its performance is further evaluated via a simulation study.
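For reference, Rubin's combining rules themselves are only a few lines; the sketch below pools m point estimates and their within-imputation variances (the PMM-specific bias correction discussed above is not reproduced here).

    import numpy as np

    def rubin_pool(estimates, variances):
        """Pool m multiply-imputed analyses with Rubin's rules."""
        est = np.asarray(estimates, dtype=float)
        m = len(est)
        qbar = est.mean()                         # pooled point estimate
        w = np.asarray(variances, float).mean()   # within-imputation variance
        b = est.var(ddof=1)                       # between-imputation variance
        total = w + (1.0 + 1.0 / m) * b           # Rubin's total variance
        return qbar, total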
Adaptive cyclic physiologic noise modeling and correction in functional MRI.
Beall, Erik B
2010-03-30
Physiologic noise in BOLD-weighted MRI data is known to be a significant source of the variance, reducing the statistical power and specificity in fMRI and functional connectivity analyses. We show a dramatic improvement on current noise correction methods in both fMRI and fcMRI data that avoids overfitting. The traditional noise model is a Fourier series expansion superimposed on the periodicity of parallel measured breathing and cardiac cycles. Correction using this model results in removal of variance matching the periodicity of the physiologic cycles. Using this framework allows easy modeling of noise. However, using a large number of regressors comes at the cost of removing variance unrelated to physiologic noise, such as variance due to the signal of functional interest (overfitting the data). It is our hypothesis that there are a small variety of fits that describe all of the significantly coupled physiologic noise. If this is true, we can replace a large number of regressors used in the model with a smaller number of the fitted regressors and thereby account for the noise sources with a smaller reduction in variance of interest. We describe these extensions and demonstrate that we can preserve variance in the data unrelated to physiologic noise while removing physiologic noise equivalently, resulting in data with a higher effective SNR than with current correction techniques. Our results demonstrate a significant improvement in the sensitivity of fMRI (up to a 17% increase in activation volume for fMRI compared with higher order traditional noise correction) and functional connectivity analyses.
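The "traditional noise model" referred to above is a Fourier basis in the measured physiologic phase (RETROICOR-style); a hedged sketch is below. The proposed fitted-regressor reduction is not reproduced, and the function name is illustrative.

    import numpy as np

    def physio_regressors(phase, order=4):
        """Fourier regressors from physiologic phase in [0, 2*pi)."""
        cols = []
        for k in range(1, order + 1):
            cols.append(np.cos(k * phase))
            cols.append(np.sin(k * phase))
        return np.column_stack(cols)   # shape (n_volumes, 2 * order)

Regressing these columns out of each voxel time series removes variance locked to the cardiac and respiratory cycles; using fewer, fitted regressors is what limits the loss of variance of functional interest.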
Plastic strain is a mixture of avalanches and quasireversible deformations: Study of various sizes
NASA Astrophysics Data System (ADS)
Szabó, Péter; Ispánovity, Péter Dusán; Groma, István
2015-02-01
The size dependence of plastic flow is studied by discrete dislocation dynamics simulations of systems with various numbers of interacting dislocations while the stress is slowly increased. The regions between avalanches in the individual stress curves, as functions of the plastic strain, were found to be nearly linear and reversible; there the plastic deformation obeys an effective equation of motion with a nearly linear force. For small plastic deformation, the mean values of the stress-strain curves obey a power law over two decades. Here, and for somewhat larger plastic deformations, the mean stress-strain curves converge for larger sizes while their variances shrink, both indicating the existence of a thermodynamic limit. The converging averages decrease with increasing size, in accordance with size effects observed in experiments. For large plastic deformations, where steady flow sets in, the thermodynamic limit was not realized in this model system.
A comparison of coronal and interplanetary current sheet inclinations
NASA Technical Reports Server (NTRS)
Behannon, K. W.; Burlaga, L. F.; Hundhausen, A. J.
1983-01-01
The HAO white-light K-coronameter observations show that the inclination of the heliospheric current sheet at the base of the corona can be either large (nearly vertical with respect to the solar equator) or small during Carrington rotations 1660-1666, and can vary even within a single solar rotation. Voyager 1 and 2 magnetic field observations show crossings of the heliospheric current sheet at distances from the Sun of 1.4 and 2.8 AU. Two cases are considered, one in which the corresponding coronameter data indicate a nearly vertical (north-south) current sheet and another in which a nearly horizontal, near-equatorial current sheet is indicated. For the crossings of the vertical current sheet, a variance analysis based on hour averages of the magnetic field data gave a minimum variance direction consistent with a steep inclination. The horizontal current sheet was observed by Voyager as a region of mixed polarity and low speeds lasting several days, consistent with multiple crossings of a horizontal but irregular and fluctuating current sheet at 1.4 AU. However, variance analysis of individual current sheet crossings in this interval using 1.92 s averages did not give minimum variance directions consistent with a horizontal current sheet.
POSTERIOR PREDICTIVE MODEL CHECKS FOR DISEASE MAPPING MODELS. (R827257)
Disease incidence or disease mortality rates for small areas are often displayed on maps. Maps of raw rates, disease counts divided by the total population at risk, have been criticized as unreliable due to non-constant variance associated with heterogeneity in base population si...
Mixed emotions: Sensitivity to facial variance in a crowd of faces.
Haberman, Jason; Lee, Pegan; Whitney, David
2015-01-01
The visual system automatically represents summary information from crowds of faces, such as the average expression. This is a useful heuristic insofar as it provides critical information about the state of the world, not simply information about the state of one individual. However, the average alone is not sufficient for making decisions about how to respond to a crowd. The variance or heterogeneity of the crowd--the mixture of emotions--conveys information about the reliability of the average, essential for determining whether the average can be trusted. Despite its importance, the representation of variance within a crowd of faces has yet to be examined. This is addressed here in three experiments. In the first experiment, observers viewed a sample set of faces that varied in emotion, and then adjusted a subsequent set to match the variance of the sample set. To isolate variance as the summary statistic of interest, the average emotion of both sets was random. Results suggested that observers had information regarding crowd variance. The second experiment verified that this was indeed a uniquely high-level phenomenon, as observers were unable to derive the variance of an inverted set of faces as precisely as an upright set of faces. The third experiment replicated and extended the first two experiments using method-of-constant-stimuli. Together, these results show that the visual system is sensitive to emergent information about the emotional heterogeneity, or ambivalence, in crowds of faces.
NASA Astrophysics Data System (ADS)
Poddar, Raju; Zawadzki, Robert J.; Cortés, Dennis E.; Mannis, Mark J.; Werner, John S.
2015-06-01
We present in vivo volumetric depth-resolved vasculature images of the anterior segment of the human eye acquired with phase-variance based motion contrast using a high-speed (100 kHz, 10^5 A-scans/s) swept source optical coherence tomography system (SSOCT). High phase stability SSOCT imaging was achieved by using a computationally efficient phase stabilization approach. The human corneo-scleral junction and sclera were imaged with swept source phase-variance optical coherence angiography and compared with slit lamp images from the same eyes of normal subjects. Different features of the rich vascular system in the conjunctiva and episclera were visualized and described. This system can be used as a potential tool for ophthalmological research to determine changes in the outflow system, which may be helpful for identification of abnormalities that lead to glaucoma.
Variance decomposition in stochastic simulators.
Le Maître, O P; Knio, O M; Moraes, A
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
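For readers unfamiliar with variance-based sensitivities, the sketch below estimates first-order Sobol indices with a generic pick-freeze Monte Carlo scheme on a toy function; it illustrates the orthogonal variance decomposition named in the abstract, not the paper's Poisson-process reformulation of reaction channels:

```python
import numpy as np

def sobol_first_order(f, d, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    S_i = Var(E[f|X_i]) / Var(f) on the unit hypercube, using a
    Saltelli-type estimator."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.concatenate([fA, fB]).var()
    S = np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]                       # vary only input i
        S[i] = np.mean(fB * (f(AB) - fA)) / var  # Saltelli (2010) estimator
    return S

# Toy model with most variance carried by x0: expect S close to [0.94, 0.06].
print(sobol_first_order(lambda X: 4 * X[:, 0] + X[:, 1], d=2))
```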
System level analysis and control of manufacturing process variation
Hamada, Michael S.; Martz, Harry F.; Eleswarpu, Jay K.; Preissler, Michael J.
2005-05-31
A computer-implemented method is implemented for determining the variability of a manufacturing system having a plurality of subsystems. Each subsystem of the plurality of subsystems is characterized by signal factors, noise factors, control factors, and an output response, all having mean and variance values. Response models are then fitted to each subsystem to determine unknown coefficients for use in the response models that characterize the relationship between the signal factors, noise factors, control factors, and the corresponding output response having mean and variance values that are related to the signal factors, noise factors, and control factors. The response models for each subsystem are coupled to model the output of the manufacturing system as a whole. The coefficients of the fitted response models are randomly varied to propagate variances through the plurality of subsystems and values of signal factors and control factors are found to optimize the output of the manufacturing system to meet a specified criterion.
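A minimal sketch of the variance-propagation step described above, assuming two hypothetical fitted linear response models with made-up coefficient means and standard deviations; the actual method fits these models from subsystem experiments:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fitted response models for two coupled subsystems:
# y1 = b0 + b1*signal + b2*noise; y2 = c0 + c1*y1 + c2*control.
# Coefficient means/SDs stand in for fitted values and their uncertainty.
b = rng.normal([0.5, 2.0, 0.3], [0.05, 0.10, 0.08], size=(10_000, 3))
c = rng.normal([1.0, 0.8, -0.4], [0.05, 0.04, 0.06], size=(10_000, 3))

signal, noise, control = 1.2, 0.7, 0.9
y1 = b[:, 0] + b[:, 1] * signal + b[:, 2] * noise   # subsystem 1 output
y2 = c[:, 0] + c[:, 1] * y1 + c[:, 2] * control     # coupled system output

print(y2.mean(), y2.var())  # propagated mean and variance of the system
```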
Variance decomposition in stochastic simulators
NASA Astrophysics Data System (ADS)
Le Maître, O. P.; Knio, O. M.; Moraes, A.
2015-06-01
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
Analysis of a spatial tracking subsystem for optical communications
NASA Technical Reports Server (NTRS)
Win, Moe Z.; Chen, CHIEN-C.
1992-01-01
Spatial tracking plays a very critical role in designing optical communication systems because of the small angular beamwidth associated with the optical signal. One possible solution for spatial tracking is to use a nutating mirror which dithers the incoming beam at a rate much higher than the mechanical disturbances. A power detector then senses the change in detected power as the signal is reflected off the nutating mirror. This signal is then correlated with the nutator driver signals to obtain estimates of the azimuth and elevation tracking signals to control the fast scanning mirrors. A theoretical analysis is performed for a spatial tracking system using a nutator disturbed by shot noise and mechanical vibrations. Contributions of shot noise and mechanical vibrations to the total tracking error variance are derived. Given the vibration spectrum and the expected signal power, there exists an optimal amplitude for the nutation which optimizes the receiver performance. The expected performance of a nutator based system is estimated based on the choice of nutation amplitude.
Simulation of Autonomic Logistics System (ALS) Sortie Generation
2003-03-01
[Appendix B, ANOVA assumption tables: constant variance of the residuals checked with Breusch-Pagan chi-square tests for the Mission Capable Rate and Flying Scheduling Effectiveness models (n = 270), and independence checked with the Durbin-Watson statistic.]
Predicting Cost and Schedule Growth for Military and Civil Space Systems
2008-03-01
the Shapiro-Wilk test, and testing the residuals for constant variance using the Breusch-Pagan test. For logistic models, diagnostics include... the Breusch-Pagan test. With this test, a p-value below 0.05 rejects the null hypothesis that the residuals have constant variance. Thus, similar... to the Shapiro-Wilk test, because the optimal model will have constant variance of its residuals, this requires Breusch-Pagan p-values over 0.05
The Impact of Economic Factors and Acquisition Reforms on the Cost of Defense Weapon Systems
2006-03-01
test for homoskedasticity, the Breusch-Pagan test is employed. The null hypothesis of the Breusch-Pagan test is that the variance is equal to zero... made. Using the Breusch-Pagan test shown in Table 19 below, the prob > chi2 is greater than α = 0.05; therefore we fail to reject the null hypothesis... [Table 19: Breusch-Pagan test (H0: constant variance) estimated results for overrunpercent100.]
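Since the three reports above all rely on the Breusch-Pagan test, a compact numerical version may help. This sketch implements Koenker's studentized variant (LM = n·R² from regressing squared residuals on the regressors), which differs slightly from the classical ESS/2 form these documents may have used:

```python
import numpy as np

def breusch_pagan(resid, X):
    """Koenker's studentized Breusch-Pagan statistic: regress squared
    residuals on the regressors; LM = n * R^2 is asymptotically chi-square
    with dof = number of non-constant regressors under homoskedasticity."""
    n = len(resid)
    Z = np.column_stack([np.ones(n), X])
    u2 = resid ** 2
    beta, *_ = np.linalg.lstsq(Z, u2, rcond=None)
    fitted = Z @ beta
    r2 = 1 - ((u2 - fitted) ** 2).sum() / ((u2 - u2.mean()) ** 2).sum()
    return n * r2  # compare against the chi-square critical value

rng = np.random.default_rng(2)
x = rng.random(270)
homo = rng.normal(0.0, 1.0, 270)        # constant variance
hetero = rng.normal(0.0, 1.0, 270) * x  # variance grows with x
print(breusch_pagan(homo, x), breusch_pagan(hetero, x))  # small vs. large
```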
The reality and importance of founder speciation in evolution.
Templeton, Alan R
2008-05-01
A founder event occurs when a new population is established from a small number of individuals drawn from a large ancestral population. Mayr proposed that genetic drift in an isolated founder population could alter the selective forces in an epistatic system, an observation supported by recent studies. Carson argued that a period of relaxed selection could occur when a founder population is in an open ecological niche, allowing rapid population growth after the founder event. Selectable genetic variation can actually increase during this founder-flush phase due to recombination, enhanced survival of advantageous mutations, and the conversion of non-additive genetic variance into additive variance in an epistatic system, another empirically confirmed prediction. Templeton combined the theories of Mayr and Carson with population genetic models to predict the conditions under which founder events can contribute to speciation, and these predictions are strongly confirmed by the empirical literature. Much of the criticism of founder speciation is based upon equating founder speciation to an adaptive peak shift opposed by selection. However, Mayr, Carson and Templeton all modeled a positive interaction of selection and drift, and Templeton showed that founder speciation is incompatible with peak-shift conditions. Although rare, founder speciation can have a disproportionate importance in adaptive innovation and radiation, and examples are given to show that "rare" does not mean "unimportant" in evolution. Founder speciation also interacts with other speciation mechanisms such that a speciation event is not a one-dimensional process due to either selection alone or drift alone. (c) 2008 Wiley Periodicals, Inc.
Wood, Jacquelyn L A; Yates, Matthew C; Fraser, Dylan J
2016-06-01
It is widely thought that small populations should have less additive genetic variance and respond less efficiently to natural selection than large populations. Across taxa, we meta-analytically quantified the relationship between adult census population size (N) and additive genetic variance (proxy: h²) and found no reduction in h² with decreasing N; surveyed populations ranged from four to one million individuals (1735 h² estimates, 146 populations, 83 species). In terms of adaptation, ecological conditions may systematically differ between populations of varying N; the magnitude of selection these populations experience may therefore also differ. We thus also meta-analytically tested whether selection changes with N and found little evidence for systematic differences in the strength, direction or form of selection with N across different trait types and taxa (7344 selection estimates, 172 populations, 80 species). Collectively, our results (i) indirectly suggest that genetic drift neither overwhelms selection more in small than in large natural populations, nor weakens adaptive potential/h² in small populations, and (ii) imply that natural populations of varying sizes experience a variety of environmental conditions, without consistently differing habitat quality at small N. However, we caution that the data are currently insufficient to determine definitively whether some small populations may retain adaptive potential. Further study is required into (i) selection and genetic variation in completely isolated populations of known N, under-represented taxonomic groups, and nongeneralist species, (ii) adaptive potential using multidimensional approaches and (iii) the nature of selective pressures for specific traits.
In Vivo measurement of human body composition. [during continuous bed rest
NASA Technical Reports Server (NTRS)
Pace, N.; Grunbaum, B. W.; Kodama, A. M.; Price, D. C.
1975-01-01
Physiological changes in human beings were studied during a 21 day bed rest regime. Results of blood analyses indicated clearly that major metabolic adjustments occurred during prolonged bed rest. However, urinary metabolic analyses showed variances attributed to specimen collection inaccuracies and the small number of test subjects.
Eighty small isolated wetlands throughout Florida were sampled in 2005 to explore within-site variability of water chemistry parameters and relate water chemistry to macroinvertebrate and diatom community structure. Three samples or measures of water were collected within each si...
Field scale lysimeters to assess nutrient management impacts on runoff
USDA-ARS?s Scientific Manuscript database
Most empirical studies on the impact of field management on runoff water quality rely on edge-of-field monitoring, which is generally unreplicated and prone to high variances, or small plots, which constrain the use of conventional farm equipment and can hinder insight into landscape processes drivi...
75 FR 6364 - Process for Requesting a Variance From Vegetation Standards for Levees and Floodwalls
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-09
..., channels, or shore- line or river-bank protection systems such as revetments, sand dunes, and barrier...) toe (subject to preexisting right-of-way). f. The vegetation variance process is not a mechanism to...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordström, Jan, E-mail: jan.nordstrom@liu.se; Wahlsten, Markus, E-mail: markus.wahlsten@liu.se
We consider a hyperbolic system with uncertainty in the boundary and initial data. Our aim is to show that different boundary conditions give different convergence rates of the variance of the solution. This means that we can, with the same knowledge of data, get a more or less accurate description of the uncertainty in the solution. A variety of boundary conditions are compared and both analytical and numerical estimates of the variance of the solution are presented. As an application, we study the effect of this technique on Maxwell's equations as well as on a subsonic outflow boundary for the Euler equations.
Two Independent Contributions to Step Variability during Over-Ground Human Walking
Collins, Steven H.; Kuo, Arthur D.
2013-01-01
Human walking exhibits small variations in both step length and step width, some of which may be related to active balance control. Lateral balance is thought to require integrative sensorimotor control through adjustment of step width rather than length, contributing to greater variability in step width. Here we propose that step length variations are largely explained by the typical human preference for step length to increase with walking speed, which itself normally exhibits some slow and spontaneous fluctuation. In contrast, step width variations should have little relation to speed if they are produced more for lateral balance. As a test, we examined hundreds of overground walking steps by healthy young adults (N = 14, age < 40 yrs.). We found that slow fluctuations in self-selected walking speed (2.3% coefficient of variation) could explain most of the variance in step length (59%, P < 0.01). The residual variability not explained by speed was small (1.5% coefficient of variation), suggesting that step length is actually quite precise if not for the slow speed fluctuations. Step width varied over faster time scales and was independent of speed fluctuations, with variance 4.3 times greater than that for step length (P < 0.01) after accounting for the speed effect. That difference was further magnified by walking with eyes closed, which appears detrimental to control of lateral balance. Humans appear to modulate fore-aft foot placement in precise accordance with slow fluctuations in walking speed, whereas the variability of lateral foot placement appears more closely related to balance. Step variability is separable in both direction and time scale into balance- and speed-related components. The separation of factors not related to balance may reveal which aspects of walking are most critical for the nervous system to control. PMID:24015308
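The study's central decomposition, step-length variance explained by slow speed fluctuations versus residual variance, amounts to a regression R². A toy illustration with synthetic data (the coefficients are invented for the example):

```python
import numpy as np

# Slowly drifting walking speed (a scaled random walk) and step length
# that tracks speed plus a small independent residual.
rng = np.random.default_rng(3)
speed = 1.3 + 0.03 * np.cumsum(rng.normal(size=500)) / 25
step_length = 0.7 + 0.35 * (speed - 1.3) + rng.normal(0, 0.01, 500)

slope, intercept = np.polyfit(speed, step_length, 1)
resid = step_length - (slope * speed + intercept)
r2 = 1 - resid.var() / step_length.var()
print(f"variance explained by speed: {r2:.0%}")  # most of it, by construction
```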
Cho, Eunsoo; Compton, Donald L.; Fuchs, Doug; Fuchs, Lynn S.; Bouton, Bobette
2013-01-01
The purpose of this study was to examine the role of a dynamic assessment (DA) of decoding in predicting responsiveness to Tier 2 small group tutoring in a response-to-intervention model. First-grade students (n=134) who did not show adequate progress in Tier 1 based on 6 weeks of progress monitoring received Tier 2 small-group tutoring in reading for 14 weeks. Student responsiveness to Tier 2 was assessed weekly with word identification fluency (WIF). A series of conditional individual growth curve analyses were completed that modeled the correlates of WIF growth (final level of performance and growth). Its purpose was to examine the predictive validity of DA in the presence of 3 sets of variables: static decoding measures, Tier 1 responsiveness indicators, and pre-reading variables (phonemic awareness, rapid letter naming, oral vocabulary, and IQ). DA was a significant predictor of final level and growth, uniquely explaining 3% – 13% of the variance in Tier 2 responsiveness depending on the competing predictors in the model and WIF outcome (final level of performance or growth). Although the additional variances explained uniquely by DA were relatively small, results indicate the potential of DA in identifying Tier 2 nonresponders. PMID:23213050
Cho, Eunsoo; Compton, Donald L; Fuchs, Douglas; Fuchs, Lynn S; Bouton, Bobette
2014-01-01
The purpose of this study was to examine the role of a dynamic assessment (DA) of decoding in predicting responsiveness to Tier 2 small-group tutoring in a response-to-intervention model. First grade students (n = 134) who did not show adequate progress in Tier 1 based on 6 weeks of progress monitoring received Tier 2 small-group tutoring in reading for 14 weeks. Student responsiveness to Tier 2 was assessed weekly with word identification fluency (WIF). A series of conditional individual growth curve analyses were completed that modeled the correlates of WIF growth (final level of performance and growth). Its purpose was to examine the predictive validity of DA in the presence of three sets of variables: static decoding measures, Tier 1 responsiveness indicators, and prereading variables (phonemic awareness, rapid letter naming, oral vocabulary, and IQ). DA was a significant predictor of final level and growth, uniquely explaining 3% to 13% of the variance in Tier 2 responsiveness depending on the competing predictors in the model and WIF outcome (final level of performance or growth). Although the additional variances explained uniquely by DA were relatively small, results indicate the potential of DA in identifying Tier 2 nonresponders. © Hammill Institute on Disabilities 2012.
NASA Technical Reports Server (NTRS)
Menard, Richard; Chang, Lang-Ping
1998-01-01
A Kalman filter system designed for the assimilation of limb-sounding observations of stratospheric chemical tracers, which has four tunable covariance parameters, was developed in Part I (Menard et al. 1998). The assimilation results of CH4 observations from the Cryogenic Limb Array Etalon Sounder instrument (CLAES) and the Halogen Occultation Experiment instrument (HALOE) on board the Upper Atmosphere Research Satellite are described in this paper. A robust χ² criterion, which provides a statistical validation of the forecast and observational error covariances, was used to estimate the tunable variance parameters of the system. In particular, an estimate of the model error variance was obtained. The effect of model error on the forecast error variance became critical after only three days of assimilation of CLAES observations, although it took 14 days of forecast to double the initial error variance. We further found that the model error due to numerical discretization, as arising in the standard Kalman filter algorithm, is comparable in size to the physical model error due to wind and transport modeling errors together. Separate assimilations of CLAES and HALOE observations were compared to validate the state estimate away from the observed locations. A wave-breaking event that took place several thousands of kilometers away from the HALOE observation locations was well captured by the Kalman filter due to highly anisotropic forecast error correlations. The forecast error correlation in the assimilation of the CLAES observations was found to have a structure similar to that in pure forecast mode except for smaller length scales. Finally, we have conducted an analysis of the variance and correlation dynamics to determine their relative importance in chemical tracer assimilation problems. Results show that the optimality of a tracer assimilation system depends, for the most part, on having flow-dependent error correlation rather than on evolving the error variance.
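A generic form of the χ² covariance-consistency check mentioned here, not the paper's specific robust criterion: the innovations of a well-tuned Kalman filter satisfy E[dᵀS⁻¹d] = m, so persistent departures from m flag mistuned forecast or observation error variances.

```python
import numpy as np

def innovation_chi2(d, S):
    """Normalized innovation squared: for consistent covariances the
    innovation d = y - H x_f with covariance S = H P_f H^T + R satisfies
    E[d^T S^{-1} d] = len(d)."""
    return float(d @ np.linalg.solve(S, d))

# Toy check: innovations drawn from the assumed covariance give a
# statistic near m = 50; halving S would roughly double it.
rng = np.random.default_rng(4)
S = np.diag(np.full(50, 2.0))
d = rng.multivariate_normal(np.zeros(50), S)
print(innovation_chi2(d, S))
```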
Estimating stochastic noise using in situ measurements from a linear wavefront slope sensor.
Bharmal, Nazim Ali; Reeves, Andrew P
2016-01-15
It is shown how the solenoidal component of noise from the measurements of a wavefront slope sensor can be utilized to estimate the total noise: specifically, the ensemble noise variance. It is well known that solenoidal noise is orthogonal to the reconstruction of the wavefront under conditions of low scintillation (absence of wavefront vortices). Therefore, it can be retrieved even with a nonzero slope signal present. By explicitly estimating the solenoidal noise from an ensemble of slopes, it can be retrieved for any wavefront sensor configuration. Furthermore, the ensemble variance is demonstrated to be related to the total noise variance via a straightforward relationship. This relationship is revealed via the method of the explicit estimation: it consists of a small, heuristic set of four constants that do not depend on the underlying statistics of the incoming wavefront. These constants seem to apply to all situations (data from a laboratory experiment as well as many configurations of numerical simulation), so the method is concluded to be generic.
Fish play Minority Game as humans do
NASA Astrophysics Data System (ADS)
Liu, Ruey-Tarng; Chung, Fei Fang; Liaw, Sy-Sang
2012-01-01
We report the results of an unprecedented real Minority Game (MG) played by university staff members who clicked one of two identical buttons (A and B) on a computer screen while clocking in or out of work. We recorded the number of people who clicked button A for 1288 games, beginning on April 21, 2008 and ending on October 31, 2010, and calculated the variance of the number of people who clicked A as a function of time. The evolution of the variance shows that the global gain of selfish agents increases when a small portion of agents make persistent choices in the games. We also carried out another experiment in which we forced 101 fish to enter one of two symmetric chambers (A and B). We repeated the fish experiment 500 times and found that the variance of the number of fish that entered chamber A evolved in a way similar to the human MG, suggesting that fish have memory and can employ more strategies when facing the same situation again and again.
False alarms: How early warning signals falsely predict abrupt sea ice loss
NASA Astrophysics Data System (ADS)
Wagner, Till J. W.; Eisenman, Ian
2016-04-01
Uncovering universal early warning signals for critical transitions has become a coveted goal in diverse scientific disciplines, ranging from climate science to financial mathematics. There has been a flurry of recent research proposing such signals, with increasing autocorrelation and increasing variance being among the most widely discussed candidates. A number of studies have suggested that increasing autocorrelation alone may suffice to signal an impending transition, although some others have questioned this. Here we consider variance and autocorrelation in the context of sea ice loss in an idealized model of the global climate system. The model features no bifurcation, nor increased rate of retreat, as the ice disappears. Nonetheless, the autocorrelation of summer sea ice area is found to increase in a global warming scenario. The variance, by contrast, decreases. A simple physical mechanism is proposed to explain the occurrence of increasing autocorrelation but not variance when there is no approaching bifurcation. Additionally, a similar mechanism is shown to allow an increase in both indicators with no physically attainable bifurcation. This implies that relying on autocorrelation and variance as early warning signals can raise false alarms in the climate system, warning of "tipping points" that are not actually there.
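The two indicators can be computed with a simple sliding window. The toy AR(1) series below, with memory that grows while the driving noise is damped to hold the stationary variance fixed, reproduces the qualitative point that autocorrelation can rise without any rise in variance (parameters are illustrative):

```python
import numpy as np

def ews_indicators(x, window):
    """Sliding-window variance and lag-1 autocorrelation, the two
    early-warning indicators discussed in the abstract."""
    var, ac1 = [], []
    for i in range(len(x) - window + 1):
        w = x[i:i + window]
        var.append(w.var())
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(ac1)

# AR(1) with slowly increasing memory phi; scaling the noise by
# sqrt(1 - phi^2) keeps the stationary variance near 1 throughout.
rng = np.random.default_rng(5)
n = 2000
x = np.zeros(n)
phi = np.linspace(0.2, 0.9, n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + np.sqrt(1 - phi[t] ** 2) * rng.normal()

var, ac1 = ews_indicators(x, window=200)
print(var[0], var[-1])  # roughly flat variance
print(ac1[0], ac1[-1])  # rising autocorrelation
```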
20 CFR 658.601 - State agency responsibility.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Cost Accounting Reports shall be compared to planned levels. Variances between achievement and plan... district office, a report describing local office performance within the area or district jurisdiction... System (ESARS) tables and Cost Accounting Reports shall be compared to planned levels. Variances between...
20 CFR 658.601 - State agency responsibility.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Cost Accounting Reports shall be compared to planned levels. Variances between achievement and plan... district office, a report describing local office performance within the area or district jurisdiction... System (ESARS) tables and Cost Accounting Reports shall be compared to planned levels. Variances between...
20 CFR 658.601 - State agency responsibility.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Cost Accounting Reports shall be compared to planned levels. Variances between achievement and plan... district office, a report describing local office performance within the area or district jurisdiction... System (ESARS) tables and Cost Accounting Reports shall be compared to planned levels. Variances between...
2012-04-30
tool that provides a means of balancing capability development against cost and interdependent risks through the use of modern portfolio theory... (Focardi, 2007; Tutuncu & Cornuejols, 2007) that are extensions of modern portfolio and control theory. The reformulation allows for possible changes... Acquisition: Wave Model context • An Investment Portfolio Approach – Mean-Variance Approach – Mean-Variance: A Robust Version • Concept
The statistics of Pearce element diagrams and the Chayes closure problem
NASA Astrophysics Data System (ADS)
Nicholls, J.
1988-05-01
Pearce element ratios are defined as having a constituent in their denominator that is conserved in a system undergoing change. The presence of a conserved element in the denominator simplifies the statistics of such ratios and renders them subject to statistical tests, especially tests of significance of the correlation coefficient between Pearce element ratios. Pearce element ratio diagrams provide unambiguous tests of petrologic hypotheses because they are based on the stoichiometry of rock-forming minerals. There are three ways to recognize a conserved element: 1. The petrologic behavior of the element can be used to select conserved ones. They are usually the incompatible elements. 2. The ratio of two conserved elements will be constant in a comagmatic suite. 3. An element ratio diagram that is not constructed with a conserved element in the denominator will have a trend with a near zero intercept. The last two criteria can be tested statistically. The significance of the slope, intercept and correlation coefficient can be tested by estimating the probability of obtaining the observed values from a random population of arrays. This population of arrays must satisfy two criteria: 1. The population must contain at least one array that has the means and variances of the array of analytical data for the rock suite. 2. Arrays with the means and variances of the data must not be so abundant in the population that nearly every array selected at random has the properties of the data. The population of random closed arrays can be obtained from a population of open arrays whose elements are randomly selected from probability distributions. The means and variances of these probability distributions are themselves selected from probability distributions which have means and variances equal to a hypothetical open array that would give the means and variances of the data on closure. This hypothetical open array is called the Chayes array. Alternatively, the population of random closed arrays can be drawn from the compositional space available to rock-forming processes. The minerals comprising the available space can be described with one additive component per mineral phase and a small number of exchange components. This space is called Thompson space. Statistics based on either space lead to the conclusion that Pearce element ratios are statistically valid and that Pearce element diagrams depict the processes that create chemical inhomogeneities in igneous rock suites.
Chen, Hsing Hung; Shen, Tao; Xu, Xin-long; Ma, Chao
2013-01-01
The characteristics of a firm's expansion by differentiated products and by diversified products are quite different. However, a study employing absorptive capacity to examine the impacts of different modes of expansion on the performance of small solar energy firms has not been reported before. A conceptual model to analyze the tension between strategies and corporate performance is therefore proposed to fill this gap. After practical investigation, the results show that stronger organizational institutions help small solar energy firms expanded by differentiated products increase consistency between strategies and corporate performance; oppositely, stronger working attitudes with weak management controls help small solar energy firms expanded by diversified products reduce variance between strategies and corporate performance. PMID:24453837
Smelter, Andrey; Rouchka, Eric C; Moseley, Hunter N B
2017-08-01
Peak lists derived from nuclear magnetic resonance (NMR) spectra are commonly used as input data for a variety of computer assisted and automated analyses. These include automated protein resonance assignment and protein structure calculation software tools. Prior to these analyses, peak lists must be aligned to each other and sets of related peaks must be grouped based on common chemical shift dimensions. Even when programs can perform peak grouping, they require the user to provide uniform match tolerances or use default values. However, peak grouping is further complicated by multiple sources of variance in peak position, limiting the effectiveness of grouping methods that utilize uniform match tolerances. In addition, no method currently exists for deriving peak positional variances from single peak lists for grouping peaks into spin systems, i.e. spin system grouping within a single peak list. Therefore, we developed a complementary pair of peak list registration analysis and spin system grouping algorithms designed to overcome these limitations. We have implemented these algorithms into an approach that can identify multiple dimension-specific positional variances that exist in a single peak list and group peaks from a single peak list into spin systems. The resulting software tools generate a variety of useful statistics on both a single peak list and pairwise peak list alignment, especially for quality assessment of peak list datasets. We used a range of low- and high-quality experimental solution NMR and solid-state NMR peak lists to assess performance of our registration analysis and grouping algorithms. Analyses show that an algorithm using a single iteration with uniform match tolerances is only able to recover from 50 to 80% of the spin systems due to the presence of multiple sources of variance. Our algorithm recovers additional spin systems by reevaluating match tolerances in multiple iterations. To facilitate evaluation of the algorithms, we developed a peak list simulator within our nmrstarlib package that generates user-defined assigned peak lists from a given BMRB entry or database of entries. In addition, over 100,000 simulated peak lists with one or two sources of variance were generated to evaluate the performance and robustness of these new registration analysis and peak grouping algorithms.
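To make the grouping problem concrete, here is a toy one-dimensional version of uniform-tolerance grouping (the function name and data are invented); the paper's argument is precisely that a single tolerance like this fails when positional variance differs across dimensions:

```python
import numpy as np

def group_by_tolerance(shifts, tol):
    """Greedy 1-D grouping of chemical shifts: a peak joins the first
    existing group whose mean lies within tol, else starts a new group.
    A toy stand-in for uniform-tolerance peak grouping."""
    groups = []
    for s in np.sort(shifts):
        for g in groups:
            if abs(s - np.mean(g)) <= tol:
                g.append(s)
                break
        else:
            groups.append([s])
    return groups

# Expect three groups: [8.01, 8.02], [8.20, 8.21], [8.90].
print(group_by_tolerance(np.array([8.01, 8.02, 8.20, 8.21, 8.90]), tol=0.03))
```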
Recovering Wood and McCarthy's ERP-prototypes by means of ERP-specific procrustes-rotation.
Beauducel, André
2018-02-01
The misallocation of treatment-variance on the wrong component has been discussed in the context of temporal principal component analysis of event-related potentials. There is, until now, no rotation method that can perfectly recover Wood and McCarthy's prototypes without making use of additional information on treatment effects. In order to close this gap, two new methods for component rotation are proposed. After Varimax-prerotation, the first method identifies very small slopes of successive loadings. The corresponding loadings are set to zero in a target matrix for event-related orthogonal partial Procrustes- (EPP-) rotation. The second method generates Gaussian normal distributions around the peaks of the Varimax-loadings and performs orthogonal Procrustes-rotation towards these Gaussian distributions. Oblique versions of this Gaussian event-related Procrustes- (GEP) rotation and of EPP-rotation are based on Promax-rotation. A simulation study revealed that the new orthogonal rotations recover Wood and McCarthy's prototypes and eliminate misallocation of treatment-variance. In an additional simulation study with a more pronounced overlap of the prototypes, GEP Promax-rotation reduced the variance misallocation slightly more than EPP Promax-rotation. Comparison with existing methods: Varimax- and conventional Promax-rotations resulted in substantial misallocations of variance in simulation studies when components had temporal overlap. A substantially reduced misallocation of variance occurred with the EPP-, EPP Promax-, GEP-, and GEP Promax-rotations. Misallocation of variance can be minimized by means of the new rotation methods. Making use of information on the temporal order of the loadings may allow for improvements of the rotation of temporal PCA components. Copyright © 2017 Elsevier B.V. All rights reserved.
Genetic and environmental transmission of body mass index fluctuation.
Bergin, Jocilyn E; Neale, Michael C; Eaves, Lindon J; Martin, Nicholas G; Heath, Andrew C; Maes, Hermine H
2012-11-01
This study sought to determine the relationship between body mass index (BMI) fluctuation and cardiovascular disease phenotypes, diabetes, and depression and the role of genetic and environmental factors in individual differences in BMI fluctuation using the extended twin-family model (ETFM). This study included 14,763 twins and their relatives. Health and Lifestyle Questionnaires were obtained from 28,492 individuals from the Virginia 30,000 dataset including twins, parents, siblings, spouses, and children of twins. Self-report cardiovascular disease, diabetes, and depression data were available. From self-reported height and weight, BMI fluctuation was calculated as the difference between highest and lowest BMI after age 18, for individuals 18-80 years. Logistic regression analyses were used to determine the relationship between BMI fluctuation and disease status. The ETFM was used to estimate the significance and contribution of genetic and environmental factors, cultural transmission, and assortative mating components to BMI fluctuation, while controlling for age. We tested sex differences in additive and dominant genetic effects, parental, non-parental, twin, and unique environmental effects. BMI fluctuation was highly associated with disease status, independent of BMI. Genetic effects accounted for ~34 % of variance in BMI fluctuation in males and ~43 % of variance in females. The majority of the variance was accounted for by environmental factors, about a third of which were shared among twins. Assortative mating, and cultural transmission accounted for only a small proportion of variance in this phenotype. Since there are substantial health risks associated with BMI fluctuation and environmental components of BMI fluctuation account for over 60 % of variance in males and over 50 % of variance in females, environmental risk factors may be appropriate targets to reduce BMI fluctuation.
NASA Astrophysics Data System (ADS)
Kitterød, Nils-Otto
2017-08-01
Unconsolidated sediment cover thickness (D) above bedrock was estimated by using a publicly available well database from Norway, GRANADA. General challenges associated with such databases typically involve clustering and bias. However, if information about the horizontal distance to the nearest bedrock outcrop (L) is included, does the spatial estimation of D improve? This idea was tested by comparing two cross-validation results: ordinary kriging (OK), where L was disregarded, and co-kriging (CK), where the cross-covariance between D and L was included. The analysis showed only minor differences between OK and CK with respect to the differences between estimated and true values. In general, however, the CK results gave lower estimation variance than the OK results. All observations were declustered and transformed to standard normal probability density functions before estimation and back-transformed for the cross-validation analysis. The semivariogram analysis gave correlation lengths for D and L of approx. 10 and 6 km. These correlations reduce the estimation variance in the cross-validation analysis because more than 50 % of the data material had two or more observations within a radius of 5 km. The small-scale variance of D, however, was about 50 % of the total variance, which gave an accuracy of less than 60 % for most of the cross-validation cases. Despite the noisy character of the observations, the analysis demonstrated that L can be used as secondary information to reduce the estimation variance of D.
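A minimal ordinary-kriging sketch, assuming an exponential covariance model and illustrative depth-to-bedrock values rather than the study's fitted GRANADA semivariograms; it returns both the estimate and the kriging (estimation) variance that the cross-validation compares:

```python
import numpy as np

def ordinary_kriging(xy, z, x0, range_, sill):
    """Ordinary kriging with exponential covariance C(h) = sill*exp(-h/range_);
    returns the estimate and the kriging variance at location x0."""
    h = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    C = sill * np.exp(-h / range_)
    n = len(z)
    # Kriging system with the unbiasedness (Lagrange) constraint appended.
    A = np.block([[C, np.ones((n, 1))], [np.ones((1, n)), np.zeros((1, 1))]])
    c0 = sill * np.exp(-np.linalg.norm(xy - x0, axis=1) / range_)
    w = np.linalg.solve(A, np.append(c0, 1.0))
    est = w[:n] @ z
    var = sill - w[:n] @ c0 - w[n]  # kriging (estimation) variance
    return est, var

xy = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [4.0, 4.0]])
z = np.array([12.0, 9.0, 15.0, 10.0])  # hypothetical depths to bedrock (m)
print(ordinary_kriging(xy, z, np.array([2.0, 2.0]), range_=10.0, sill=25.0))
```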
Influential input classification in probabilistic multimedia models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.
1999-05-01
Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site-specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
Quality control and quality assurance in genotypic data for genome-wide association studies
Laurie, Cathy C.; Doheny, Kimberly F.; Mirel, Daniel B.; Pugh, Elizabeth W.; Bierut, Laura J.; Bhangale, Tushar; Boehm, Frederick; Caporaso, Neil E.; Cornelis, Marilyn C.; Edenberg, Howard J.; Gabriel, Stacy B.; Harris, Emily L.; Hu, Frank B.; Jacobs, Kevin; Kraft, Peter; Landi, Maria Teresa; Lumley, Thomas; Manolio, Teri A.; McHugh, Caitlin; Painter, Ian; Paschall, Justin; Rice, John P.; Rice, Kenneth M.; Zheng, Xiuwen; Weir, Bruce S.
2011-01-01
Genome-wide scans of nucleotide variation in human subjects are providing an increasing number of replicated associations with complex disease traits. Most of the variants detected have small effects and, collectively, they account for a small fraction of the total genetic variance. Very large sample sizes are required to identify and validate findings. In this situation, even small sources of systematic or random error can cause spurious results or obscure real effects. The need for careful attention to data quality has been appreciated for some time in this field, and a number of strategies for quality control and quality assurance (QC/QA) have been developed. Here we extend these methods and describe a system of QC/QA for genotypic data in genome-wide association studies. This system includes some new approaches that (1) combine analysis of allelic probe intensities and called genotypes to distinguish gender misidentification from sex chromosome aberrations, (2) detect autosomal chromosome aberrations that may affect genotype calling accuracy, (3) infer DNA sample quality from relatedness and allelic intensities, (4) use duplicate concordance to infer SNP quality, (5) detect genotyping artifacts from dependence of Hardy-Weinberg equilibrium (HWE) test p-values on allelic frequency, and (6) demonstrate sensitivity of principal components analysis (PCA) to SNP selection. The methods are illustrated with examples from the ‘Gene Environment Association Studies’ (GENEVA) program. The results suggest several recommendations for QC/QA in the design and execution of genome-wide association studies. PMID:20718045
Endogenous fluorescence emission of the ovary
NASA Astrophysics Data System (ADS)
Utzinger, Urs; Kirkpatrick, Nathaniel D.; Drezek, Rebekah A.; Brewer, Molly A.
2005-03-01
Epithelial ovarian cancer has the highest mortality rate among the gynecologic cancers. Early detection would significantly improve survival and quality of life of women at increased risk to develop ovarian cancer. We have constructed a device to investigate endogenous signals of the ovarian tissue surface in the UV-C to visible range and describe our initial investigation of the use of optical spectroscopy to characterize the condition of the ovary. We have acquired data from more than 33 patients. A table-top spectroscopy system was used to collect endogenous fluorescence with a fiberoptic probe that is compatible with endoscopic techniques. Samples were broken into groups: Normal-Low Risk (for developing ovarian cancer), Normal-High Risk, Benign, and Cancer. Rigorous statistical analysis was applied to the data using variance tests for direct intensity versus diagnostic group comparisons and principal component analysis (PCA) to study the variance of the whole data set. We conclude that the diagnostically most useful excitation wavelengths are located in the UV. Furthermore, our results indicate that UV-B and UV-C are most useful. A safety analysis indicates that UV-C imaging can be conducted at exposure levels below safety thresholds. We found that fluorescence excited in the UV-C and UV-B range increases from benign to normal to cancerous tissues. This is in contrast to the emission created with UV-A excitation, which decreased in the same order. We hypothesize that an increase of protein production and a decrease of fluorescence contributions of the extracellular matrix could explain this behavior. Variance analysis also identified fluctuation of fluorescence at 320/380 nm, which is associated with collagen cross-link residues. Small differences were observed between the group at high risk and the group at normal risk for ovarian cancer. High-risk samples deviated towards the cancer group and low-risk samples towards the benign group.
NASA Astrophysics Data System (ADS)
Koch, Karl
2002-10-01
The Vogtland region, in the border region of Germany and the Czech Republic, is of special interest for the identification of seismic events on a local and regional scale, since both earthquakes and explosions occur frequently in the same area, and thus are relevant for discrimination research for verification of the Comprehensive Nuclear Test Ban Treaty. Previous research on event discrimination using spectral decay and variance from data recorded by the GERESS array indicated that spectral variance determined for the S phase for the seismic events in the Vogtland region seems to be the most promising parameter for event discrimination, because this parameter provides for almost complete separation of the earthquake and explosion populations. Almost the entire set of Vogtland events used in this research and more than 3000 local events detected in Germany in 1998 and 1999 were analysed to determine spectral slopes and variance for the P- and S-wave windows from stacked spectra of recordings at the GERESS array. The results suggest that small values for the spectral variance are associated not only with earthquakes in the Vogtland region, but also with earthquakes in other parts of Germany and neighbouring countries. While mining blasts show larger spectral variance values, mining-induced events yield a wide range of values, for example, in the Lubin area. A threshold-based identification scheme was applied; almost all events classified as earthquakes are found in seismically active regions. While the earthquakes are uniformly distributed throughout the day, events classified as explosions correlate with normal working hours, which is when blasting is done in Germany. In this study spectral variance provides good event discrimination for events in other parts of Germany, not only for the Vogtland region, showing that this identification parameter may be transported to other geological regions.
Risk factors for near-miss events and safety incidents in pediatric radiation therapy.
Baig, Nimrah; Wang, Jiangxia; Elnahal, Shereef; McNutt, Todd; Wright, Jean; DeWeese, Theodore; Terezakis, Stephanie
2018-05-01
Factors contributing to safety- or quality-related incidents (e.g. variances) in children are unknown. We identified clinical and RT treatment variables associated with risk for variances in a pediatric cohort. Using our institution's incident learning system, 81 patients age ≤21 years old who experienced variances were compared to 191 pediatric patients without variances. Clinical and RT treatment variables were evaluated as potential predictors for variances using univariate and multivariate analyses. Variances were primarily documentation errors (n = 46, 57%) and were most commonly detected during treatment planning (n = 14, 21%). Treatment planning errors constituted the majority (n = 16 out of 29, 55%) of near-misses and safety incidents (NMSI), which excludes workflow incidents. Therapists reported the majority of variances (n = 50, 62%). Physician cross-coverage (OR = 2.1, 95% CI = 1.04-4.38) and 3D conformal RT (OR = 2.3, 95% CI = 1.11-4.69) increased variance risk. Conversely, age >14 years (OR = 0.5, 95% CI = 0.28-0.88) and diagnosis of abdominal tumor (OR = 0.2, 95% CI = 0.04-0.59) decreased variance risk. Variances in children occurred in early treatment phases, but were detected at later workflow stages. Quality measures should be implemented during early treatment phases with a focus on younger children and those cared for by cross-covering physicians. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Samboju, Vishal; Adams, Matthew; Salgaonkar, Vasant; Diederich, Chris J.; Cunha, J. Adam M.
2017-02-01
The speed of sound (SOS) for ultrasound devices used for imaging soft tissue is often calibrated to water, 1540 m/s [1], despite in-vivo soft tissue SOS varying from 1450 to 1613 m/s [2]. Images acquired with 1540 m/s and used in conjunction with stereotactic external coordinate systems can thus result in displacement errors of several millimeters. Ultrasound imaging systems are routinely used to guide interventional thermal ablation and cryoablation devices, or radiation sources for brachytherapy [3]. Brachytherapy uses small radioactive pellets, inserted interstitially with needles under ultrasound guidance, to eradicate cancerous tissue [4]. Since the radiation dose diminishes with distance from the pellet as 1/r², imaging uncertainty of a few millimeters can result in significant erroneous dose delivery [5,6]. Likewise, modeling of power deposition and thermal dose accumulation from ablative sources is also prone to errors due to placement offsets from SOS errors [7]. This work presents a method of mitigating needle placement error due to SOS variances without the need of ionizing radiation [2,8]. We demonstrate the effects of changes in dosimetry in a prostate brachytherapy environment due to patient-specific SOS variances and the ability to mitigate dose delivery uncertainty. Electromagnetic (EM) sensors embedded in the brachytherapy ultrasound system provide information regarding the 3D position and orientation of the ultrasound array. Algorithms using data from these two modalities are used to correct B-mode images to account for SOS errors. While ultrasound localization resulted in >3 mm displacements, EM resolution was verified to <1 mm precision using custom-built phantoms with various SOS, showing 1% accuracy in SOS measurement.
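The displacement at stake follows from simple time-of-flight scaling: B-mode depth is computed from echo time assuming 1540 m/s, so a different true tissue SOS shifts structures axially. A worked example with illustrative numbers:

```python
# Axial displacement from a speed-of-sound mismatch; values are illustrative.
c_assumed = 1540.0   # m/s, scanner calibration (water)
c_tissue = 1450.0    # m/s, hypothetical true tissue SOS
depth_image = 40.0   # mm, apparent depth on the B-mode image

depth_true = depth_image * (c_tissue / c_assumed)
print(f"axial offset: {depth_image - depth_true:.2f} mm")  # about 2.3 mm
```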
2004-03-01
Breusch-Pagan test for constant variance of the residuals. Using Microsoft Excel® we calculate a p-value of 0.841237. This high p-value, which is above... our alpha of 0.05, indicates that our residuals indeed pass the Breusch-Pagan test for constant variance. In addition to the assumption tests, we... Wilk Test for Normality – Support (Reduced) Model (OLS). Finally, we perform a Breusch-Pagan test for constant variance of the residuals. Using
Is my study system good enough? A case study for identifying maternal effects.
Holand, Anna Marie; Steinsland, Ingelin
2016-06-01
In this paper, we demonstrate how simulation studies can be used to answer questions about identifiability and consequences of omitting effects from a model. The methodology is presented through a case study where identifiability of genetic and/or individual (environmental) maternal effects is explored. Our study system is a wild house sparrow (Passer domesticus) population with known pedigree. We fit pedigree-based (generalized) linear mixed models (animal models), with and without additive genetic and individual maternal effects, and use deviance information criterion (DIC) for choosing between these models. Pedigree and R-code for simulations are available. For this study system, the simulation studies show that only large maternal effects can be identified. The genetic maternal effect (and similar for individual maternal effect) has to be at least half of the total genetic variance to be identified. The consequences of omitting a maternal effect when it is present are explored. Our results indicate that the total (genetic and individual) variance are accounted for. When an individual (environmental) maternal effect is omitted from the model, this only influences the estimated (direct) individual (environmental) variance. When a genetic maternal effect is omitted from the model, both (direct) genetic and (direct) individual variance estimates are overestimated.
Relating the Hadamard Variance to MCS Kalman Filter Clock Estimation
NASA Technical Reports Server (NTRS)
Hutsell, Steven T.
1996-01-01
The Global Positioning System (GPS) Master Control Station (MCS) currently makes significant use of the Allan Variance. This two-sample variance equation has proven excellent as a handy, understandable tool, both for time domain analysis of GPS cesium frequency standards, and for fine tuning the MCS's state estimation of these atomic clocks. The Allan Variance does not explicitly converge for the noise types of alpha less than or equal to minus 3 and can be greatly affected by frequency drift. Because GPS rubidium frequency standards exhibit non-trivial aging and aging noise characteristics, the basic Allan Variance analysis must be augmented in order to (a) compensate for a dynamic frequency drift, and (b) characterize two additional noise types, specifically alpha = minus 3 and alpha = minus 4. As the GPS program progresses, we will utilize a larger percentage of rubidium frequency standards than ever before. Hence, GPS rubidium clock characterization will require more attention than ever before. The three-sample variance, commonly referred to as a renormalized Hadamard Variance, is unaffected by linear frequency drift, converges for alpha greater than minus 5, and thus has utility for modeling noise in GPS rubidium frequency standards. This paper demonstrates the potential of Hadamard Variance analysis in GPS operations, and presents an equation that relates the Hadamard Variance to the MCS's Kalman filter process noises.
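The drift immunity described here is easy to verify numerically. The sketch below uses simplified non-overlapping estimators at the basic sampling interval, with an exaggerated drift for visibility:

```python
import numpy as np

def allan_var(y):
    """Two-sample (Allan) variance of fractional-frequency averages y."""
    return 0.5 * np.mean(np.diff(y) ** 2)

def hadamard_var(y):
    """Three-sample (Hadamard) variance: built on second differences,
    so a linear frequency drift cancels exactly."""
    return np.mean(np.diff(y, n=2) ** 2) / 6.0

rng = np.random.default_rng(6)
y = rng.normal(0.0, 1e-12, 10_000)  # white frequency noise
drift = 2e-12 * np.arange(10_000)   # exaggerated linear drift
print(allan_var(y), allan_var(y + drift))        # drift inflates the Allan variance
print(hadamard_var(y), hadamard_var(y + drift))  # Hadamard variance is unaffected
```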
Hilal, Saima; Kuijf, Hugo J.; Ikram, Mohammad Kamran; Xu, Xin; Tan, Boon Yeow; Venketasubramanian, Narayanaswamy; Postma, Albert; Biessels, Geert Jan; Chen, Christopher P. L. H.
2016-01-01
Background and Purpose Studies on the impact of small vessel disease (SVD) on cognition generally focus on white matter hyperintensity (WMH) volume. The extent to which WMH location relates to cognitive performance has received less attention, but is likely to be functionally important. We examined the relation between WMH location and cognition in a memory clinic cohort of patients with sporadic SVD. Methods A total of 167 patients with SVD were recruited from memory clinics. Assumption-free region of interest-based analyses based on major white matter tracts and voxel-wise analyses were used to determine the association between WMH location and executive functioning, visuomotor speed and memory. Results Region of interest-based analyses showed that WMHs located particularly within the anterior thalamic radiation and forceps minor were inversely associated with both executive functioning and visuomotor speed, independent of total WMH volume. Memory was significantly associated with WMH volume in the forceps minor, independent of total WMH volume. An independent assumption-free voxel-wise analysis identified strategic voxels in these same tracts. Region of interest-based analyses showed that WMH volume within the anterior thalamic radiation explained 6.8% of variance in executive functioning, compared to 3.9% for total WMH volume; WMH volume within the forceps minor explained 4.6% of variance in visuomotor speed and 4.2% of variance in memory, compared to 1.8% and 1.3% respectively for total WMH volume. Conclusions Our findings identify the anterior thalamic radiation and forceps minor as strategic white matter tracts in which WMHs are most strongly associated with cognitive impairment in memory clinic patients with SVD. WMH volumes in individual tracts explained more variance in cognition than total WMH burden, emphasizing the importance of lesion location when addressing the functional consequences of WMHs. PMID:27824925
Estimating forestland area change from inventory data
Paul Van Deusen; Francis Roesch; Thomas Wigley
2013-01-01
Simple methods for estimating the proportion of land changing from forest to nonforest are developed. Variance estimators are derived to facilitate significance tests. A power analysis indicates that 400 inventory plots are required to reliably detect small changes in net or gross forest loss. This is an important result because forest certification programs may...
Exploratory Study of Spirituality and Psychosocial Growth in College Students
ERIC Educational Resources Information Center
Reymann, Linda S.; Fialkowski, Geraldine M.; Stewart-Sicking, Joseph A.
2015-01-01
This study examined spirituality, personality, and psychosocial growth among 216 students at a small university in Maryland. Results demonstrated that faith maturity predicted unique variance in purpose in life. There was a main effect observed for gender among faith scores, as well as an interaction effect between gender and year in school among…
Determinants of Physical Activity in Middle School Children.
ERIC Educational Resources Information Center
Trost, Stewart G.; Saunders, Ruth; Ward, Dianne S.
2002-01-01
Evaluated the theory of reasoned action (TRA) and theory of planned behavior (TPB) in predicting moderate-to-vigorous physical activity (MVPA) in sixth grade students. Student surveys on physical activity behavior and attitudes and measurement of MVPA indicated that the TRA and TPB accounted for only a small percentage of the variance in MVPA. (SM)
Using the Monte Carlo (MC) method, this paper derives arithmetic and geometric means and associated variances of the net capillary drive parameter, G, that appears in the Parlange infiltration model, as a function of soil texture and antecedent soil moisture content. App...
Omnibus Tests for Interactions in Repeated Measures Designs with Dichotomous Dependent Variables.
ERIC Educational Resources Information Center
Serlin, Ronald C.; Marascuilo, Leonard A.
When examining a repeated measures design with independent groups for a significant group by trial interaction, classical analysis of variance or multivariate procedures can be used if the assumptions underlying the tests are met. Neither procedure may be justified for designs with small sample sizes and dichotomous dependent variables. An omnibus…
Genetic architecture and biological basis for feed efficiency in dairy cattle
USDA-ARS?s Scientific Manuscript database
The genetic architecture of residual feed intake (RFI) and related traits was evaluated using a dataset of 2,894 cows. A Bayesian analysis estimated that markers accounted for 14% of the variance in RFI, and that RFI had considerable genetic variation. Effects of marker windows were small, but QTL p...
An Evaluation of Psychophysical Models of Auditory Change Perception
ERIC Educational Resources Information Center
Micheyl, Christophe; Kaernbach, Christian; Demany, Laurent
2008-01-01
In many psychophysical experiments, the participant's task is to detect small changes along a given stimulus dimension or to identify the direction (e.g., upward vs. downward) of such changes. The results of these experiments are traditionally analyzed with a constant-variance Gaussian (CVG) model or a high-threshold (HT) model. Here, the authors…
Bayesian Structural Equation Modeling: A More Flexible Representation of Substantive Theory
ERIC Educational Resources Information Center
Muthen, Bengt; Asparouhov, Tihomir
2012-01-01
This article proposes a new approach to factor analysis and structural equation modeling using Bayesian analysis. The new approach replaces parameter specifications of exact zeros with approximate zeros based on informative, small-variance priors. It is argued that this produces an analysis that better reflects substantive theories. The proposed…
Gender differences in cognitive development.
Ardila, Alfredo; Rosselli, Monica; Matute, Esmeralda; Inozemtseva, Olga
2011-07-01
The potential effect of gender on intellectual abilities remains controversial. The purpose of this research was to analyze gender differences in cognitive test performance among children from continuous age groups. For this purpose, the normative data from 7 domains of the newly developed neuropsychological test battery, the Evaluación Neuropsicológica Infantil [Child Neuropsychological Assessment] (Matute, Rosselli, Ardila, & Ostrosky-Solis, 2007), were analyzed. The sample included 788 monolingual children (350 boys, 438 girls) ages 5 to 16 years from Mexico and Colombia. Gender differences were observed in oral language (language expression and language comprehension), spatial abilities (recognition of pictures seen from different angles), and visual (Object Integration Test) and tactile perceptual tasks, with boys outperforming girls in most cases, except for the tactile tasks. Gender accounted for only a very small percentage of the variance (1%-3%). Gender x Age interactions were observed for the tactile tasks only. It was concluded that gender differences during cognitive development are minimal, appear in only a small number of tests, and account for only a low percentage of the score variance. PsycINFO Database Record (c) 2011 APA, all rights reserved
Variance decomposition in stochastic simulators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Le Maître, O. P., E-mail: olm@limsi.fr; Knio, O. M., E-mail: knio@duke.edu; Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
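A minimal sketch of the core idea, not the authors' algorithm: drive each reaction channel of a birth-death process (one of the models they mention) with its own independent random stream, then estimate the first-order variance contribution of one channel with the standard pick-freeze estimator. The rates, tau-leaping discretization, and sample sizes are my assumptions.

```python
# Sketch: channel-wise Sobol sensitivity of a birth-death simulator via
# independent per-channel random streams and pick-freeze resampling.
import numpy as np

def birth_death(seed_birth, seed_death, x0=50, k_birth=1.0, k_death=0.02,
                t_end=5.0, dt=0.01):
    """Tau-leaping simulation; each channel draws from its own stream."""
    rb, rd = np.random.default_rng(seed_birth), np.random.default_rng(seed_death)
    x = x0
    for _ in range(int(t_end / dt)):
        x += rb.poisson(k_birth * dt) - rd.poisson(k_death * x * dt)
        x = max(x, 0)
    return x

N = 4000
ss = np.random.SeedSequence(1)
a, b, b2 = (s.generate_state(N) for s in ss.spawn(3))

y   = np.array([birth_death(a[i], b[i])  for i in range(N)])
y_b = np.array([birth_death(a[i], b2[i]) for i in range(N)])  # birth stream frozen, death refreshed
s_birth = np.cov(y, y_b)[0, 1] / y.var()  # first-order index of the birth channel
print(f"S_birth ~ {s_birth:.2f}")
```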
The contribution of the mitochondrial genome to sex-specific fitness variance.
Smith, Shane R T; Connallon, Tim
2017-05-01
Maternal inheritance of mitochondrial DNA (mtDNA) facilitates the evolutionary accumulation of mutations with sex-biased fitness effects. Whereas maternal inheritance closely aligns mtDNA evolution with natural selection in females, it makes it indifferent to evolutionary changes that exclusively benefit males. The constrained response of mtDNA to selection in males can lead to asymmetries in the relative contributions of mitochondrial genes to female versus male fitness variation. Here, we examine the impact of genetic drift and the distribution of fitness effects (DFE) among mutations-including the correlation of mutant fitness effects between the sexes-on mitochondrial genetic variation for fitness. We show how drift, genetic correlations, and skewness of the DFE determine the relative contributions of mitochondrial genes to male versus female fitness variance. When mutant fitness effects are weakly correlated between the sexes, and the effective population size is large, mitochondrial genes should contribute much more to male than to female fitness variance. In contrast, high fitness correlations and small population sizes tend to equalize the contributions of mitochondrial genes to female versus male variance. We discuss implications of these results for the evolution of mitochondrial genome diversity and the genetic architecture of female and male fitness. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
NASA Astrophysics Data System (ADS)
Graham, Wendy; Destouni, Georgia; Demmy, George; Foussereau, Xavier
1998-07-01
The methodology developed in Destouni and Graham [Destouni, G., Graham, W.D., 1997. The influence of observation method on local concentration statistics in the subsurface. Water Resour. Res. 33 (4) 663-676.] for predicting locally measured concentration statistics for solute transport in heterogeneous porous media under saturated flow conditions is applied to the prediction of conservative nonreactive solute transport in the vadose zone where observations are obtained by soil coring. Exact analytical solutions are developed for both the mean and variance of solute concentrations measured in discrete soil cores using a simplified physical model for vadose-zone flow and solute transport. Theoretical results show that while the ensemble mean concentration is relatively insensitive to the length-scale of the measurement, predictions of the concentration variance are significantly impacted by the sampling interval. Results also show that accounting for vertical heterogeneity in the soil profile results in significantly less spreading in the mean and variance of the measured solute breakthrough curves, indicating that it is important to account for vertical heterogeneity even for relatively small travel distances. Model predictions for both the mean and variance of locally measured solute concentration, based on independently estimated model parameters, agree well with data from a field tracer test conducted in Manatee County, Florida.
NASA Technical Reports Server (NTRS)
Riddick, Stephen E.; Hinton, David A.
2000-01-01
A study has been performed on a computer code modeling an aircraft wake vortex spacing system during final approach. This code represents an initial engineering model of a system to calculate reduced approach separation criteria needed to increase airport productivity. This report evaluates model sensitivity toward various weather conditions (crosswind, crosswind variance, turbulent kinetic energy, and thermal gradient), code configurations (approach corridor option, and wake demise definition), and post-processing techniques (rounding of provided spacing values, and controller time variance).
Least Squares Solution of Small Sample Multiple-Master PSInSAR System
NASA Astrophysics Data System (ADS)
Zhang, Lei; Ding, Xiao Li; Lu, Zhong
2010-03-01
In this paper we propose a least squares based approach for multi-temporal SAR interferometry that allows the deformation rate to be estimated without phase unwrapping. The approach utilizes a series of multi-master wrapped differential interferograms with short baselines and focuses only on the arcs constructed by two nearby points at which there are no phase ambiguities. During the estimation an outlier detector is used to identify and remove the arcs with phase ambiguities, and the pseudoinverse of the a priori variance component matrix is taken as the weight of correlated observations in the model. The parameters at points can be obtained by an indirect adjustment model with constraints when several reference points are available. The proposed approach is verified by a set of simulated data.
Rough surface scattering based on facet model
NASA Technical Reports Server (NTRS)
Khamsi, H. R.; Fung, A. K.; Ulaby, F. T.
1974-01-01
A facet model for the radar return from bare ground was developed to calculate the radar cross section of bare ground and the effect of frequency averaging on the reduction of the variance of the return. It is shown that, by assuming that the distribution of the slope is Gaussian and that the distribution of the facet length takes the form of the positive side of a Gaussian distribution, the results are in good agreement with experimental data collected by an 8- to 18-GHz radar spectrometer system. It is also shown that information on the exact correlation length of the small structure on the ground is not necessary; an effective correlation length may be calculated based on the facet model and the wavelength of the incident wave.
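A toy illustration of the variance reduction from frequency averaging mentioned above: for speckle-like returns, averaging N independent frequency looks cuts the variance of the measured return by roughly 1/N. The exponential single-look power model is an assumption of this sketch, not part of the facet model.

```python
# Sketch: variance of a speckle-like radar return after averaging
# N independent frequency looks scales as 1/N.
import numpy as np

rng = np.random.default_rng(0)
single_look = rng.exponential(1.0, size=(100_000, 16))   # 16 independent looks per trial
for n in (1, 4, 16):
    avg = single_look[:, :n].mean(axis=1)
    print(f"N={n:2d}: variance of averaged return = {avg.var():.3f}")
```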
O'Shea, Thomas J.
1980-01-01
The tiny (3.1–3.8 g) vespertilionid bat Pipistrellus nanus was studied in palm-thatched roofs in Kenya from May 1973 to July 1974. Roosting social organization and related activities and behavior are described. ♂♂ held diurnal roosting territories where ♀♀ gathered in small and compositionally labile groups, attracted to the most vocal ♂♂. Annual variation in population-wide aspects of social organization follows predictable seasonal changes in climate and predator abundance. Variability between individuals follows a common mammalian pattern: high male competition for ♀♀, variance in presumed male reproductive success, and a mating system resembling one based on resource defense polygyny. Social organization in this population contrasts with that known from studies of other P. nanus populations.
A social systems model of nursing home use.
Wolf, R S
1978-01-01
Causal modeling (path analysis) was applied to data from the 39 mental health catchment areas of Massachusetts to analyze the effects of sociocultural and health-resource variables on long-term-care utilization. The variables chosen explained 53 percent of the variance of long-term-care use by persons 60 and older: 41 percent was explained by the sociocultural variables and 12 percent by the health-resource variables. With data adjusted for age, the major determinant of long-term-care use was ethnicity: less long-term care was used in areas with more persons who were foreign-born or had a foreign-born parent. The effects of other health resources (supply of primary care physicians and use of mental and general (short-term) hospitals) were small and negative. PMID:418027
An analysis of polygenes affecting wing shape on chromosome 2 in Drosophila melanogaster.
Weber, K; Eisman, R; Higgins, S; Morey, L; Patty, A; Tausek, M; Zeng, Z B
2001-01-01
Genetic effects on an index of wing shape on chromosome 2 of Drosophila melanogaster were mapped using isogenic recombinants with transposable element markers. At least 10 genes with small additive effects are dispersed evenly along the chromosome. Many interactions exist, with only small net effects in homozygous recombinants and little effect on phenotypic variance. Heterozygous chromosome segments show almost no dominance. Pleiotropic effects on leg shape are only minor. At first view, wing shape genes form a rather homogeneous class, but certain complexities remain unresolved. PMID:11729152
Genetic control of plant height in European winter wheat cultivars.
Würschum, Tobias; Langer, Simon M; Longin, C Friedrich H
2015-05-01
Plant height variation in European winter wheat cultivars is mainly controlled by the Rht-D1 and Rht-B1 semi-dwarfing genes, but also by other medium- or small-effect QTL and potentially epistatic QTL enabling fine adjustments of plant height. Plant height is an important trait in wheat (Triticum aestivum L.) breeding as it affects crop performance and thus yield and quality. The aim of this study was to investigate the genetic control of plant height in European winter wheat cultivars. To this end, a panel of 410 winter wheat varieties from across Europe was evaluated for plant height in multi-location field trials and genotyped for the candidate loci Rht-B1, Rht-D1, Rht8, Ppd-B1 copy number variation and Ppd-D1 as well as by a genotyping-by-sequencing approach yielding 23,371 markers with known map position. We found that Rht-D1 and Rht-B1 had the largest effects on plant height in this cultivar collection explaining 40.9 and 15.5% of the genotypic variance, respectively, while Ppd-D1 and Rht8 accounted for 3.0 and 2.0% of the variance, respectively. A genome-wide scan for marker-trait associations yielded two additional medium-effect QTL located on chromosomes 6A and 5B explaining 11.0 and 5.7% of the genotypic variance after the effects of the candidate loci were accounted for. In addition, we identified several small-effect QTL as well as epistatic QTL contributing to the genetic architecture of plant height. Taken together, our results show that the two Rht-1 semi-dwarfing genes are the major sources of variation in European winter wheat cultivars and that other small- or medium-effect QTL and potentially epistatic QTL enable fine adjustments in plant height.
Genung, Mark A; Fox, Jeremy; Williams, Neal M; Kremen, Claire; Ascher, John; Gibbs, Jason; Winfree, Rachael
2017-07-01
The relationship between biodiversity and the stability of ecosystem function is a fundamental question in community ecology, and hundreds of experiments have shown a positive relationship between species richness and the stability of ecosystem function. However, these experiments have rarely accounted for common ecological patterns, most notably skewed species abundance distributions and non-random extinction risks, making it difficult to know whether experimental results can be scaled up to larger, less manipulated systems. In contrast with the prolific body of experimental research, few studies have examined how species richness affects the stability of ecosystem services at more realistic, landscape scales. The paucity of these studies is due in part to a lack of analytical methods that are suitable for the correlative structure of ecological data. A recently developed method, based on the Price equation from evolutionary biology, helps resolve this knowledge gap by partitioning the effect of biodiversity into three components: richness, composition, and abundance. Here, we build on previous work and present the first derivation of the Price equation suitable for analyzing temporal variance of ecosystem services. We applied our new derivation to understand the temporal variance of crop pollination services in two study systems (watermelon and blueberry) in the mid-Atlantic United States. In both systems, but especially in the watermelon system, the stronger driver of temporal variance of ecosystem services was fluctuations in the abundance of common bee species, which were present at nearly all sites regardless of species richness. In contrast, temporal variance of ecosystem services was less affected by differences in species richness, because lost and gained species were rare. Thus, the findings from our more realistic landscapes differ qualitatively from the findings of biodiversity-stability experiments. © 2017 by the Ecological Society of America.
Arbuscular mycorrhizal fungal communities are phylogenetically clustered at small scales
Horn, Sebastian; Caruso, Tancredi; Verbruggen, Erik; Rillig, Matthias C; Hempel, Stefan
2014-01-01
Next-generation sequencing technologies with markers covering the full Glomeromycota phylum were used to uncover phylogenetic community structure of arbuscular mycorrhizal fungi (AMF) associated with Festuca brevipila. The study system was a semi-arid grassland with high plant diversity and a steep environmental gradient in pH, C, N, P and soil water content. The AMF community in roots and rhizosphere soil were analyzed separately and consisted of 74 distinct operational taxonomic units (OTUs) in total. Community-level variance partitioning showed that the role of environmental factors in determining AM species composition was marginal when controlling for spatial autocorrelation at multiple scales. Instead, phylogenetic distance and spatial distance were major correlates of AMF communities: OTUs that were more closely related (and which therefore may have similar traits) were more likely to co-occur. This pattern was insensitive to phylogenetic sampling breadth. Given the minor effects of the environment, we propose that at small scales closely related AMF positively associate through biotic factors such as plant-AMF filtering and interactions within the soil biota. PMID:24824667
GmDREB1 overexpression affects the expression of microRNAs in GM wheat seeds
Jiang, Qiyan; Sun, Xianjun; Niu, Fengjuan; Hu, Zheng; Chen, Rui; Zhang, Hui
2017-01-01
MicroRNAs (miRNAs) are small regulators of gene expression that act on many different molecular and biochemical processes in eukaryotes. To date, miRNAs have not been considered in the current evaluation system for GM crops. In this study, small RNAs from the dry seeds of a GM wheat line overexpressing GmDREB1 and non-GM wheat cultivars were investigated using deep sequencing technology and bioinformatic approaches. As a result, 23 differentially expressed miRNAs in dry seeds were identified and confirmed between GM wheat and a non-GM acceptor. Notably, more differentially expressed tae-miRNAs between non-GM wheat varieties were found, indicating that the degree of variance between non-GM cultivars was considerably higher than that induced by the transgenic event. Most of the target genes of these differentially expressed miRNAs between GM wheat and a non-GM acceptor were associated with abiotic stress, in accordance with the product concept of GM wheat in improving drought and salt tolerance. Our data provided useful information and insights into the evaluation of miRNA expression in edible GM crops. PMID:28459812
Mapping the Spread of Methamphetamine Abuse in California From 1995 to 2008
Ponicki, William R.; Remer, Lillian G.; Waller, Lance A.; Zhu, Li; Gorman, Dennis M.
2013-01-01
Objectives. From 1983 to 2008, the incidence of methamphetamine abuse and dependence (MA) presenting at hospitals in California increased 13-fold. We assessed whether this growth could be characterized as a drug epidemic. Methods. We geocoded MA discharges to residential zip codes from 1995 through 2008. We related discharges to population and environmental characteristics using Bayesian Poisson conditional autoregressive models, correcting for small area effects and spatial misalignment and enabling an assessment of contagion between areas. Results. MA incidence increased exponentially in 3 phases interrupted by implementation of laws limiting access to methamphetamine precursors. MA growth from 1999 through 2008 was 17% per year. MA was greatest in areas with larger White or Hispanic low-income populations, small household sizes, and good connections to highway systems. Spatial misalignment was a source of bias in estimated effects. Spatial autocorrelation was substantial, accounting for approximately 80% of error variance in the model. Conclusions. From 1995 through 2008, MA exhibited signs of growth and spatial spread characteristic of drug epidemics, spreading most rapidly through low-income White and Hispanic populations living outside dense urban areas. PMID:23078474
Yang, Yi; Tokita, Midori; Ishiguchi, Akira
2018-01-01
A number of studies revealed that our visual system can extract different types of summary statistics, such as the mean and variance, from sets of items. Although the extraction of such summary statistics has been studied well in isolation, the relationship between these statistics remains unclear. In this study, we explored this issue using an individual differences approach. Observers viewed illustrations of strawberries and lollypops varying in size or orientation and performed four tasks in a within-subject design, namely mean and variance discrimination tasks with size and orientation domains. We found that the performances in the mean and variance discrimination tasks were not correlated with each other and demonstrated that extractions of the mean and variance are mediated by different representation mechanisms. In addition, we tested the relationship between performances in size and orientation domains for each summary statistic (i.e. mean and variance) and examined whether each summary statistic has distinct processes across perceptual domains. The results illustrated that statistical summary representations of size and orientation may share a common mechanism for representing the mean and possibly for representing variance. Introspections for each observer performing the tasks were also examined and discussed.
Development of a technique for estimating noise covariances using multiple observers
NASA Technical Reports Server (NTRS)
Bundick, W. Thomas
1988-01-01
Friedland's technique for estimating the unknown noise variances of a linear system using multiple observers has been extended by developing a general solution for the estimates of the variances, developing the statistics (mean and standard deviation) of these estimates, and demonstrating the solution on two examples.
Analysis of Variance: What Is Your Statistical Software Actually Doing?
ERIC Educational Resources Information Center
Li, Jian; Lomax, Richard G.
2011-01-01
Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…
40 CFR 260.30 - Non-waste determinations and variances from classification as a solid waste.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 27 2012-07-01 2012-07-01 false Non-waste determinations and variances from classification as a solid waste. 260.30 Section 260.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking...
40 CFR 260.30 - Non-waste determinations and variances from classification as a solid waste.
Code of Federal Regulations, 2010 CFR
2010-07-01
... from classification as a solid waste. 260.30 Section 260.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions § 260.30 Non-waste determinations and variances from classification as a solid waste. In...
40 CFR 260.30 - Non-waste determinations and variances from classification as a solid waste.
Code of Federal Regulations, 2011 CFR
2011-07-01
... from classification as a solid waste. 260.30 Section 260.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions § 260.30 Non-waste determinations and variances from classification as a solid waste. In...
40 CFR 260.30 - Non-waste determinations and variances from classification as a solid waste.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 27 2013-07-01 2013-07-01 false Non-waste determinations and variances from classification as a solid waste. 260.30 Section 260.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking...
40 CFR 260.30 - Non-waste determinations and variances from classification as a solid waste.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 26 2014-07-01 2014-07-01 false Non-waste determinations and variances from classification as a solid waste. 260.30 Section 260.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking...
Ocean Salinity Variance and the Global Water Cycle.
NASA Astrophysics Data System (ADS)
Schmitt, R. W.
2012-12-01
Ocean salinity variance is increasing and appears to be an indicator of rapid change in the global water cycle. While the small terrestrial water cycle does not reveal distinct trends, in part due to strong manipulation by civilization, the much larger oceanic water cycle seems to have an excellent proxy for its intensity in the contrasts in sea surface salinity (SSS). Change in the water cycle is arguably the most important challenge facing mankind. But how well do we understand the oceanic response? Does the ocean amplify SSS change to make it a hyper-sensitive indicator of change in the global water cycle? An overview of the research challenges to the oceanographic community for understanding the dominant component of the global water cycle is provided.
Early Warning Signals for Abrupt Change Raise False Alarm During Sea Ice Loss
NASA Astrophysics Data System (ADS)
Wagner, T. J. W.; Eisenman, I.
2015-12-01
Uncovering universal early warning signals for critical transitions has become a coveted goal in diverse scientific disciplines, ranging from climate science to financial mathematics. There has been a flurry of recent research proposing such signals, with increasing autocorrelation and increasing variance being among the most widely discussed candidates. A number of studies have suggested that increasing autocorrelation alone may suffice to signal an impending transition, although some others have questioned this. Here, we consider variance and autocorrelation in the context of sea ice loss in an idealized model of the global climate system. The model features no bifurcation, nor increased rate of retreat, as the ice disappears. Nonetheless, the autocorrelation of summer sea ice area is found to increase with diminishing sea ice cover in a global warming scenario. The variance, by contrast, decreases. A simple physical mechanism is proposed to explain the occurrence of increasing autocorrelation but not variance in the model when there is no approaching bifurcation. Additionally, a similar mechanism is shown to allow an increase in both indicators with no physically attainable bifurcation. This implies that relying on autocorrelation and variance as early warning signals can raise false alarms in the climate system, warning of "tipping points" that are not actually there.
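A minimal sketch of the two early-warning indicators discussed above: lag-1 autocorrelation and variance computed in sliding windows of a time series. The AR(1) toy series with slowly increasing memory and the window length are illustrative choices, not from the study; the study's point is precisely that the two indicators need not move together.

```python
# Sketch: rolling lag-1 autocorrelation and variance as early-warning indicators.
import numpy as np

def rolling_indicators(x, win=100):
    """Return lag-1 autocorrelation and variance for each sliding window."""
    ac1, var = [], []
    for i in range(len(x) - win + 1):
        w = x[i:i + win] - x[i:i + win].mean()
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
        var.append(w.var())
    return np.array(ac1), np.array(var)

rng = np.random.default_rng(0)
# AR(1) process whose memory slowly increases, mimicking "critical slowing down"
phi = np.linspace(0.2, 0.95, 2000)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = phi[t] * x[t - 1] + rng.normal()

ac1, var = rolling_indicators(x)
print(f"autocorrelation: {ac1[0]:.2f} -> {ac1[-1]:.2f}; variance: {var[0]:.2f} -> {var[-1]:.2f}")
```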
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-12-01
Using VLF frequencies, transmitted by the Navy's network, for airborne remote sensing of the earth's electrical and magnetic characteristics was first considered by the United States Geological Survey (USGS) around the mid 1970s. The first VLF system was designed and developed by the USGS for installation and operation on a single-engine, fixed-wing aircraft used by the Branch of Geophysics for geophysical surveying. The system consisted of five channels. The two E-field channels used sensors consisting of a fixed vertical loaded dipole antenna with pre-amp mounted on top of the fuselage and a gyro-stabilized horizontal loaded dipole antenna with pre-amp mounted on a tail boom. The three-channel magnetic sensor consisted of three orthogonal coils mounted on the same gyro-stabilized platform as the horizontal E-field antenna. The main features of the VLF receiver were: narrow-bandwidth frequency selection using crystal filters, phase shifters for zeroing out system phase variances, phase-lock loops for generating real and quadrature gates, and synchronous detectors for generating real and quadrature outputs. In the mid 1990s the Branch of Geophysics designed and developed a two-channel E-field ground-portable VLF system. The system was built using state-of-the-art circuit components and new concepts in circuit architecture. Small size, light weight, low power, durability, and reliability were key considerations in the design of the instrument. The primary purpose of the instrument was collecting VLF data during ground surveys over small grid areas. Later the system was modified for installation on an Unmanned Airborne Vehicle (UAV). A series of three field trips were made to Easton, Maryland for testing and evaluating the system performance.
On the Design of Attitude-Heading Reference Systems Using the Allan Variance.
Hidalgo-Carrió, Javier; Arnold, Sascha; Poulakis, Pantelis
2016-04-01
The Allan variance is a method to characterize stochastic random processes. The technique was originally developed to characterize the stability of atomic clocks and has also been successfully applied to the characterization of inertial sensors. Inertial navigation systems (INS) can provide accurate results in a short time, which tend to rapidly degrade in longer time intervals. During the last decade, the performance of inertial sensors has significantly improved, particularly in terms of signal stability, mechanical robustness, and power consumption. The mass and volume of inertial sensors have also been significantly reduced, offering system-level design and accommodation advantages. This paper presents a complete methodology for the characterization and modeling of inertial sensors using the Allan variance, with direct application to navigation systems. Although the concept of sensor fusion is relatively straightforward, accurate characterization and sensor-information filtering is not a trivial task, yet they are essential for good performance. A complete and reproducible methodology utilizing the Allan variance, including all the intermediate steps, is described. An end-to-end (E2E) process for sensor-error characterization and modeling up to the final integration in the sensor-fusion scheme is explained in detail. The strength of this approach is demonstrated with representative tests on novel, high-grade inertial sensors. Experimental navigation results are presented from two distinct robotic applications: a planetary exploration rover prototype and an autonomous underwater vehicle (AUV).
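A compact sketch of the overlapping Allan variance computation used for inertial-sensor characterization, written directly from its textbook definition; the white-noise-plus-drift signal below is synthetic, not sensor data, and the sample rate and averaging windows are illustrative.

```python
# Sketch: overlapping Allan variance of a simulated gyro rate signal.
import numpy as np

def allan_variance(y, fs, taus_n):
    """Overlapping Allan variance of rate samples y at sample rate fs,
    for averaging windows of taus_n samples each."""
    theta = np.cumsum(y) / fs          # integrate rate to angle
    out = []
    for m in taus_n:
        d = theta[2 * m:] - 2 * theta[m:-m] + theta[:-2 * m]
        tau = m / fs
        out.append((tau, (d ** 2).mean() / (2 * tau ** 2)))
    return out

rng = np.random.default_rng(0)
fs = 100.0
# White noise plus a slow random-walk drift, mimicking a real sensor
y = rng.normal(0, 0.1, 200_000) + np.cumsum(rng.normal(0, 1e-5, 200_000))
for tau, av in allan_variance(y, fs, [10, 100, 1000, 10_000]):
    print(f"tau={tau:7.2f}s  Allan deviation={np.sqrt(av):.4e}")
```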
Predicting vacancy-mediated diffusion of interstitial solutes in α -Fe
NASA Astrophysics Data System (ADS)
Barouh, Caroline; Schuler, Thomas; Fu, Chu-Chun; Jourdan, Thomas
2015-09-01
Based on a systematic first-principles study, the lowest-energy migration mechanisms and barriers for small vacancy-solute clusters (VnXm) are determined in α-Fe for carbon, nitrogen, and oxygen, which are the most frequent interstitial solutes in several transition metals. We show that the dominant clusters present at thermal equilibrium (VX and VX2) have very reduced mobility compared to isolated solutes, while clusters composed of a solute bound to a small vacancy cluster may be significantly more mobile. In particular, V3X is found to be the fastest cluster for all three solutes. This result relies on the large diffusivity of the most compact trivacancy in a bcc lattice. Therefore, it may also be expected for interstitial solutes in other bcc metals. In the case of iron, we find that V3X may be as fast as or even more mobile than an interstitial solute. At variance with common assumptions, the trapping of interstitial solutes by vacancies does not necessarily decrease the mobility of the solute. Additionally, cluster dynamics simulations are performed considering a simple iron system with supersaturation of vacancies, in order to investigate the impacts of small mobile vacancy-solute clusters on properties such as the transport of solute and the cluster size distributions.
Seeb, Lisa W.; Seeb, James E.; Arismendi, Ivan; Hernández, Cristián E.; Gajardo, Gonzalo; Galleguillos, Ricardo; Cádiz, Maria I.; Musleh, Selim S.
2015-01-01
Knowledge about the genetic underpinnings of invasions—a theme addressed by invasion genetics as a discipline—is still scarce amid well documented ecological impacts of non-native species on ecosystems of Patagonia in South America. One of the most invasive species in Patagonia's freshwater systems and elsewhere is rainbow trout (Oncorhynchus mykiss). This species was introduced to Chile during the early twentieth century for stocking and promoting recreational fishing; during the late twentieth century it was reintroduced for farming purposes and is now naturalized. We used population- and individual-based inference from single nucleotide polymorphisms (SNPs) to illuminate three objectives related to the establishment and naturalization of Rainbow Trout in Lake Llanquihue. This lake has been intensively used for trout farming during the last three decades. Our results emanate from samples collected from five inlet streams over two seasons, winter and spring. First, we found that significant intra-population (temporal) genetic variance was greater than inter-population (spatial) genetic variance, downplaying the importance of spatial divergence during the process of naturalization. Allele frequency differences between cohorts, consistent with variation in fish length between spring and winter collections, might explain temporal genetic differences. Second, individual-based Bayesian clustering suggested that genetic structure within Lake Llanquihue was largely driven by putative farm propagules found at one single stream during spring, but not in winter. This suggests that farm broodstock might migrate upstream to breed during spring at that particular stream. It is unclear whether interbreeding has occurred between "pure" naturalized and farm trout in this and other streams. Third, estimates of the annual number of breeders (Nb) were below 73 in half of the collections, suggestive of genetically small and recently founded populations that might experience substantial genetic drift. Our results reinforce the notion that naturalized trout originated recently from a small yet genetically diverse source and that farm propagules might have played a significant role in the invasion of Rainbow Trout within a single lake with intensive trout farming. Our results also argue for proficient mitigation measures that include management of escapes and strategies to minimize unintentional releases from farm facilities. PMID:26544983
Response to selection while maximizing genetic variance in small populations.
Cervantes, Isabel; Gutiérrez, Juan Pablo; Meuwissen, Theo H E
2016-09-20
Rare breeds represent a valuable resource for future market demands. These populations are usually well-adapted, but their low census compromises the genetic diversity and future of these breeds. Since improvement of a breed for commercial traits may also confer higher probabilities of survival for the breed, it is important to achieve good responses to artificial selection. Therefore, efficient genetic management of these populations is essential to ensure that they respond adequately to genetic selection in possible future artificial selection scenarios. Scenarios that maximize the genetic variance within a single population could be a valuable option. The aim of this work was to study the effect of maximizing genetic variance on selection response and on the capacity of a population to adapt to a new environment/production system. We simulated a random scenario (A), a full-sib scenario (B), a scenario applying the maximum variance total (MVT) method (C), a MVT scenario with a restriction on increases in average inbreeding (D), a MVT scenario with a restriction on average individual increases in inbreeding (E), and a minimum coancestry scenario (F). Twenty replicates of each scenario were simulated for 100 generations, followed by 10 generations of selection. Effective population size was used to monitor the outcomes of these scenarios. Although the best response to selection was achieved in scenarios B and C, they were discarded because they are impractical. Scenario A was also discarded because of its low response to selection. Scenario D yielded less response to selection and a smaller effective population size than scenario E, for which response to selection was higher during early generations because of the moderately structured population. In scenario F, response to selection was slightly higher than in scenario E in the last generations. Application of MVT with a restriction on individual increases in inbreeding resulted in the largest response to selection during early generations, but if inbreeding depression is a concern, a minimum coancestry scenario is then a valuable alternative, in particular for a long-term response to selection.
NASA Astrophysics Data System (ADS)
Ramesh, N.; Cane, M. A.
2017-12-01
The complex coupled ocean-atmosphere system of the Tropical Pacific generates variability on timescales from intraseasonal to multidecadal. Pacific Decadal Variability (PDV) is among the key drivers of global climate, with effects on hydroclimate on several continents, marine ecosystems, and the rate of global mean surface temperature rise under anthropogenic greenhouse gas forcing. Predicting phase shifts in the PDV would therefore be highly useful. However, the small number of PDV phase shifts that have occurred in the observational record pose a substantial challenge to developing an understanding of the mechanisms that underlie decadal variability. In this study, we use a 100,000-year unforced simulation from an intermediate-complexity model of the Tropical Pacific region that has been shown to produce PDV comparable to that in the real world. We apply the Simplex Projection method to the NINO3 index from this model to reconstruct a shadow manifold that preserves the topology of the true attractor of this system. We find that the high- and low-variance phases of PDV emerge as a pair of regimes in a 3-dimensional state space, and that the transitions between decadal states lie in a highly predictable region of the attractor. We then use a random forest algorithm to develop a physical interpretation of the processes associated with these highly-predictable transitions. We find that transitions to low-variance states are most likely to occur approximately 2.5 years after an El Nino event, and that ocean-atmosphere variables in the southeastern Tropical Pacific play a crucial role in driving these transitions.
Compounding approach for univariate time series with nonstationary variances
NASA Astrophysics Data System (ADS)
Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich
2015-12-01
A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
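A small sketch of the compounding idea above (my construction, not the authors' code): split a nonstationary signal into windows, collect the local variances as the empirical parameter distribution, and observe that the aggregate statistics become heavy-tailed even though each window is Gaussian. The window length and variance distribution are illustrative.

```python
# Sketch: locally Gaussian windows with fluctuating variance yield a
# heavy-tailed (positive excess kurtosis) aggregate distribution.
import numpy as np

rng = np.random.default_rng(0)
win = 250
local_sd = np.exp(rng.normal(0, 0.5, size=400))             # one SD per window
x = np.concatenate([rng.normal(0, s, win) for s in local_sd])

local_var = x.reshape(-1, win).var(axis=1)                  # empirical parameter distribution
excess_kurtosis = ((x - x.mean()) ** 4).mean() / x.var() ** 2 - 3

print(f"mean local variance: {local_var.mean():.2f}")
print(f"aggregate excess kurtosis: {excess_kurtosis:.2f}  (0 for a fixed-variance Gaussian)")
```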
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Ming; Albrecht, Bruce A.; Ghate, Virendra P.
This study first illustrates the utility of using the Doppler spectrum width from millimetre-wavelength radar to calculate the energy dissipation rate, and then uses the energy dissipation rate to study turbulence structure in a continental stratocumulus cloud. It is shown that the turbulence kinetic energy dissipation rate calculated from the radar-measured Doppler spectrum width agrees well with that calculated from the Doppler velocity power spectrum. During the 16-h stratocumulus cloud event, the small-scale turbulence contributes 40% of the total velocity variance at cloud base, 50% at normalized cloud depth = 0.8 and 70% at cloud top, which suggests that small-scale turbulence plays a critical role near the cloud top where the entrainment and cloud-top radiative cooling act. The 16-h mean vertical integral length scale decreases from about 160 m at cloud base to 60 m at cloud top, and this signifies that the larger scale turbulence dominates around cloud base whereas the small-scale turbulence dominates around cloud top. The energy dissipation rate, total variance and squared spectrum width exhibit diurnal variations, but unlike marine stratocumulus they are high during the day and lowest around sunset at all levels; energy dissipation rates increase at night with the intensification of the cloud-top cooling. In the normalized coordinate system, the averaged coherent structure of updrafts is characterized by low energy dissipation rates in the updraft core, with higher energy dissipation rates surrounding the updraft core at the top and along the edges. In contrast, the energy dissipation rate is higher inside the downdraft core, indicating that the downdraft core is more turbulent. The turbulence around the updraft is weaker at night and stronger during the day; the opposite is true around the downdraft. This behaviour indicates that the turbulence in the downdraft has a diurnal cycle similar to that observed in marine stratocumulus, whereas the turbulence diurnal cycle in the updraft is reversed. For both updraft and downdraft, the maximum energy dissipation rate occurs at a cloud depth = 0.8 where the maximum reflectivity and air acceleration or deceleration are observed. Resolved turbulence dominates near cloud base whereas unresolved turbulence dominates near cloud top. Similar to the unresolved turbulence, the resolved turbulence described by the radial velocity variance is higher in the downdraft than in the updraft. The impact of the surface heating on the resolved turbulence in the updraft decreases with height and diminishes around the cloud top. In both updrafts and downdrafts, the resolved turbulence increases with height, reaches a maximum at cloud depth = 0.4 and then decreases to the cloud top; the resolved turbulence near cloud top, just as the unresolved turbulence, is mostly due to the cloud-top radiative cooling.
Post-stratified estimation: with-in strata and total sample size recommendations
James A. Westfall; Paul L. Patterson; John W. Coulston
2011-01-01
Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...
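A textbook-style sketch of post-stratified estimation (Cochran's standard approximation, not necessarily the estimator used in the paper), with made-up strata weights and plot data. The variance formula carries a second term that penalizes the randomness of within-strata sample sizes under post-stratification.

```python
# Sketch: post-stratified mean and approximate variance.
import numpy as np

def post_stratified(y, strata, weights):
    """y: plot values; strata: stratum label per plot;
    weights: known population proportion of each stratum."""
    n = len(y)
    mean = var = 0.0
    for h, W in weights.items():
        yh = y[strata == h]
        sh2 = yh.var(ddof=1)
        mean += W * yh.mean()
        # First term: proportional-allocation variance; second term:
        # penalty for random within-stratum sample sizes.
        var += W * sh2 / n + (1 - W) * sh2 / n ** 2
    return mean, var

rng = np.random.default_rng(0)
strata = rng.choice(["forest", "nonforest"], size=400, p=[0.7, 0.3])
y = np.where(strata == "forest", rng.normal(120, 30, 400), rng.normal(15, 10, 400))
m, v = post_stratified(y, strata, {"forest": 0.7, "nonforest": 0.3})
print(f"post-stratified mean = {m:.1f}, SE = {v ** 0.5:.2f}")
```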
A Generally Robust Approach for Testing Hypotheses and Setting Confidence Intervals for Effect Sizes
ERIC Educational Resources Information Center
Keselman, H. J.; Algina, James; Lix, Lisa M.; Wilcox, Rand R.; Deering, Kathleen N.
2008-01-01
Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. This article describes a framework for robust estimation and testing that uses trimmed means with an approximate degrees of…
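One concrete robust method in the spirit described above is Yuen's two-sample test, which replaces means with trimmed means and uses winsorized variances with Welch-type approximate degrees of freedom. The formulas below are the standard ones; the heavy-tailed toy data are mine, and this is offered as an illustration rather than the authors' exact framework.

```python
# Sketch: Yuen's two-sample test with 20% trimming.
import numpy as np
from scipy import stats

def yuen_test(x, y, trim=0.2):
    def pieces(a):
        a = np.sort(a)
        n = len(a)
        g = int(trim * n)                      # points trimmed per tail
        h = n - 2 * g                          # effective sample size
        w = np.clip(a, a[g], a[n - g - 1])     # winsorized sample
        d = (n - 1) * w.var(ddof=1) / (h * (h - 1))
        return stats.trim_mean(a, trim), d, h
    m1, d1, h1 = pieces(x)
    m2, d2, h2 = pieces(y)
    t = (m1 - m2) / np.sqrt(d1 + d2)
    df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
    return t, 2 * stats.t.sf(abs(t), df)

rng = np.random.default_rng(0)
x = rng.standard_t(3, 40) + 0.8                # heavy-tailed groups
y = rng.standard_t(3, 40)
t, p = yuen_test(x, y)
print(f"t = {t:.2f}, p = {p:.4f}")
```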
The Advantages of Using Planned Comparisons over Post Hoc Tests.
ERIC Educational Resources Information Center
Kuehne, Carolyn C.
There are advantages to using a priori or planned comparisons rather than omnibus multivariate analysis of variance (MANOVA) tests followed by post hoc or a posteriori testing. A small heuristic data set is used to illustrate these advantages. An omnibus MANOVA test was performed on the data followed by a post hoc test (discriminant analysis). A…
ERIC Educational Resources Information Center
Tipton, Elizabeth
2014-01-01
Replication studies allow for making comparisons and generalizations regarding the effectiveness of an intervention across different populations, versions of a treatment, settings and contexts, and outcomes. One method for making these comparisons across many replication studies is through the use of meta-analysis. A recent innovation in…
A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models
ERIC Educational Resources Information Center
Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.
2013-01-01
Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…
Heat and solute tracers: how do they compare in heterogeneous aquifers?
Irvine, Dylan J; Simmons, Craig T; Werner, Adrian D; Graf, Thomas
2015-04-01
A comparison of groundwater velocity in heterogeneous aquifers estimated from hydraulic methods, heat and solute tracers was made using numerical simulations. Aquifer heterogeneity was described by geostatistical properties of the Borden, Cape Cod, North Bay, and MADE aquifers. Both heat and solute tracers displayed little systematic under- or over-estimation in velocity relative to a hydraulic control. The worst cases were under-estimates of 6.63% for solute and 2.13% for the heat tracer. Both under- and over-estimation of velocity from the heat tracer relative to the solute tracer occurred. Differences between the estimates from the tracer methods increased as the mean velocity decreased, owing to differences in rates of molecular diffusion and thermal conduction. The variance in estimated velocity using all methods increased as the variance in log-hydraulic conductivity (K) and correlation length scales increased. The variance in velocity for each scenario was remarkably small when compared to σ²ln(K) for all methods tested. The largest variability identified was for the solute tracer where 95% of velocity estimates ranged by a factor of 19 in simulations where 95% of the K values varied by almost four orders of magnitude. For the same K-fields, this range was a factor of 11 for the heat tracer. The variance in estimated velocity was always lowest when using heat as a tracer. The study results suggest that a solute tracer will provide more understanding about the variance in velocity caused by aquifer heterogeneity and a heat tracer provides a better approximation of the mean velocity. © 2013, National Ground Water Association.
Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H
2015-11-30
We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
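A simplified reconstruction of the two-stage approach described above, with an ANOVA-type between-cluster variance estimate constrained to be non-negative; this is my sketch of the general idea, not the authors' code, and the cluster sizes, effect size, and variance components are illustrative.

```python
# Sketch: two-stage cluster-randomized analysis with cluster means
# weighted by the inverse of their estimated variance.
import numpy as np
from scipy import stats

def two_stage(cluster_data, arm):
    """cluster_data: list of 1-D arrays (one per cluster); arm: 0/1 per cluster."""
    m = np.array([len(c) for c in cluster_data])
    means = np.array([c.mean() for c in cluster_data])
    sw2 = np.mean([c.var(ddof=1) for c in cluster_data])        # within-cluster variance
    resid = means - np.array([means[arm == a].mean() for a in arm])
    sb2 = max(resid @ resid / (len(means) - 2) - sw2 * np.mean(1 / m), 0.0)
    w = 1.0 / (sb2 + sw2 / m)                                   # inverse variance of cluster means
    diff = (np.average(means[arm == 1], weights=w[arm == 1])
            - np.average(means[arm == 0], weights=w[arm == 0]))
    se = np.sqrt(1 / w[arm == 1].sum() + 1 / w[arm == 0].sum())
    t = diff / se
    df = len(cluster_data) - 2
    return t, 2 * stats.t.sf(abs(t), df)

rng = np.random.default_rng(0)
arm = np.repeat([0, 1], 8)                                      # 8 clusters per arm
clusters = [rng.normal(0.5 * a, 1.0, rng.integers(10, 40)) + rng.normal(0, 0.4)
            for a in arm]                                       # unbalanced cluster sizes
print(two_stage(clusters, arm))
```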
Genetic Characterization of Dog Personality Traits.
Ilska, Joanna; Haskell, Marie J; Blott, Sarah C; Sánchez-Molano, Enrique; Polgar, Zita; Lofgren, Sarah E; Clements, Dylan N; Wiener, Pamela
2017-06-01
The genetic architecture of behavioral traits in dogs is of great interest to owners, breeders, and professionals involved in animal welfare, as well as to scientists studying the genetics of animal (including human) behavior. The genetic component of dog behavior is supported by between-breed differences and some evidence of within-breed variation. However, it is a challenge to gather sufficiently large datasets to dissect the genetic basis of complex traits such as behavior, which are both time-consuming and logistically difficult to measure, and known to be influenced by nongenetic factors. In this study, we exploited the knowledge that owners have of their dogs to generate a large dataset of personality traits in Labrador Retrievers. While accounting for key environmental factors, we demonstrate that genetic variance can be detected for dog personality traits assessed using questionnaire data. We identified substantial genetic variance for several traits, including fetching tendency and fear of loud noises, while other traits revealed negligibly small heritabilities. Genetic correlations were also estimated between traits; however, due to fairly large SEs, only a handful of trait pairs yielded statistically significant estimates. Genomic analyses indicated that these traits are mainly polygenic, such that individual genomic regions have small effects, and suggested chromosomal associations for six of the traits. The polygenic nature of these traits is consistent with previous behavioral genetics studies in other species, for example in mouse, and confirms that large datasets are required to quantify the genetic variance and to identify the individual genes that influence behavioral traits. Copyright © 2017 by the Genetics Society of America.
Influence of outliers on accuracy estimation in genomic prediction in plant breeding.
Estaghvirou, Sidi Boubacar Ould; Ogutu, Joseph O; Piepho, Hans-Peter
2014-10-01
Outliers often pose problems in analyses of data in plant breeding, but their influence on the performance of methods for estimating predictive accuracy in genomic prediction studies has not yet been evaluated. Here, we evaluate the influence of outliers on the performance of methods for accuracy estimation in genomic prediction studies using simulation. We simulated 1000 datasets for each of 10 scenarios to evaluate the influence of outliers on the performance of seven methods for estimating accuracy. These scenarios are defined by the number of genotypes, marker effect variance, and magnitude of outliers. To mimic outliers, we added to one observation in each simulated dataset, in turn, 5-, 8-, and 10-times the error SD used to simulate small and large phenotypic datasets. The effect of outliers on accuracy estimation was evaluated by comparing deviations in the estimated and true accuracies for datasets with and without outliers. Outliers adversely influenced accuracy estimation, more so at small values of genetic variance or number of genotypes. A method for estimating heritability and predictive accuracy in plant breeding and another used to estimate accuracy in animal breeding were the most accurate and resistant to outliers across all scenarios and are therefore preferable for accuracy estimation in genomic prediction studies. The performances of the other five methods that use cross-validation were less consistent and varied widely across scenarios. The computing time for the methods increased as the size of outliers and sample size increased and the genetic variance decreased. Copyright © 2014 Ould Estaghvirou et al.
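A stripped-down sketch of the contamination design described above (not the authors' code): simulate true genetic values and phenotypes under an assumed heritability, add 5-, 8-, and 10-times the error SD to one observation, and watch a simple correlation-based accuracy estimate drift from its outlier-free value.

```python
# Sketch: effect of a single injected outlier on a correlation-based
# accuracy estimate. n and h2 are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, h2 = 100, 0.5                       # genotypes and heritability (assumptions)
g = rng.normal(0, np.sqrt(h2), n)      # true genetic values
e_sd = np.sqrt(1 - h2)
y = g + rng.normal(0, e_sd, n)         # phenotypes

print(f"no outlier: r(g, y) = {np.corrcoef(g, y)[0, 1]:.3f}")
for k in (5, 8, 10):
    y_out = y.copy()
    y_out[0] += k * e_sd               # contaminate one record, as in the study design
    print(f"outlier {k}x SD: r(g, y) = {np.corrcoef(g, y_out)[0, 1]:.3f}")
```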
Brauer, Chris J; Unmack, Peter J; Beheregaray, Luciano B
2017-12-01
Understanding whether small populations with low genetic diversity can respond to rapid environmental change via phenotypic plasticity is an outstanding research question in biology. RNA sequencing (RNA-seq) has recently provided the opportunity to examine variation in gene expression, a surrogate for phenotypic variation, in nonmodel species. We used a comparative RNA-seq approach to assess expression variation within and among adaptively divergent populations of a threatened freshwater fish, Nannoperca australis, found across a steep hydroclimatic gradient in the Murray-Darling Basin, Australia. These populations evolved under contrasting selective environments (e.g., dry/hot lowland; wet/cold upland) and represent opposite ends of the species' spectrum of genetic diversity and population size. We tested the hypothesis that environmental variation among isolated populations has driven the evolution of divergent expression at ecologically important genes using differential expression (DE) analysis and an ANOVA-based comparative phylogenetic expression variance and evolution model framework based on 27,425 de novo assembled transcripts. Additionally, we tested whether gene expression variance within populations was correlated with levels of standing genetic diversity. We identified 290 DE candidate transcripts, 33 transcripts with evidence for high expression plasticity, and 50 candidates for divergent selection on gene expression after accounting for phylogenetic structure. Variance in gene expression appeared unrelated to levels of genetic diversity. Functional annotation of the candidate transcripts revealed that variation in water quality is an important factor influencing expression variation for N. australis. Our findings suggest that gene expression variation can contribute to the evolutionary potential of small populations. © 2017 John Wiley & Sons Ltd.
Ghodrat, Malihe; Naji, Ali; Komaie-Moghaddam, Haniyeh; Podgornik, Rudolf
2015-05-07
We study the effective interaction mediated by strongly coupled Coulomb fluids between dielectric surfaces carrying quenched, random monopolar charges with equal mean and variance, both when the Coulomb fluid consists only of mobile multivalent counterions and when it consists of an asymmetric ionic mixture containing multivalent and monovalent (salt) ions in equilibrium with an aqueous bulk reservoir. We analyze the consequences that follow from the interplay between surface charge disorder, dielectric and salt image effects, and the strong electrostatic coupling that results from multivalent counterions on the distribution of these ions and the effective interaction pressure they mediate between the surfaces. In a dielectrically homogeneous system, we show that the multivalent counterions are attracted towards the surfaces with a singular, disorder-induced potential that diverges logarithmically on approach to the surfaces, creating a singular but integrable counterion density profile that exhibits an algebraic divergence at the surfaces with an exponent that depends on the surface charge (disorder) variance. This effect drives the system towards a state of lower thermal 'disorder', one that can be described by a renormalized temperature, exhibiting thus a remarkable antifragility. In the presence of an interfacial dielectric discontinuity, the singular behavior of counterion density at the surfaces is removed but multivalent counterions are still accumulated much more strongly close to randomly charged surfaces as compared with uniformly charged ones. The interaction pressure acting on the surfaces displays in general a highly non-monotonic behavior as a function of the inter-surface separation with a prominent regime of attraction at small to intermediate separations. This attraction is caused directly by the combined effects from charge disorder and strong coupling electrostatics of multivalent counterions, which dominate the surface-surface repulsion due to the (equal) mean charges on the two surfaces and the osmotic pressure of monovalent ions residing between them. These effects can be quite significant even with a small degree of surface charge disorder relative to the mean surface charge. The strong coupling, disorder-induced attraction is typically much stronger than the van der Waals interaction between the surfaces, especially within a range of several nanometers for the inter-surface separation, where such effects are predicted to be most pronounced.
Attitudes and use of pornography in the Norwegian population 2002.
Traeen, Bente; Spitznogle, Kristin; Beverfjord, Alexandra
2004-05-01
The purpose of this study was to describe and analyze use of pornographic material in a representative sample of adult Norwegians. The data collection was carried out by means of a standardized questionnaire administered via personal telephone interviews. Among the 90% of participants who reported ever having examined pornography, 76% reported examining a pornographic magazine, 67% had watched a pornographic film, and 24% had examined pornography on the Internet. Significant gender differences emerged in the reporting. The percentage of men and women who reported frequent use of pornography was small. We identified three dimensions of attitudes toward pornography: pornography as a means of sexual enhancement, pornography as a moral issue, and social climate. These attitude dimensions were included in path models as intermediating variables between demographic variables (age, gender, and level of education) and frequency of reading or watching pornographic materials. These models explained 36% of the variance in frequency of watching pornographic films, 35% of the variance in frequency of reading pornographic magazines, and 21% of the variance in frequency of watching pornography on the Internet.
Mullan, Barbara; Wong, Cara; Kothe, Emily
2013-03-01
The aim of this study was to investigate whether the theory of planned behaviour (TPB) with the addition of risk awareness could predict breakfast consumption in a sample of adolescents from the UK and Australia. It was hypothesised that the TPB variables of attitudes, subjective norm and perceived behavioural control (PBC) would significantly predict intentions, and that inclusion of risk perception would increase the proportion of variance explained. Secondly, it was hypothesised that intention and PBC would predict behaviour. Participants were recruited from secondary schools in Australia and the UK. A total of 613 participants completed the study (448 females, 165 males; mean age = 14 years ± 1.1). The TPB predicted 42.2% of the variance in intentions to eat breakfast. All variables significantly predicted intention, with PBC as the strongest component. The addition of risk made a small but significant contribution to the prediction of intention. Together, intention and PBC predicted 57.8% of the variance in breakfast consumption. Copyright © 2012 Elsevier Ltd. All rights reserved.
Yang, Jian; Bakshi, Andrew; Zhu, Zhihong; Hemani, Gibran; Vinkhuyzen, Anna A E; Lee, Sang Hong; Robinson, Matthew R; Perry, John R B; Nolte, Ilja M; van Vliet-Ostaptchouk, Jana V; Snieder, Harold; Esko, Tonu; Milani, Lili; Mägi, Reedik; Metspalu, Andres; Hamsten, Anders; Magnusson, Patrik K E; Pedersen, Nancy L; Ingelsson, Erik; Soranzo, Nicole; Keller, Matthew C; Wray, Naomi R; Goddard, Michael E; Visscher, Peter M
2015-10-01
We propose a method (GREML-LDMS) to estimate heritability for human complex traits in unrelated individuals using whole-genome sequencing data. We demonstrate using simulations based on whole-genome sequencing data that ∼97% and ∼68% of variation at common and rare variants, respectively, can be captured by imputation. Using the GREML-LDMS method, we estimate from 44,126 unrelated individuals that all ∼17 million imputed variants explain 56% (standard error (s.e.) = 2.3%) of variance for height and 27% (s.e. = 2.5%) of variance for body mass index (BMI), and we find evidence that height- and BMI-associated variants have been under natural selection. Considering the imperfect tagging of imputation and potential overestimation of heritability from previous family-based studies, heritability is likely to be 60-70% for height and 30-40% for BMI. Therefore, the missing heritability is small for both traits. For further discovery of genes associated with complex traits, a study design with SNP arrays followed by imputation is more cost-effective than whole-genome sequencing at current prices.
Genetic variations in the serotonergic system contribute to amygdala volume in humans.
Li, Jin; Chen, Chunhui; Wu, Karen; Zhang, Mingxia; Zhu, Bi; Chen, Chuansheng; Moyzis, Robert K; Dong, Qi
2015-01-01
The amygdala plays a critical role in emotion processing and psychiatric disorders associated with emotion dysfunction. Accumulating evidence suggests that amygdala structure is modulated by serotonin-related genes. However, there is a gap between the small contributions of single loci (less than 1%) and the reported 63-65% heritability of amygdala structure. To understand the "missing heritability," we systematically explored the contribution of serotonin genes on amygdala structure at the gene set level. The present study of 417 healthy Chinese volunteers examined 129 representative polymorphisms in genes from multiple biological mechanisms in the regulation of serotonin neurotransmission. A system-level approach using multiple regression analyses identified that nine SNPs collectively accounted for approximately 8% of the variance in amygdala volume. Permutation analyses showed that the probability of obtaining these findings by chance was low (p = 0.043, permuted for 1000 times). Findings showed that serotonin genes contribute moderately to individual differences in amygdala volume in a healthy Chinese sample. These results indicate that the system-level approach can help us to understand the genetic basis of a complex trait such as amygdala structure.
Dynamic cellular manufacturing system considering machine failure and workload balance
NASA Astrophysics Data System (ADS)
Rabbani, Masoud; Farrokhi-Asl, Hamed; Ravanbakhsh, Mohammad
2018-02-01
Machines are a key element in the production system and their failure causes irreparable effects in terms of cost and time. In this paper, a new multi-objective mathematical model for a dynamic cellular manufacturing system (DCMS) is provided with consideration of machine reliability and alternative process routes. In this dynamic model, we attempt to resolve the problem of integrated family (part/machine cell) formation as well as the operators' assignment to the cells. The first objective minimizes the costs associated with the DCMS. The second objective optimizes labor utilization and, finally, a minimum value of the variance of workload between different cells is obtained by the third objective function. Due to the NP-hard nature of the cellular manufacturing problem, the model is initially validated with the GAMS software on small-sized problems, and then solved by two well-known meta-heuristic methods, non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, on large-sized problems. Finally, the results of the two algorithms are compared with respect to five different comparison metrics.
NASA Astrophysics Data System (ADS)
Athanasiadis, Panos; Gualdi, Silvio; Scaife, Adam A.; Bellucci, Alessio; Hermanson, Leon; MacLachlan, Craig; Arribas, Alberto; Materia, Stefano; Borelli, Andrea
2014-05-01
Low-frequency variability is a fundamental component of the atmospheric circulation. Extratropical teleconnections, the occurrence of blocking and the slow modulation of the jet streams and storm tracks are all different aspects of low-frequency variability. Part of the latter is attributed to the chaotic nature of the atmosphere and is inherently unpredictable. On the other hand, primarily as a response to boundary forcings, tropospheric low-frequency variability includes components that are potentially predictable. Seasonal forecasting faces the difficult task of predicting these components. Particularly in the extratropics, the current generation of seasonal forecasting systems seems to be approaching this target by realistically initializing most components of the climate system, using higher resolution and utilizing large ensemble sizes. Two seasonal prediction systems (Met-Office GloSea and CMCC-SPS-v1.5) are analyzed in terms of their representation of different aspects of extratropical low-frequency variability. The current operational Met-Office system achieves unprecedentedly high scores in predicting the winter-mean phase of the North Atlantic Oscillation (NAO, corr. 0.74 at 500 hPa) and the Pacific-N. American pattern (PNA, corr. 0.82). The CMCC system, considering its small ensemble size and coarse resolution, also achieves good scores (0.42 for NAO, 0.51 for PNA). Despite these positive features, both models suffer from biases in low-frequency variance, particularly in the N. Atlantic. Consequently, it is found that their intrinsic variability patterns (sectoral EOFs) differ significantly from the observed, and the known teleconnections are underrepresented. Regarding the representation of N. hemisphere blocking, after bias correction both systems exhibit a realistic climatology of blocking frequency. In this assessment, instantaneous blocking and large-scale persistent blocking events are identified using daily geopotential height fields at 500 hPa. Given a documented strong relationship between high-latitude N. Atlantic blocking and the NAO, one would expect predictive skill for the seasonal frequency of blocking comparable to that of the NAO. However, this remains elusive. Future efforts should be in the direction of reducing model biases not only in the mean but also in variability (band-passed variances).
Yang, Yi; Tokita, Midori; Ishiguchi, Akira
2018-01-01
A number of studies revealed that our visual system can extract different types of summary statistics, such as the mean and variance, from sets of items. Although the extraction of such summary statistics has been studied well in isolation, the relationship between these statistics remains unclear. In this study, we explored this issue using an individual differences approach. Observers viewed illustrations of strawberries and lollypops varying in size or orientation and performed four tasks in a within-subject design, namely mean and variance discrimination tasks with size and orientation domains. We found that the performances in the mean and variance discrimination tasks were not correlated with each other and demonstrated that extractions of the mean and variance are mediated by different representation mechanisms. In addition, we tested the relationship between performances in size and orientation domains for each summary statistic (i.e. mean and variance) and examined whether each summary statistic has distinct processes across perceptual domains. The results illustrated that statistical summary representations of size and orientation may share a common mechanism for representing the mean and possibly for representing variance. Introspections for each observer performing the tasks were also examined and discussed. PMID:29399318
Applications of active adaptive noise control to jet engines
NASA Technical Reports Server (NTRS)
Shoureshi, Rahmat; Brackney, Larry
1993-01-01
During phase 2 research on the application of active noise control to jet engines, the development of multiple-input/multiple-output (MIMO) active adaptive noise control algorithms and acoustic/controls models for turbofan engines were considered. Specific goals for this research phase included: (1) implementation of a MIMO adaptive minimum variance active noise controller; and (2) turbofan engine model development. A minimum variance control law for adaptive active noise control has been developed, simulated, and implemented for single-input/single-output (SISO) systems. Since acoustic systems tend to be distributed, multiple sensors and actuators are more appropriate. As such, the SISO minimum variance controller was extended to the MIMO case. Simulation and experimental results are presented. A state-space model of a simplified gas turbine engine is developed using the bond graph technique. The model retains important system behavior, yet is of low enough order to be useful for controller design. Expansion of the model to include multiple stages and spools is also discussed.
NASA Astrophysics Data System (ADS)
Gutiérrez, J. M.; Primo, C.; Rodríguez, M. A.; Fernández, J.
2008-02-01
We present a novel approach to characterize and graphically represent the spatiotemporal evolution of ensembles using a simple diagram. To this aim we analyze the fluctuations obtained as differences between each member of the ensemble and the control. The lognormal character of these fluctuations suggests a characterization in terms of the first two moments of the logarithmic transformed values. On one hand, the mean is associated with the exponential growth in time. On the other hand, the variance accounts for the spatial correlation and localization of fluctuations. In this paper we introduce the MVL (Mean-Variance of Logarithms) diagram to intuitively represent the interplay and evolution of these two quantities. We show that this diagram uncovers useful information about the spatiotemporal dynamics of the ensemble. Some universal features of the diagram are also described, associated either with the nonlinear system or with the ensemble method and illustrated using both toy models and numerical weather prediction systems.
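A minimal sketch of how the two coordinates of an MVL diagram can be computed for a toy ensemble; the growth model and all parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble: perturbations around a control run, grown by a noisy
# multiplicative (i.e., exponential) process at each grid point.
n_members, n_gridpoints, n_steps = 20, 128, 50
control = np.zeros(n_gridpoints)
members = rng.normal(0.0, 1e-4, (n_members, n_gridpoints))

for t in range(n_steps):
    members *= np.exp(rng.normal(0.05, 0.2, (n_members, n_gridpoints)))
    logfluc = np.log(np.abs(members - control))  # log of fluctuation amplitude
    m, v = logfluc.mean(), logfluc.var()
    # (m, v) is one point of the MVL diagram: m tracks the exponential growth,
    # v tracks the spatial correlation/localization of the fluctuations.
    if t % 10 == 0:
        print(f"step {t:2d}: mean(log) = {m:7.3f}, var(log) = {v:6.3f}")
```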
High-performance broad-band spectroscopy for breast cancer risk assessment
NASA Astrophysics Data System (ADS)
Pawluczyk, Olga; Blackmore, Kristina; Dick, Samantha; Lilge, Lothar
2005-09-01
Medical diagnostics and screening are becoming increasingly demanding applications for spectroscopy. Although for many years the demand was satisfied with traditional spectrometers, analysis of complex biological samples has created a need for instruments capable of detecting small differences between samples. One such application is the measurement of absorbance of broad-spectrum illumination by breast tissue, in order to quantify breast tissue density. Studies have shown that breast cancer risk is closely associated with radiographic breast density. Using signal attenuation in transillumination spectroscopy in the 550-1100 nm spectral range to measure breast density has the potential to reduce the use of ionizing radiation, make the test accessible to younger women, lower the cost, and make the procedure more comfortable for the patient. In order to determine breast density, small spectral variances over a total attenuation of up to 8 OD have to be detected with the spectrophotometer. For this, a high-performance system has been developed. The system uses a Volume Phase Holographic (VPH) transmission grating, a 2D detector array for simultaneous registration of the whole spectrum with high signal-to-noise ratio, a dedicated optical system specifically optimized for spectroscopic applications, and many other improvements. A signal-to-noise ratio exceeding 50,000 for a single data acquisition eliminates the need for nitrogen-cooled detectors and provides sufficient information to predict breast tissue density. Current studies employing transillumination breast spectroscopy (TIBS) for breast cancer risk assessment and monitoring are described.
NASA Astrophysics Data System (ADS)
Yang, C.; Zhang, Y. K.; Liang, X.
2014-12-01
The damping effect of an unsaturated-saturated system on temporal and spatial variations of pressure head and specific flux was investigated. The variance and covariance of both pressure head and specific flux in such a system due to a white-noise infiltration were obtained by solving the moment equations of water flow in the system and verified with Monte Carlo simulations. It was found that both the pressure head and the specific flux in this case are temporally non-stationary. The variance is zero at early time, due to the deterministic initial condition used, then increases with time and approaches an asymptotic limit at late time. Both pressure head and specific flux are also non-stationary in space, since the variance decreases from source to sink. The unsaturated-saturated system behaves as a noise filter and damps both the pressure head and the specific flux, i.e., it reduces their variations and enhances their correlation. The effect is stronger in the upper unsaturated zone than in the lower unsaturated zone and the saturated zone. As a noise filter, the unsaturated-saturated system is mainly a low-pass filter, filtering out the high-frequency components in the time series of hydrological variables. The damping effect is much stronger in the unsaturated zone than in the saturated zone.
Software-assisted small bowel motility analysis using free-breathing MRI: feasibility study.
Bickelhaupt, Sebastian; Froehlich, Johannes M; Cattin, Roger; Raible, Stephan; Bouquet, Hanspeter; Bill, Urs; Patak, Michael A
2014-01-01
To validate a software prototype allowing for small bowel motility analysis in free breathing by comparing it to manual measurements. In all, 25 patients (15 male, 10 female; mean age 39 years) were included in this Institutional Review Board-approved, retrospective study. Magnetic resonance imaging (MRI) was performed on a 1.5T system after standardized preparation, acquiring motility sequences in free breathing over 69-84 seconds. Small bowel motility was analyzed manually and with the software. Functional parameters, measurement time, and reproducibility were compared using the coefficient of variance and paired Student's t-test. Correlation was analyzed using Pearson's correlation coefficient and linear regression. The 25 segments were analyzed twice both by hand and using the software with automatic breathing correction. All assessed parameters significantly correlated between the methods (P < 0.01), but the scattering of repeated measurements was significantly (P < 0.01) lower using the software (3.90%, standard deviation [SD] ± 5.69) than manual examination (9.77%, SD ± 11.08). The time needed was significantly less (P < 0.001) with the software (4.52 minutes, SD ± 1.58) than with manual measurement (17.48 minutes, SD ± 1.75). The use of the software provides reliable and faster small bowel motility measurements in free-breathing MRI compared to manual analyses. The new technique allows for analyses of prolonged sequences acquired in free breathing, improving the informative value of the examinations by amplifying the evaluable data. Copyright © 2013 Wiley Periodicals, Inc.
Contact interaction in a unitary ultracold Fermi gas
Pessoa, Renato; Gandolfi, Stefano; Vitiello, S. A.; ...
2015-12-16
An ultracold Fermi atomic gas at unitarity presents universal properties that in the dilute limit can be well described by a contact interaction. By employing a guiding function with correct boundary conditions and making simple modifications to the sampling procedure, we are able to calculate the properties of a true contact interaction with the diffusion Monte Carlo method. The results are obtained with small variances. Our calculations for the Bertsch and contact parameters are in excellent agreement with published experiments. The possibility of using a more faithful description of ultracold atomic gases can help uncover additional features of these systems. In addition, this work paves the way to performing quantum Monte Carlo calculations for other systems interacting with contact interactions, where a description using potentials with finite effective range might not be accurate.
GWAS of 126,559 Individuals Identifies Genetic Variants Associated with Educational Attainment
Rietveld, Cornelius A.; Medland, Sarah E.; Derringer, Jaime; Yang, Jian; Esko, Tõnu; Martin, Nicolas W.; Westra, Harm-Jan; Shakhbazov, Konstantin; Abdellaoui, Abdel; Agrawal, Arpana; Albrecht, Eva; Alizadeh, Behrooz Z.; Amin, Najaf; Barnard, John; Baumeister, Sebastian E.; Benke, Kelly S.; Bielak, Lawrence F.; Boatman, Jeffrey A.; Boyle, Patricia A.; Davies, Gail; de Leeuw, Christiaan; Eklund, Niina; Evans, Daniel S.; Ferhmann, Rudolf; Fischer, Krista; Gieger, Christian; Gjessing, Håkon K.; Hägg, Sara; Harris, Jennifer R.; Hayward, Caroline; Holzapfel, Christina; Ibrahim-Verbaas, Carla A.; Ingelsson, Erik; Jacobsson, Bo; Joshi, Peter K.; Jugessur, Astanand; Kaakinen, Marika; Kanoni, Stavroula; Karjalainen, Juha; Kolcic, Ivana; Kristiansson, Kati; Kutalik, Zoltán; Lahti, Jari; Lee, Sang H.; Lin, Peng; Lind, Penelope A.; Liu, Yongmei; Lohman, Kurt; Loitfelder, Marisa; McMahon, George; Vidal, Pedro Marques; Meirelles, Osorio; Milani, Lili; Myhre, Ronny; Nuotio, Marja-Liisa; Oldmeadow, Christopher J.; Petrovic, Katja E.; Peyrot, Wouter J.; Polašek, Ozren; Quaye, Lydia; Reinmaa, Eva; Rice, John P.; Rizzi, Thais S.; Schmidt, Helena; Schmidt, Reinhold; Smith, Albert V.; Smith, Jennifer A.; Tanaka, Toshiko; Terracciano, Antonio; van der Loos, Matthijs J.H.M.; Vitart, Veronique; Völzke, Henry; Wellmann, Jürgen; Yu, Lei; Zhao, Wei; Allik, Jüri; Attia, John R.; Bandinelli, Stefania; Bastardot, François; Beauchamp, Jonathan; Bennett, David A.; Berger, Klaus; Bierut, Laura J.; Boomsma, Dorret I.; Bültmann, Ute; Campbell, Harry; Chabris, Christopher F.; Cherkas, Lynn; Chung, Mina K.; Cucca, Francesco; de Andrade, Mariza; De Jager, Philip L.; De Neve, Jan-Emmanuel; Deary, Ian J.; Dedoussis, George V.; Deloukas, Panos; Dimitriou, Maria; Eiriksdottir, Gudny; Elderson, Martin F.; Eriksson, Johan G.; Evans, David M.; Faul, Jessica D.; Ferrucci, Luigi; Garcia, Melissa E.; Grönberg, Henrik; Gudnason, Vilmundur; Hall, Per; Harris, Juliette M.; Harris, Tamara B.; Hastie, Nicholas D.; Heath, Andrew C.; Hernandez, Dena G.; Hoffmann, Wolfgang; Hofman, Adriaan; Holle, Rolf; Holliday, Elizabeth G.; Hottenga, Jouke-Jan; Iacono, William G.; Illig, Thomas; Järvelin, Marjo-Riitta; Kähönen, Mika; Kaprio, Jaakko; Kirkpatrick, Robert M.; Kowgier, Matthew; Latvala, Antti; Launer, Lenore J.; Lawlor, Debbie A.; Lehtimäki, Terho; Li, Jingmei; Lichtenstein, Paul; Lichtner, Peter; Liewald, David C.; Madden, Pamela A.; Magnusson, Patrik K. E.; Mäkinen, Tomi E.; Masala, Marco; McGue, Matt; Metspalu, Andres; Mielck, Andreas; Miller, Michael B.; Montgomery, Grant W.; Mukherjee, Sutapa; Nyholt, Dale R.; Oostra, Ben A.; Palmer, Lyle J.; Palotie, Aarno; Penninx, Brenda; Perola, Markus; Peyser, Patricia A.; Preisig, Martin; Räikkönen, Katri; Raitakari, Olli T.; Realo, Anu; Ring, Susan M.; Ripatti, Samuli; Rivadeneira, Fernando; Rudan, Igor; Rustichini, Aldo; Salomaa, Veikko; Sarin, Antti-Pekka; Schlessinger, David; Scott, Rodney J.; Snieder, Harold; Pourcain, Beate St; Starr, John M.; Sul, Jae Hoon; Surakka, Ida; Svento, Rauli; Teumer, Alexander; Tiemeier, Henning; Rooij, Frank JAan; Van Wagoner, David R.; Vartiainen, Erkki; Viikari, Jorma; Vollenweider, Peter; Vonk, Judith M.; Waeber, Gérard; Weir, David R.; Wichmann, H.-Erich; Widen, Elisabeth; Willemsen, Gonneke; Wilson, James F.; Wright, Alan F.; Conley, Dalton; Davey-Smith, George; Franke, Lude; Groenen, Patrick J. F.; Hofman, Albert; Johannesson, Magnus; Kardia, Sharon L.R.; Krueger, Robert F.; Laibson, David; Martin, Nicholas G.; Meyer, Michelle N.; Posthuma, Danielle; Thurik, A. Roy; Timpson, Nicholas J.; Uitterlinden, André G.; van Duijn, Cornelia M.; Visscher, Peter M.; Benjamin, Daniel J.; Cesarini, David; Koellinger, Philipp D.
2013-01-01
A genome-wide association study of educational attainment was conducted in a discovery sample of 101,069 individuals and a replication sample of 25,490. Three independent SNPs are genome-wide significant (rs9320913, rs11584700, rs4851266), and all three replicate. Estimated effect sizes are small (R2 ≈ 0.02%), approximately 1 month of schooling per allele. A linear polygenic score from all measured SNPs accounts for ≈ 2% of the variance in both educational attainment and cognitive function. Genes in the region of the loci have previously been associated with health, cognitive, and central nervous system phenotypes, and bioinformatics analyses suggest the involvement of the anterior caudate nucleus. These findings provide promising candidate SNPs for follow-up work, and our effect size estimates can anchor power analyses in social-science genetics. PMID:23722424
Biometrics Foundation Documents
2009-01-01
a digital form. The quality of the sensor used has a significant impact on the recognition results. Example “sensors” could be digital cameras... Difficult to control sensor and channel variances that significantly impact capabilities... Not sufficiently distinctive for identification over large... expressions, hairstyle, glasses, hats, makeup, etc. have on face recognition systems? Minor variances, such as those mentioned, will have a moderate...
ERIC Educational Resources Information Center
Avenia-Tapper, Brianna; Llosa, Lorena
2015-01-01
This article addresses the issue of language-related construct-irrelevant variance on content area tests from the perspective of systemic functional linguistics. We propose that the construct relevance of language used in content area assessments, and consequent claims of construct-irrelevant variance and bias, should be determined according to…
2013-06-01
distribution is unlimited... Some major defense acquisition programs (MDAPs) are cancelled... VI. Practical Implications for Defense Acquisition... Contract Performance Report; C/SCSC, Cost/Schedule Control Systems Criteria; CV, Cost Variance; CV%, Cost Variance Percentage; DAE, Defense Acquisition...
Non-destructive X-ray Computed Tomography (XCT) Analysis of Sediment Variance in Marine Cores
NASA Astrophysics Data System (ADS)
Oti, E.; Polyak, L. V.; Dipre, G.; Sawyer, D.; Cook, A.
2015-12-01
Benthic activity within marine sediments can alter the physical properties of the sediment as well as indicate nutrient flux and ocean temperatures. We examine burrowing features in sediment cores from the western Arctic Ocean collected during the 2005 Healy-Oden TransArctic Expedition (HOTRAX) and from the Gulf of Mexico Integrated Ocean Drilling Program (IODP) Expedition 308. While traditional methods for studying bioturbation require physical dissection of the cores, we assess burrowing using an X-ray computed tomography (XCT) scanner. XCT noninvasively images the sediment cores in three dimensions and produces density sensitive images suitable for quantitative analysis. XCT units are recorded as Hounsfield Units (HU), where -999 is air, 0 is water, and 4000-5000 would be a higher density mineral, such as pyrite. We rely on the fundamental assumption that sediments are deposited horizontally, and we analyze the variance over each flat-lying slice. The variance describes the spread of pixel values over a slice. When sediments are reworked, drawing higher and lower density matrix into a layer, the variance increases. Examples of this can be seen in two slices in core 19H-3A from Site U1324 of IODP Expedition 308. The first slice, located 165.6 meters below sea floor consists of relatively undisturbed sediment. Because of this, the majority of the sediment values fall between 1406 and 1497 HU, thus giving the slice a comparatively small variance of 819.7. The second slice, located 166.1 meters below sea floor, features a lower density sediment matrix disturbed by burrow tubes and the inclusion of a high density mineral. As a result, the Hounsfield Units have a larger variance of 1,197.5, which is a result of sediment matrix values that range from 1220 to 1260 HU, the high-density mineral value of 1920 HU and the burrow tubes that range from 1300 to 1410 HU. Analyzing this variance allows us to observe changes in the sediment matrix and more specifically capture where, and to what extent, the burrow tubes deviate from the sediment matrix. Future research will correlate changes in variance due to bioturbation to other features indicating ocean temperatures and nutrient flux, such as foraminifera counts and oxygen isotope data.
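A minimal Python sketch of the per-slice variance statistic described above, using a synthetic HU volume whose layer values are chosen to echo the numbers in the abstract (not real core data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic XCT volume in Hounsfield Units: depth x rows x cols, flat-lying layers.
core = rng.normal(1450.0, 20.0, (100, 64, 64))               # undisturbed matrix
core[60, :16, :16] = 1920.0                                  # high-density mineral
core[60, 32:48, 32:48] = rng.normal(1350.0, 30.0, (16, 16))  # burrow-tube fill

# Variance over each flat-lying slice; reworked slices stand out.
slice_var = core.reshape(core.shape[0], -1).var(axis=1)
print(f"undisturbed slice variance: {slice_var[10]:9.1f}")
print(f"bioturbated slice variance: {slice_var[60]:9.1f}")
```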
Measuring kinetics of complex single ion channel data using mean-variance histograms.
Patlak, J B
1993-07-01
The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state; open channel noise and fast flickers to other states were present, as were a substantial number of subconductance states. "Standard" half-amplitude threshold analysis of these data produces open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance histogram technique provided a more credible analysis of the open, closed, and subconductance times for the patch. I also show that the method produces accurate results on simulated data in a wide variety of conditions, whereas the half-amplitude method, when applied to complex simulated data, shows the same errors as were apparent in the real data. The utility and the limitations of this new method are discussed.
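A minimal sketch of the histogram construction itself, on a toy two-level channel record; the window width, current levels, and noise are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy single-channel record: closed (0 pA) and open (-2 pA) dwells plus noise.
levels = np.repeat([0.0, -2.0, 0.0, -2.0, 0.0], [400, 150, 300, 80, 400])
trace = levels + rng.normal(0.0, 0.25, levels.size)

N = 10  # window width (number of consecutive samples)
win = np.lib.stride_tricks.sliding_window_view(trace, N)
means, variances = win.mean(axis=1), win.var(axis=1)   # one pair per window position

# Two-dimensional mean-variance histogram; defined current levels appear
# as high-count, low-variance regions.
hist, mean_edges, var_edges = np.histogram2d(means, variances, bins=50)
print(hist.shape, means.size)
```

Repeating the count of low-variance events for a range of window widths N, as the abstract describes, then recovers the dwell-time time constants.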
The effect of model uncertainty on some optimal routing problems
NASA Technical Reports Server (NTRS)
Mohanty, Bibhu; Cassandras, Christos G.
1991-01-01
The effect of model uncertainties on optimal routing in a system of parallel queues is examined. The uncertainty arises in modeling the service time distribution for the customers (jobs, packets) to be served. For a Poisson arrival process and Bernoulli routing, the optimal mean system delay generally depends on the variance of this distribution. However, as the input traffic load approaches the system capacity the optimal routing assignment and corresponding mean system delay are shown to converge to a variance-invariant point. The implications of these results are examined in the context of gradient-based routing algorithms. An example of a model-independent algorithm using online gradient estimation is also included.
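For background on why the mean delay depends on the service-time variance at all, one standard queueing result (not derived in the paper, which treats routing to parallel queues) is the Pollaczek-Khinchine formula for the mean waiting time in an M/G/1 queue with arrival rate lambda and service time S:

```latex
W \;=\; \frac{\lambda\,\mathbb{E}[S^2]}{2\,(1-\rho)}
  \;=\; \frac{\lambda\left(\operatorname{Var}(S)+\mathbb{E}[S]^2\right)}{2\,(1-\lambda\,\mathbb{E}[S])},
\qquad \rho = \lambda\,\mathbb{E}[S] < 1 .
```

The waiting time grows linearly in Var(S) at fixed load, which is the sensitivity to model uncertainty the abstract refers to; the paper's result is that the optimal routing assignment nevertheless converges to a variance-invariant point as the load approaches capacity.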
LMI-Based Fuzzy Optimal Variance Control of Airfoil Model Subject to Input Constraints
NASA Technical Reports Server (NTRS)
Swei, Sean S.M.; Ayoubi, Mohammad A.
2017-01-01
This paper presents a study of fuzzy optimal variance control problem for dynamical systems subject to actuator amplitude and rate constraints. Using Takagi-Sugeno fuzzy modeling and dynamic Parallel Distributed Compensation technique, the stability and the constraints can be cast as a multi-objective optimization problem in the form of Linear Matrix Inequalities. By utilizing the formulations and solutions for the input and output variance constraint problems, we develop a fuzzy full-state feedback controller. The stability and performance of the proposed controller is demonstrated through its application to the airfoil flutter suppression.
Lachowiec, Jennifer; Shen, Xia; Queitsch, Christine; Carlborg, Örjan
2015-01-01
Efforts to identify loci underlying complex traits generally assume that most genetic variance is additive. Here, we examined the genetics of Arabidopsis thaliana root length and found that the genomic narrow-sense heritability for this trait in the examined population was statistically zero. The low amount of additive genetic variance that could be captured by the genome-wide genotypes likely explains why no associations to root length could be found using standard additive-model-based genome-wide association (GWA) approaches. However, as the broad-sense heritability for root length was significantly larger, and primarily due to epistasis, we also performed an epistatic GWA analysis to map loci contributing to the epistatic genetic variance. Four interacting pairs of loci were revealed, involving seven chromosomal loci that passed a standard multiple-testing corrected significance threshold. The genotype-phenotype maps for these pairs revealed epistasis that cancelled out the additive genetic variance, explaining why these loci were not detected in the additive GWA analysis. Small population sizes, such as in our experiment, increase the risk of identifying false epistatic interactions due to testing for associations with very large numbers of multi-marker genotypes in few phenotyped individuals. Therefore, we estimated the false-positive risk using a new statistical approach that suggested half of the associated pairs to be true positive associations. Our experimental evaluation of candidate genes within the seven associated loci suggests that this estimate is conservative; we identified functional candidate genes that affected root development in four loci that were part of three of the pairs. The statistical epistatic analyses were thus indispensable for confirming known, and identifying new, candidate genes for root length in this population of wild-collected A. thaliana accessions. We also illustrate how epistatic cancellation of the additive genetic variance explains the insignificant narrow-sense and significant broad-sense heritability by using a combination of careful statistical epistatic analyses and functional genetic experiments.
Sun, Chuanyu; VanRaden, Paul M.; Cole, John B.; O'Connell, Jeffrey R.
2014-01-01
Dominance may be an important source of non-additive genetic variance for many traits of dairy cattle. However, nearly all prediction models for dairy cattle have included only additive effects because of the limited number of cows with both genotypes and phenotypes. The role of dominance in the Holstein and Jersey breeds was investigated for eight traits: milk, fat, and protein yields; productive life; daughter pregnancy rate; somatic cell score; fat percent and protein percent. Additive and dominance variance components were estimated and then used to estimate additive and dominance effects of single nucleotide polymorphisms (SNPs). The predictive abilities of three models with both additive and dominance effects and a model with additive effects only were assessed using ten-fold cross-validation. One procedure estimated dominance values, and another estimated dominance deviations; calculation of the dominance relationship matrix was different for the two methods. The third approach enlarged the dataset by including cows with genotype probabilities derived using genotyped ancestors. For yield traits, dominance variance accounted for 5 and 7% of total variance for Holsteins and Jerseys, respectively; using dominance deviations resulted in smaller dominance and larger additive variance estimates. For non-yield traits, dominance variances were very small for both breeds. For yield traits, including additive and dominance effects fit the data better than including only additive effects; average correlations between estimated genetic effects and phenotypes showed that prediction accuracy increased when both effects rather than just additive effects were included. No corresponding gains in prediction ability were found for non-yield traits. Including cows with derived genotype probabilities from genotyped ancestors did not improve prediction accuracy. The largest additive effects were located on chromosome 14 near DGAT1 for yield traits for both breeds; those SNPs also showed the largest dominance effects for fat yield (both breeds) as well as for Holstein milk yield. PMID:25084281
Dimensionality and noise in energy selective x-ray imaging
Alvarez, Robert E.
2013-01-01
Purpose: To develop and test a method to quantify the effect of dimensionality on the noise in energy selective x-ray imaging. Methods: The Cramér-Rao lower bound (CRLB), a universal lower limit of the covariance of any unbiased estimator, is used to quantify the noise. It is shown that increasing dimensionality always increases, or at best leaves the same, the variance. An analytic formula for the increase in variance in an energy selective x-ray system is derived. The formula is used to gain insight into the dependence of the increase in variance on the properties of the additional basis functions, the measurement noise covariance, and the source spectrum. The formula is also used with computer simulations to quantify the dependence of the additional variance on these factors. Simulated images of an object with three materials are used to demonstrate the trade-off of increased information with dimensionality and noise. The images are computed from energy selective data with a maximum likelihood estimator. Results: The increase in variance depends most importantly on the dimension and on the properties of the additional basis functions. With the attenuation coefficients of cortical bone, soft tissue, and adipose tissue as the basis functions, the increase in variance of the bone component from two to three dimensions is 1.4 × 10³. With the soft tissue component, it is 2.7 × 10⁴. If the attenuation coefficient of a high atomic number contrast agent is used as the third basis function, there is only a slight increase in the variance from two to three basis functions, 1.03 and 7.4 for the bone and soft tissue components, respectively. The changes in spectrum shape with beam hardening also have a substantial effect. They increase the variance by a factor of approximately 200 for the bone component and 220 for the soft tissue component as the soft tissue object thickness increases from 1 to 30 cm. Decreasing the energy resolution of the detectors increases the variance of the bone component markedly with three dimension processing, approximately a factor of 25 as the resolution decreases from 100 to 3 bins. The increase with two dimension processing for adipose tissue is a factor of two and with the contrast agent as the third material for two or three dimensions is also a factor of two for both components. The simulated images show that a maximum likelihood estimator can be used to process energy selective x-ray data to produce images with noise close to the CRLB. Conclusions: The method presented can be used to compute the effects of the object attenuation coefficients and the x-ray system properties on the relationship of dimensionality and noise in energy selective x-ray imaging systems. PMID:24320442
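A minimal numeric sketch of the CRLB comparison across dimensions, for a linearized measurement model; the matrices below are random stand-ins, not a calibrated spectral model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Linearized energy-selective model: measurements m = B a + noise, where B maps
# basis-material amounts a to expected signals in n_bins energy bins.
n_bins = 5
B2 = rng.normal(0.0, 1.0, (n_bins, 2))                    # two basis functions
B3 = np.hstack([B2, rng.normal(0.0, 1.0, (n_bins, 1))])   # add a third
R = np.diag(rng.uniform(0.5, 2.0, n_bins))                # measurement noise covariance

def crlb(B, R):
    # CRLB for unbiased estimators: inverse Fisher information (B^T R^-1 B)^-1
    return np.linalg.inv(B.T @ np.linalg.inv(R) @ B)

v2 = np.diag(crlb(B2, R))
v3 = np.diag(crlb(B3, R))[:2]
print("2-basis variances:", v2.round(3))  # adding a basis function can only
print("3-basis variances:", v3.round(3))  # increase (or preserve) these
```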
Use of inequality constrained least squares estimation in small area estimation
NASA Astrophysics Data System (ADS)
Abeygunawardana, R. A. B.; Wickremasinghe, W. N.
2017-05-01
Traditional surveys provide estimates that are based only on the sample observations collected for the population characteristic of interest. However, these estimates may have unacceptably large variance for certain domains. Small Area Estimation (SAE) deals with determining precise and accurate estimates for population characteristics of interest for such domains. SAE usually uses least squares or maximum likelihood procedures incorporating prior information and current survey data. Many available methods in SAE use constraints in equality form; however, there are practical situations where certain inequality restrictions on model parameters are more realistic. When the estimation method is least squares, such restrictions lead to Inequality Constrained Least Squares (ICLS) estimates. In this study, the ICLS estimation procedure is applied to many proposed small area estimates.
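A minimal sketch of an ICLS fit versus ordinary least squares, using SciPy's bounded linear least-squares solver; the model and the non-negativity restriction are illustrative, not the paper's small-area model:

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(5)

# Toy linear model y = X beta + e with prior knowledge that beta >= 0
# (an inequality restriction on the model parameters).
X = rng.normal(size=(40, 3))
beta_true = np.array([0.8, 0.0, 1.5])
y = X @ beta_true + rng.normal(0.0, 0.3, 40)

ols = np.linalg.lstsq(X, y, rcond=None)[0]        # unconstrained least squares
icls = lsq_linear(X, y, bounds=(0.0, np.inf)).x   # ICLS: coefficients >= 0
print("OLS :", ols.round(3))
print("ICLS:", icls.round(3))
```

When the unconstrained estimate of a truly-zero coefficient goes negative, the ICLS solution keeps it at the boundary, which is the gain in realism the abstract points to.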
Weak Lensing by Large-Scale Structure: A Dark Matter Halo Approach.
Cooray; Hu; Miralda-Escudé
2000-05-20
Weak gravitational lensing observations probe the spectrum and evolution of density fluctuations and the cosmological parameters that govern them, but they are currently limited to small fields and subject to selection biases. We show how the expected signal from large-scale structure arises from the contributions from and correlations between individual halos. We determine the convergence power spectrum as a function of the maximum halo mass and so provide the means to interpret results from surveys that lack high-mass halos either through selection criteria or small fields. Since shot noise from rare massive halos is mainly responsible for the sample variance below 10′, our method should aid our ability to extract cosmological information from small fields.
Are the Stress Drops of Small Earthquakes Good Predictors of the Stress Drops of Larger Earthquakes?
NASA Astrophysics Data System (ADS)
Hardebeck, J.
2017-12-01
Uncertainty in PSHA could be reduced through better estimates of stress drop for possible future large earthquakes. Studies of small earthquakes find spatial variability in stress drop; if large earthquakes have similar spatial patterns, their stress drops may be better predicted using the stress drops of small local events. This regionalization implies the variance with respect to the local mean stress drop may be smaller than the variance with respect to the global mean. I test this idea using the Shearer et al. (2006) stress drop catalog for M1.5-3.1 events in southern California. I apply quality control (Hauksson, 2015) and remove near-field aftershocks (Wooddell & Abrahamson, 2014). The standard deviation of the distribution of the log10 stress drop is reduced from 0.45 (factor of 3) to 0.31 (factor of 2) by normalizing each event's stress drop by the local mean. I explore whether a similar variance reduction is possible when using the Shearer catalog to predict stress drops of larger southern California events. For catalogs of moderate-sized events (e.g. Kanamori, 1993; Mayeda & Walter, 1996; Boyd, 2017), normalizing by the Shearer catalog's local mean stress drop does not reduce the standard deviation compared to the unmodified stress drops. I compile stress drops of larger events from the literature, and identify 15 M5.5-7.5 earthquakes with at least three estimates. Because of the wide range of stress drop estimates for each event, and the different techniques and assumptions, it is difficult to assign a single stress drop value to each event. Instead, I compare the distributions of stress drop estimates for pairs of events, and test whether the means of the distributions are statistically significantly different. The events divide into 3 categories: low, medium, and high stress drop, with significant differences in mean stress drop between events in the low and the high stress drop categories. I test whether the spatial patterns of the Shearer catalog stress drops can predict the categories of the 15 events. I find that they cannot, rather the large event stress drops are uncorrelated with the local mean stress drop from the Shearer catalog. These results imply that the regionalization of stress drops of small events does not extend to the larger events, at least with current standard techniques of stress drop estimation.
ERIC Educational Resources Information Center
O'Neil, Maya Elin; McWhirter, Ellen Hawley; Cerezo, Alison
2008-01-01
Effective practices for career counseling with gender variant individuals have yet to be identified for reasons that may include perceptions that the population is too small to warrant in-depth research, lack of funding for such efforts, and practitioners' lack of training and experience with transgender concerns. In this article, we describe the…
Taylor’s Law of Temporal Fluctuation Scaling in Stock Illiquidity
NASA Astrophysics Data System (ADS)
Cai, Qing; Xu, Hai-Chuan; Zhou, Wei-Xing
2016-08-01
Taylor’s law of temporal fluctuation scaling, variance ~ a(mean)^b, is ubiquitous in natural and social sciences. We report for the first time convincing evidence of a solid temporal fluctuation scaling law in stock illiquidity by investigating the mean-variance relationship of the high-frequency illiquidity of almost all stocks traded on the Shanghai Stock Exchange (SHSE) and the Shenzhen Stock Exchange (SZSE) during the period from 1999 to 2011. Taylor’s law holds for A-share markets (SZSE Main Board, SZSE Small & Medium Enterprise Board, SZSE Second Board, and SHSE Main Board) and B-share markets (SZSE B-share and SHSE B-share). We find that the scaling exponent b is greater than 2 for the A-share markets and less than 2 for the B-share markets. We further unveil that Taylor’s law holds for stocks in 17 industry categories, in 28 industrial sectors and in 31 provinces and direct-controlled municipalities, with the majority of scaling exponents b ∈ (2, 3). We also investigate the Δt-min illiquidity and find that the scaling exponent b(Δt) increases logarithmically for small Δt values and then decreases quickly to a stable level.
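A minimal sketch of how the scaling exponent b is estimated in practice: Taylor's law is linear on log-log axes, so b is the slope of log variance against log mean. The data below are synthetic with a known exponent; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic panel: each "stock" has illiquidity values whose variance
# follows variance = a * mean**b.
a, b = 0.5, 2.4
means = 10 ** rng.uniform(-2, 1, 200)
samples = [rng.normal(m, np.sqrt(a * m**b), 500) for m in means]

log_mean = np.log10([s.mean() for s in samples])
log_var = np.log10([s.var() for s in samples])

slope, intercept = np.polyfit(log_mean, log_var, 1)  # slope estimates b
print(f"estimated b = {slope:.2f}, estimated a = {10**intercept:.2f}")
```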
Bias correction for estimated QTL effects using the penalized maximum likelihood method.
Zhang, J; Yue, C; Zhang, Y-M
2012-04-01
A penalized maximum likelihood method has been proposed as an important approach to the detection of epistatic quantitative trait loci (QTL). However, this approach is not optimal in two special situations: (1) closely linked QTL with effects in opposite directions and (2) small-effect QTL, because the method produces downwardly biased estimates of QTL effects. The present study aims to correct the bias by using correction coefficients and shifting from the use of a uniform prior on the variance parameter of a QTL effect to that of a scaled inverse chi-square prior. The results of Monte Carlo simulation experiments show that the improved method increases the power from 25 to 88% in the detection of two closely linked QTL of equal size in opposite directions and from 60 to 80% in the identification of QTL with small effects (0.5% of the total phenotypic variance). We used the improved method to detect QTL responsible for the barley kernel weight trait using 145 doubled haploid lines developed in the North American Barley Genome Mapping Project. Application of the proposed method to other shrinkage estimation of QTL effects is discussed.
Measurement error in epidemiologic studies of air pollution based on land-use regression models.
Basagaña, Xavier; Aguilera, Inmaculada; Rivera, Marcela; Agis, David; Foraster, Maria; Marrugat, Jaume; Elosua, Roberto; Künzli, Nino
2013-10-15
Land-use regression (LUR) models are increasingly used to estimate air pollution exposure in epidemiologic studies. These models use air pollution measurements taken at a small set of locations and modeling based on geographical covariates for which data are available at all study participant locations. The process of LUR model development commonly includes a variable selection procedure. When LUR model predictions are used as explanatory variables in a model for a health outcome, measurement error can lead to bias of the regression coefficients and to inflation of their variance. In previous studies dealing with spatial predictions of air pollution, bias was shown to be small while most of the effect of measurement error was on the variance. In this study, we show that in realistic cases where LUR models are applied to health data, bias in health-effect estimates can be substantial. This bias depends on the number of air pollution measurement sites, the number of available predictors for model selection, and the amount of explainable variability in the true exposure. These results should be taken into account when interpreting health effects from studies that used LUR models.
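A minimal simulation of the attenuation mechanism described above; the exposure model, error sizes, and effect size are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

n = 2000
x_true = rng.normal(10.0, 2.0, n)              # true long-term exposure
beta = 0.5
y = beta * x_true + rng.normal(0.0, 3.0, n)    # health outcome

# LUR-style predicted exposure: captures only part of the explainable
# variability and adds its own prediction error.
r2_lur = 0.6
x_lur = (x_true.mean()
         + np.sqrt(r2_lur) * (x_true - x_true.mean())
         + rng.normal(0.0, 2.0, n))

beta_hat = np.polyfit(x_lur, y, 1)[0]
print(f"true beta = {beta}, estimate using LUR exposure = {beta_hat:.3f}")
```

With less explainable variability (smaller r2_lur) or more prediction error, the health-effect estimate moves further from the true value, in line with the abstract's conclusion.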
Lung vasculature imaging using speckle variance optical coherence tomography
NASA Astrophysics Data System (ADS)
Cua, Michelle; Lee, Anthony M. D.; Lane, Pierre M.; McWilliams, Annette; Shaipanich, Tawimas; MacAulay, Calum E.; Yang, Victor X. D.; Lam, Stephen
2012-02-01
Architectural changes in and remodeling of the bronchial and pulmonary vasculature are important pathways in diseases such as asthma, chronic obstructive pulmonary disease (COPD), and lung cancer. However, there is a lack of methods that can find and examine small bronchial vasculature in vivo. Structural lung airway imaging using optical coherence tomography (OCT) has previously been shown to be of great utility in examining bronchial lesions during lung cancer screening under the guidance of autofluorescence bronchoscopy. Using a fiber optic endoscopic OCT probe, we acquire OCT images from in vivo human subjects. The side-looking, circumferentially-scanning probe is inserted down the instrument channel of a standard bronchoscope and manually guided to the imaging location. Multiple images are collected with the probe spinning proximally at 100 Hz. Due to friction, the distal end of the probe does not spin perfectly synchronously with the proximal end, resulting in non-uniform rotational distortion (NURD) of the images. First, we apply a correction algorithm to remove NURD. We then use a speckle variance algorithm to identify vasculature. The initial data show a vasculature density in small human airways similar to what would be expected.
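A minimal sketch of the speckle variance computation itself, on a synthetic frame stack; the geometry and noise levels are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic OCT stack: repeated structural frames of the same location.
n_frames, h, w = 8, 64, 64
static = rng.rayleigh(1.0, (h, w))                             # static tissue speckle
frames = np.repeat(static[None], n_frames, axis=0)
frames[:, 20:28, 30:38] = rng.rayleigh(1.0, (n_frames, 8, 8))  # decorrelating flow
frames += rng.normal(0.0, 0.05, frames.shape)                  # detection noise

# Speckle variance: interframe intensity variance per pixel. Flow regions
# decorrelate between frames and show high variance; static tissue does not.
sv = frames.var(axis=0)
print(f"vessel SV ~ {sv[20:28, 30:38].mean():.3f}, "
      f"tissue SV ~ {sv[:10, :10].mean():.3f}")
```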
NASA Technical Reports Server (NTRS)
Massa, D.
1980-01-01
This paper discusses systematic errors which arise from exclusive use of the MK system to determine reddening. It is found that implementation of uvby, beta photometry to refine the qualitative MK grid substantially reduces stellar mismatch error. A working definition of 'identical' uvby, beta types is investigated and the relationship of uvby to B-V color excesses is determined. A comparison is also made of the hydrogen-based uvby, beta types with the MK system based on He and metal lines. A small core correlated effective temperature luminosity error in the MK system for the early B stars is observed, along with a breakdown of the MK luminosity criteria for the late B stars. The second part investigates the wavelength dependence of interstellar extinction in the ultraviolet wavelength range observed with the TD-1 satellite. In this study the sets of identical stars employed to find reddening are determined more precisely than in previous studies and consist only of normal, nonsupergiant stars. Multivariate analysis-of-variance techniques in an unbiased coordinate system are used to determine the wavelength dependence of reddening.
Prospects for discovering pulsars in future continuum surveys using variance imaging
NASA Astrophysics Data System (ADS)
Dai, S.; Johnston, S.; Hobbs, G.
2017-12-01
In our previous paper, we developed a formalism for computing variance images from standard, interferometric radio images containing time and frequency information. Variance imaging with future radio continuum surveys allows us to identify radio pulsars and serves as a complement to conventional pulsar searches that are most sensitive to strictly periodic signals. Here, we carry out simulations to predict the number of pulsars that we can uncover with variance imaging in future continuum surveys. We show that the Australian SKA Pathfinder (ASKAP) Evolutionary Map of the Universe (EMU) survey can find ∼30 normal pulsars and ∼40 millisecond pulsars (MSPs) over and above the number known today, and similarly an all-sky continuum survey with SKA-MID can discover ∼140 normal pulsars and ∼110 MSPs with this technique. Variance imaging with EMU and SKA-MID will detect pulsars with large duty cycles and is therefore a potential tool for finding MSPs and pulsars in relativistic binary systems. Compared with current pulsar surveys at high Galactic latitudes in the Southern hemisphere, variance imaging with EMU and SKA-MID will be more sensitive, and will enable detection of pulsars with dispersion measures between ∼10 and 100 cm⁻³ pc.
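A minimal sketch of the variance statistic that separates a modulated (pulsar-like) source from a steady continuum source, using toy per-pixel time-frequency data; this is an illustration, not the formalism of the earlier paper:

```python
import numpy as np

rng = np.random.default_rng(9)

# Per-pixel dynamic spectra: (time, frequency) intensities.
n_t, n_f = 256, 64
steady = rng.normal(1.0, 0.1, (n_t, n_f))
pulsar = rng.normal(1.0, 0.1, (n_t, n_f)) * (1.0 + rng.exponential(0.5, (n_t, 1)))

def modulation(cube):
    # Normalized variance over time and frequency: large for pulse-to-pulse
    # modulated emission, small for a steady source.
    return cube.var() / cube.mean() ** 2

print(f"steady source : {modulation(steady):.3f}")
print(f"pulsar-like   : {modulation(pulsar):.3f}")
```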
Integrating Variances into an Analytical Database
NASA Technical Reports Server (NTRS)
Sanchez, Carlos
2010-01-01
For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize them. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance had been bypassed many times already, suggesting that the requirement may not really be needed and should instead be changed to allow the variance's conditions permanently. This project was not restricted to the design and development of the database system; it also involved exporting the data from the database to a different format (e.g., Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part of what contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
NASA Astrophysics Data System (ADS)
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
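The variance bookkeeping that the BMA tree organizes is the law of total variance applied level by level. A minimal sketch with hypothetical numbers for one uncertain model component follows; applying bma_level recursively from the leaves of the tree upward reproduces the hierarchical within/between decomposition described above:

import numpy as np

def bma_level(p, means, variances):
    """One level of the BMA variance decomposition (law of total variance).

    p         : posterior probabilities of the candidate propositions
    means     : predictive mean of each candidate
    variances : predictive variance of each candidate (within-model)
    Returns (mean, within_var, between_var); total = within + between.
    """
    p = np.asarray(p, dtype=float)
    mu = np.dot(p, means)
    within = np.dot(p, variances)                        # E[Var(y | model)]
    between = np.dot(p, (np.asarray(means) - mu) ** 2)   # Var(E[y | model])
    return mu, within, between

# two candidate geological architectures, purely illustrative numbers
mu, w, b = bma_level(p=[0.7, 0.3], means=[12.0, 15.0], variances=[4.0, 6.0])
print(mu, w, b, w + b)   # 12.9, 4.6, 1.89, total 6.49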
Family environment and its relation to adolescent personality factors.
Forman, S G; Forman, B D
1981-04-01
Investigated the relationship between family social climate characteristics and adolescent personality functioning. The High School Personality Questionnaire (HSPQ) was administered to 80 high school students. These students and their parents also completed the Family Environment Scale (FES). Results of a stepwise multiple regression analysis indicated that one or more HSPQ scales had significant associations with each FES scale. Significant variance in child behavior was attributed to family social system functioning; however, no single family variable accounted for a major portion of the variance to the exclusion of other factors. It was concluded that child behavior varies with total system functioning, more than with separate system factors.
Canivez, Gary L; Watkins, Marley W; Dombrowski, Stefan C
2017-04-01
The factor structure of the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V; Wechsler, 2014a) standardization sample (N = 2,200) was examined using confirmatory factor analyses (CFA) with maximum likelihood estimation for all reported models from the WISC-V Technical and Interpretation Manual (Wechsler, 2014b). Additionally, alternative bifactor models were examined and variance estimates and model-based reliability estimates (ω coefficients) were provided. Results from analyses of the 16 primary and secondary WISC-V subtests found that all higher-order CFA models with 5 group factors (VC, VS, FR, WM, and PS) produced model specification errors where the Fluid Reasoning factor produced negative variance and were thus judged inadequate. Of the 16 models tested, the bifactor model containing 4 group factors (VC, PR, WM, and PS) produced the best fit. Results from analyses of the 10 primary WISC-V subtests also found the bifactor model with 4 group factors (VC, PR, WM, and PS) produced the best fit. Variance estimates from both 16 and 10 subtest based bifactor models found dominance of general intelligence (g) in accounting for subtest variance (except for PS subtests) and large ω-hierarchical coefficients supporting general intelligence interpretation. The small portions of variance uniquely captured by the 4 group factors and low ω-hierarchical subscale coefficients likely render the group factors of questionable interpretive value independent of g (except perhaps for PS). Present CFA results confirm the EFA results reported by Canivez, Watkins, and Dombrowski (2015); Dombrowski, Canivez, Watkins, and Beaujean (2015); and Canivez, Dombrowski, and Watkins (2015).
Multi-ray-based system matrix generation for 3D PET reconstruction
NASA Astrophysics Data System (ADS)
Moehrs, Sascha; Defrise, Michel; Belcari, Nicola; DelGuerra, Alberto; Bartoli, Antonietta; Fabbri, Serena; Zanetti, Gianluigi
2008-12-01
Iterative image reconstruction algorithms for positron emission tomography (PET) require a sophisticated system matrix (model) of the scanner. Our aim is to set up such a model offline for the YAP-(S)PET II small animal imaging tomograph in order to use it subsequently with standard ML-EM (maximum-likelihood expectation maximization) and OSEM (ordered subset expectation maximization) for fully three-dimensional image reconstruction. In general, the system model can be obtained analytically, via measurements or via Monte Carlo simulations. In this paper, we present the multi-ray method, which can be considered as a hybrid method to set up the system model offline. It incorporates accurate analytical (geometric) considerations as well as crystal depth and crystal scatter effects. At the same time, it has the potential to model seamlessly other physical aspects such as the positron range. The proposed method is based on multiple rays which are traced from/to the detector crystals through the image volume. Such a ray-tracing approach itself is not new; however, we derive a novel mathematical formulation of the approach and investigate the positioning of the integration (ray-end) points. First, we study single system matrix entries and show that the positioning and weighting of the ray-end points according to Gaussian integration give better results compared to equally spaced integration points (trapezoidal integration), especially if only a small number of integration points (rays) are used. Additionally, we show that, for a given variance of the single matrix entries, the number of rays (events) required to calculate the whole matrix is a factor of 20 larger when using a pure Monte-Carlo-based method. Finally, we analyse the quality of the model by reconstructing phantom data from the YAP-(S)PET II scanner.
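The quadrature comparison driving the ray-end-point result can be reproduced in miniature. A sketch, assuming a smooth 1D stand-in integrand for the response along a ray (not the actual YAP-(S)PET II system model): with the same number of points, Gauss-Legendre placement and weighting beats equally spaced trapezoidal placement by orders of magnitude.

import math
import numpy as np

def gauss_vs_trapezoid(f, a, b, n):
    # n-point Gauss-Legendre rule mapped from [-1, 1] to [a, b]
    x, w = np.polynomial.legendre.leggauss(n)
    gauss = 0.5 * (b - a) * np.dot(w, f(0.5 * (b - a) * x + 0.5 * (b + a)))
    # n equally spaced points, trapezoidal rule
    xt = np.linspace(a, b, n)
    trap = np.trapz(f(xt), xt)
    return gauss, trap

f = lambda x: np.exp(-x ** 2)                      # smooth stand-in integrand
exact = math.sqrt(math.pi) / 2.0 * math.erf(1.0)   # integral of f on [0, 1]
for n in (3, 5, 9):
    g, t = gauss_vs_trapezoid(f, 0.0, 1.0, n)
    print(n, abs(g - exact), abs(t - exact))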
Entropy as a measure of diffusion
NASA Astrophysics Data System (ADS)
Aghamohammadi, Amir; Fatollahi, Amir H.; Khorrami, Mohammad; Shariati, Ahmad
2013-10-01
The time variation of entropy, as an alternative to the variance, is proposed as a measure of the diffusion rate. It is shown that for linear and time-translationally invariant systems having a large-time limit for the density, at large times the entropy tends exponentially to a constant. For systems with no stationary density, at large times the entropy is logarithmic with a coefficient specifying the speed of the diffusion. As an example, the large-time behaviors of the entropy and the variance are compared for various types of fractional-derivative diffusions.
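For the simplest concrete case, free Brownian diffusion in one dimension (a system with no stationary density), the density stays Gaussian with variance 2Dt, so the differential entropy is S(t) = (1/2) ln(4πeDt): logarithmic in time, with the coefficient (1/2 per dimension) playing the role of the diffusion-speed measure. A small numeric check, assuming ordinary rather than fractional diffusion:

import numpy as np

D = 0.5   # diffusion coefficient

def entropy_1d_diffusion(t, D):
    # differential entropy of a Gaussian density with variance 2*D*t
    return 0.5 * np.log(4.0 * np.pi * np.e * D * t)

for t in (1.0, 10.0, 100.0, 1000.0):
    print(t, entropy_1d_diffusion(t, D))
# successive values differ by 0.5*ln(10) per decade, while the variance
# grows linearly; the logarithmic coefficient measures the diffusion speed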
Briat, Corentin; Gupta, Ankit; Khammash, Mustafa
2018-06-01
The ability of a cell to regulate and adapt its internal state in response to unpredictable environmental changes is called homeostasis and this ability is crucial for the cell's survival and proper functioning. Understanding how cells can achieve homeostasis, despite the intrinsic noise or randomness in their dynamics, is fundamentally important for both systems and synthetic biology. In this context, a significant development is the proposed antithetic integral feedback (AIF) motif, which is found in natural systems, and is known to ensure robust perfect adaptation for the mean dynamics of a given molecular species involved in a complex stochastic biomolecular reaction network. From the standpoint of applications, one drawback of this motif is that it often leads to an increased cell-to-cell heterogeneity or variance when compared to a constitutive (i.e. open-loop) control strategy. Our goal in this paper is to show that this performance deterioration can be countered by combining the AIF motif and a negative feedback strategy. Using a tailored moment closure method, we derive approximate expressions for the stationary variance for the controlled network that demonstrate that increasing the strength of the negative feedback can indeed decrease the variance, sometimes even below its constitutive level. Numerical results verify the accuracy of these results and we illustrate them by considering three biomolecular networks with two types of negative feedback strategies. Our computational analysis indicates that there is a trade-off between the speed of the settling-time of the mean trajectories and the stationary variance of the controlled species; i.e. smaller variance is associated with larger settling-time.
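The variance-reduction effect of negative feedback can be seen in the simplest setting: a birth-death process whose production rate is repressed by its own copy number. The Gillespie sketch below illustrates the general idea only (it is not the authors' AIF controller or moment-closure method, and the rates are hypothetical); the time-averaged Fano factor drops below the constitutive value of 1 as the feedback strength alpha increases:

import numpy as np

def ssa_birth_death(k, gamma, alpha, t_end, rng):
    # birth rate k/(1 + alpha*x) (negative feedback), death rate gamma*x
    t, x = 0.0, 0
    dwell = {}                          # state -> accumulated dwell time
    while t < t_end:
        birth = k / (1.0 + alpha * x)
        death = gamma * x
        total = birth + death
        dt = rng.exponential(1.0 / total)
        if t > 0.2 * t_end:             # discard the transient
            dwell[x] = dwell.get(x, 0.0) + dt
        t += dt
        x += 1 if rng.random() * total < birth else -1
    states = np.array(sorted(dwell))
    w = np.array([dwell[s] for s in states])
    w = w / w.sum()
    mean = np.dot(w, states)
    var = np.dot(w, (states - mean) ** 2)
    return mean, var

rng = np.random.default_rng(1)
for alpha in (0.0, 0.05, 0.2):
    m, v = ssa_birth_death(k=50.0, gamma=1.0, alpha=alpha, t_end=5000.0, rng=rng)
    print(alpha, round(m, 1), round(v / m, 2))   # Fano factor < 1 under feedback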
Turner, Rebecca M; Davey, Jonathan; Clarke, Mike J; Thompson, Simon G; Higgins, Julian PT
2012-01-01
Background Many meta-analyses contain only a small number of studies, which makes it difficult to estimate the extent of between-study heterogeneity. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, and offers advantages over conventional random-effects meta-analysis. To assist in this, we provide empirical evidence on the likely extent of heterogeneity in particular areas of health care. Methods Our analyses included 14 886 meta-analyses from the Cochrane Database of Systematic Reviews. We classified each meta-analysis according to the type of outcome, type of intervention comparison and medical specialty. By modelling the study data from all meta-analyses simultaneously, using the log odds ratio scale, we investigated the impact of meta-analysis characteristics on the underlying between-study heterogeneity variance. Predictive distributions were obtained for the heterogeneity expected in future meta-analyses. Results Between-study heterogeneity variances for meta-analyses in which the outcome was all-cause mortality were found to be on average 17% (95% CI 10–26) of variances for other outcomes. In meta-analyses comparing two active pharmacological interventions, heterogeneity was on average 75% (95% CI 58–95) of variances for non-pharmacological interventions. Meta-analysis size was found to have only a small effect on heterogeneity. Predictive distributions are presented for nine different settings, defined by type of outcome and type of intervention comparison. For example, for a planned meta-analysis comparing a pharmacological intervention against placebo or control with a subjectively measured outcome, the predictive distribution for heterogeneity is a log-normal(−2.13, 1.58²) distribution, which has a median value of 0.12. In an example of meta-analysis of six studies, incorporating external evidence led to a smaller heterogeneity estimate and a narrower confidence interval for the combined intervention effect. Conclusions Meta-analysis characteristics were strongly associated with the degree of between-study heterogeneity, and predictive distributions for heterogeneity differed substantially across settings. The informative priors provided will be very beneficial in future meta-analyses including few studies. PMID:22461129
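The reported predictive distribution is directly usable as an informative prior. A quick check of the quoted numbers, taking the log-normal(−2.13, 1.58²) form given above for the between-study heterogeneity variance τ²:

import numpy as np

mu, sigma = -2.13, 1.58           # log-scale mean and SD for tau^2
print(np.exp(mu))                 # median ~0.119, matching the quoted 0.12

rng = np.random.default_rng(42)
tau2 = rng.lognormal(mean=mu, sigma=sigma, size=200_000)
print(np.median(tau2))                      # ~0.12
print(np.quantile(tau2, [0.025, 0.975]))    # 95% predictive interval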
Least-squares dual characterization for ROI assessment in emission tomography
NASA Astrophysics Data System (ADS)
Ben Bouallègue, F.; Crouzet, J. F.; Dubois, A.; Buvat, I.; Mariano-Goulart, D.
2013-06-01
Our aim is to describe an original method for estimating the statistical properties of regions of interest (ROIs) in emission tomography. Drawing upon the work of Louis on the approximate inverse, we propose a dual formulation of the ROI estimation problem to derive the ROI activity and variance directly from the measured data without any image reconstruction. The method requires the definition of an ROI characteristic function that can be extracted from a co-registered morphological image. This characteristic function can be smoothed to optimize the resolution-variance tradeoff. An iterative procedure is detailed for the solution of the dual problem in the least-squares sense (least-squares dual (LSD) characterization), and a linear extrapolation scheme is described to compensate for sampling partial volume effect and reduce the estimation bias (LSD-ex). LSD and LSD-ex are compared with classical ROI estimation using pixel summation after image reconstruction and with Huesman's method. For this comparison, we used Monte Carlo simulations (GATE simulation tool) of 2D PET data of a Hoffman brain phantom containing three small uniform high-contrast ROIs and a large non-uniform low-contrast ROI. Our results show that the performances of LSD characterization are at least as good as those of the classical methods in terms of root mean square (RMS) error. For the three small tumor regions, LSD-ex allows a reduction in the estimation bias by up to 14%, resulting in a reduction in the RMS error of up to 8.5%, compared with the optimal classical estimation. For the large non-specific region, LSD using appropriate smoothing could intuitively and efficiently handle the resolution-variance tradeoff.
Tinker, M. Tim; Estes, James A.; Staedler, Michelle; Bodkin, James L.; Tinker, M. Tim; Estes, James A.; Ralls, Katherine; Williams, Terrie M.; Jessup, David A.; Costa, Daniel P.
2006-01-01
Longitudinal foraging data collected from 60 sea otters implanted with VHF radio transmitters at two study sites in Central California over a three-year period demonstrated even greater individual dietary specialization than in previous studies, with only 54% dietary overlap between individuals and the population. Multivariate statistical analyses indicated that individual diets could be grouped into three general "diet types" representing distinct foraging specializations. Type 1 specialists consumed large size prey but had low dive efficiency, Type 2 specialists consumed small to medium size prey with high dive efficiency, and Type 3 specialists consumed very small prey (mainly snails) with very high dive efficiency. The mean rate of energy gain for the population as a whole was low when compared to other sea otter populations in Alaska but showed a high degree of within- and between-individual variation, much of which was accounted for by the three foraging strategies. Type 1 specialists had the highest mean energy gain but also the highest within-individual variance in energy gain. Type 2 specialists had the lowest mean energy gain but also the lowest variance. Type 3 specialists had an intermediate mean and variance. All three strategies resulted in very similar probabilities of exceeding a critical rate of energy gain on any given day. Correlational selection may help maintain multiple foraging strategies in the population: a fitness surface (using mean rate of energy gain as a proxy for fitness) fit to the first two principal components of foraging behavior suggested that the three foraging strategies occupy separate fitness peaks. Food limitation is likely an important ultimate factor restricting population growth in the center of the population’s range in California, although the existence of alternative foraging strategies results in different impacts of food limitation on individuals and thus may obscure expected patterns of density dependence.
Statistics of spatial derivatives of velocity and pressure in turbulent channel flow
NASA Astrophysics Data System (ADS)
Vreman, A. W.; Kuerten, J. G. M.
2014-08-01
Statistical profiles of the first- and second-order spatial derivatives of velocity and pressure are reported for turbulent channel flow at Reτ = 590. The statistics were extracted from a high-resolution direct numerical simulation. To quantify the anisotropic behavior of fine-scale structures, the variances of the derivatives are compared with the theoretical values for isotropic turbulence. It is shown that appropriate combinations of first- and second-order velocity derivatives lead to (directional) viscous length scales without explicit occurrence of the viscosity in the definitions. To quantify the non-Gaussian and intermittent behavior of fine-scale structures, higher-order moments and probability density functions of spatial derivatives are reported. Absolute skewnesses and flatnesses of several spatial derivatives display high peaks in the near wall region. In the logarithmic and central regions of the channel flow, all first-order derivatives appear to be significantly more intermittent than in isotropic turbulence at the same Taylor Reynolds number. Since the nine variances of first-order velocity derivatives are the distinct elements of the turbulence dissipation, the budgets of these nine variances are shown, together with the budget of the turbulence dissipation. The comparison of the budgets in the near-wall region indicates that the normal derivative of the fluctuating streamwise velocity (∂u'/∂y) plays a more important role than other components of the fluctuating velocity gradient. The small-scale generation term formed by triple correlations of fluctuations of first-order velocity derivatives is analyzed. A typical mechanism of small-scale generation near the wall (around y+ = 1), the intensification of positive ∂u'/∂y by local strain fluctuation (compression in normal and stretching in spanwise direction), is illustrated and discussed.
Effects of sample size on estimates of population growth rates calculated with matrix models.
Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M
2008-08-28
Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
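The Jensen's-inequality effect is easy to reproduce. A minimal simulation with a hypothetical two-stage matrix (the vital rates and sample sizes below are illustrative, not the study's data): survival estimated from binomial samples of n individuals is plugged into the matrix, and the mean of the resulting lambda estimates is compared with the true lambda.

import numpy as np

def growth_rate(s1, s2, f):
    # dominant eigenvalue of a 2-stage matrix: juvenile survival s1,
    # adult survival s2, fecundity f
    A = np.array([[0.0, f], [s1, s2]])
    return np.max(np.real(np.linalg.eigvals(A)))

s1, s2, f = 0.5, 0.5, 1.2          # hypothetical vital rates (low survival)
true_lam = growth_rate(s1, s2, f)

rng = np.random.default_rng(7)
for n in (10, 50, 250, 1000):      # individuals sampled per stage
    est = [growth_rate(rng.binomial(n, s1) / n, rng.binomial(n, s2) / n, f)
           for _ in range(5000)]
    print(n, np.mean(est) - true_lam)   # bias shrinks as sample size grows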
NASA Astrophysics Data System (ADS)
Chatzidakis, S.; Choi, C. K.; Tsoukalas, L. H.
2016-08-01
The potential non-proliferation monitoring of spent nuclear fuel sealed in dry casks interacting continuously with the naturally generated cosmic ray muons is investigated. Treatments of the muon RMS scattering angle by Molière, Rossi-Greisen, Highland, and Lynch-Dahl were analyzed and compared with simplified Monte Carlo simulations. The Lynch-Dahl expression has the lowest error and appears to be appropriate when performing conceptual calculations for high-Z, thick targets such as dry casks. The GEANT4 Monte Carlo code was used to simulate dry casks with various fuel loadings, and scattering variance estimates for each case were obtained. The scattering variance estimation was shown to be unbiased and, using Chebyshev's inequality, it was found that 10⁶ muons will provide estimates of the scattering variances that are within 1% of the true value at a 99% confidence level. These estimates were used as reference values to calculate scattering distributions and evaluate the asymptotic behavior for small variations in fuel loading. It is shown that the scattering distributions of a fully loaded dry cask and one with a fuel assembly missing initially overlap significantly, but their distance eventually increases with increasing number of muons. One missing fuel assembly can be distinguished from a fully loaded cask with only a small overlap between the distributions, as is the case with 100,000 muons. This indicates that the removal of a standard fuel assembly can be identified using muons, provided that enough muons are collected. A Bayesian algorithm was developed to classify dry casks and provide a decision rule that minimizes the risk of making an incorrect decision. The algorithm performance was evaluated and the lower detection limit was determined.
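The sample-size arithmetic can be checked with Chebyshev's inequality. For a near-Gaussian scattering angle the sample variance satisfies Var(s²) ≈ 2σ⁴/N, so P(|s² − σ²| ≥ εσ²) ≤ 2/(Nε²); demanding this bound be at most δ gives N ≥ 2/(ε²δ). With ε = δ = 0.01 this conservative bound is about 2×10⁶, the same order of magnitude as the 10⁶ figure above (the exact count depends on the kurtosis of the simulated angular distribution, which the abstract does not state). A sketch with assumed Gaussian angles:

import numpy as np

eps, delta = 0.01, 0.01                  # within 1% of truth, 99% confidence
print(2.0 / (eps ** 2 * delta))          # Chebyshev bound: ~2e6 muons

# empirical check on a unit-variance Gaussian scattering angle
rng = np.random.default_rng(0)
s2 = np.var(rng.normal(0.0, 1.0, 2_000_000), ddof=1)
print(abs(s2 - 1.0))                     # comfortably within 1 percent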
View-angle-dependent AIRS Cloudiness and Radiance Variance: Analysis and Interpretation
NASA Technical Reports Server (NTRS)
Gong, Jie; Wu, Dong L.
2013-01-01
Upper tropospheric clouds play an important role in the global energy budget and hydrological cycle. Significant view-angle asymmetry has been observed in upper-level tropical clouds derived from eight years of Atmospheric Infrared Sounder (AIRS) 15 μm radiances. Here, we find that the asymmetry also exists in the extra-tropics. It is larger during day than during night, more prominent near elevated terrain, and closely associated with deep convection and wind shear. The cloud radiance variance, a proxy for cloud inhomogeneity, shows asymmetry characteristics consistent with those in the AIRS cloudiness. The leading causes of the view-dependent cloudiness asymmetry are the local time difference and small-scale organized cloud structures. The local time difference (1-1.5 hr) of upper-level (UL) clouds between the two AIRS outermost views can create part of the observed asymmetry. On the other hand, small-scale tilted and banded structures of the UL clouds can induce about half of the observed view-angle dependent differences in the AIRS cloud radiances and their variances. This estimate is inferred from an analogous study using Microwave Humidity Sounder (MHS) radiances observed during the period of time when there were simultaneous measurements at two different view-angles from the NOAA-18 and -19 satellites. The existence of tilted cloud structures and asymmetric 15 μm and 6.7 μm cloud radiances implies that cloud statistics would be view-angle dependent, and should be taken into account in radiative transfer calculations, measurement uncertainty evaluations and cloud climatology investigations. In addition, the momentum forcing in the upper troposphere from tilted clouds is also likely asymmetric, which can affect atmospheric circulation anisotropically.
Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch
NASA Astrophysics Data System (ADS)
Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.
2014-10-01
The economic dispatch (ED) is an essential optimization task in the power generation system. It is defined as the process of allocating the real power output of generation units to meet the required load demand so that their total operating cost is minimized while satisfying all physical and operational constraints. This paper introduces a novel optimization method named swarm-based mean-variance mapping optimization (MVMOS). The technique is an extension of the original single-particle mean-variance mapping optimization (MVMO). Its features make it a potentially attractive algorithm for solving optimization problems. The proposed method is implemented for three test power systems, including 3, 13 and 20 thermal generation units with quadratic cost functions, and the obtained results are compared with many other methods available in the literature. Test results indicate that the proposed method can be efficiently implemented for solving economic dispatch.
A Formula to Calculate Standard Liver Volume Using Thoracoabdominal Circumference.
Shaw, Brian I; Burdine, Lyle J; Braun, Hillary J; Ascher, Nancy L; Roberts, John P
2017-12-01
With the use of split liver grafts as well as living donor liver transplantation (LDLT) it is imperative to know the minimum graft volume to avoid complications. Most current formulas to predict standard liver volume (SLV) rely on weight-based measures that are likely inaccurate in the setting of cirrhosis. Therefore, we sought to create a formula for estimating SLV without weight-based covariates. LDLT donors underwent computed tomography scan volumetric evaluation of their livers. An optimal formula for calculating SLV using the anthropometric measure thoracoabdominal circumference (TAC) was determined using leave-one-out cross-validation. The ability of this formula to correctly predict liver volume was checked against other existing formulas by analysis of variance. The ability of the formula to predict small grafts in LDLT was evaluated by exact logistic regression. The optimal formula using TAC was determined to be SLV = (TAC × 3.5816) - (Age × 3.9844) - (Sex × 109.7386) - 934.5949. When compared to historic formulas, the current formula was the only one which was not significantly different than computed tomography determined liver volumes when compared by analysis of variance with Dunnett posttest. When evaluating the ability of the formula to predict small for size syndrome, many (10/16) of the formulas tested had significant results by exact logistic regression, with our formula predicting small for size syndrome with an odds ratio of 7.94 (95% confidence interval, 1.23-91.36; P = 0.025). We report a formula for calculating SLV that does not rely on weight-based variables and that has good ability to predict SLV and identify patients with potentially small grafts.
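The reported formula is trivially machine-checkable. A sketch implementation, with the caveat that the abstract states neither the units of TAC nor the numeric coding of the Sex covariate, so the inputs below are illustrative assumptions only:

def standard_liver_volume(tac, age, sex):
    """SLV from the formula reported above:
    SLV = TAC*3.5816 - Age*3.9844 - Sex*109.7386 - 934.5949
    The units of TAC and the 0/1 coding of sex are assumptions here;
    the abstract does not specify them.
    """
    return tac * 3.5816 - age * 3.9844 - sex * 109.7386 - 934.5949

# illustrative call only; not a validated clinical calculation
print(standard_liver_volume(tac=850.0, age=45, sex=1))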
Mindfulness-based interventions with youth: A comprehensive meta-analysis of group-design studies.
Klingbeil, David A; Renshaw, Tyler L; Willenbrink, Jessica B; Copek, Rebecca A; Chan, Kai Tai; Haddock, Aaron; Yassine, Jordan; Clifton, Jesse
2017-08-01
The treatment effects of Mindfulness-Based Interventions (MBIs) with youth were synthesized from 76 studies involving 6121 participants. A total of 885 effect sizes were aggregated using meta-regression with robust variance estimation. Overall, MBIs were associated with small treatment effects in studies using pre-post (g=0.305, SE=0.039) and controlled designs (g=0.322, SE=0.040). Treatment effects were measured after a follow-up period in 24 studies (n=1963). Results demonstrated that treatment effects were larger at follow-up than post-treatment in pre-post (g=0.462, SE=0.118) and controlled designs (g=0.402, SE=0.081). Moderator analyses indicated that intervention setting and intervention dosage were not meaningfully related to outcomes after controlling for study design quality. With that said, the between-study heterogeneity in the intercept-only models was consistently small, thus limiting the amount of variance for the moderators to explain. A series of exploratory analyses were used to investigate the differential effectiveness of MBIs across four therapeutic process domains and seven therapeutic outcome domains. Small, positive results were generally observed across the process and outcome domains. Notably, MBIs were associated with moderate effects on the process variable of mindfulness in controlled studies (n=1108, g=0.510). Limitations and directions for future research and practice are discussed.
Tissier, Cloélia; Linzarini, Adriano; Allaire-Duquette, Geneviève; Mevel, Katell; Poirel, Nicolas; Dollfus, Sonia; Etard, Olivier; Orliac, François; Peyrin, Carole; Charron, Sylvain; Raznahan, Armin; Houdé, Olivier; Borst, Grégoire; Cachia, Arnaud
2018-01-01
Inhibitory control (IC) is a core executive function that enables humans to resist habits, temptations, or distractions. IC efficiency in childhood is a strong predictor of academic and professional success later in life. Based on analysis of the sulcal pattern, a qualitative feature of cortex anatomy determined during fetal life and stable during development, we searched for evidence that interindividual differences in IC partly trace back to prenatal processes. Using anatomical magnetic resonance imaging (MRI), we analyzed the sulcal pattern of two key regions of the IC neural network, the dorsal anterior cingulate cortex (ACC) and the inferior frontal cortex (IFC), which limits the inferior frontal gyrus. We found that the sulcal pattern asymmetry of both the ACC and IFC contributes to IC (Stroop score) in children and adults: participants with asymmetrical ACC or IFC sulcal patterns had better IC efficiency than participants with symmetrical ACC or IFC sulcal patterns. Such additive effects of IFC and ACC sulcal patterns on IC efficiency suggest that distinct early neurodevelopmental mechanisms targeting different brain regions likely contribute to IC efficiency. This view shares some analogies with the "common variant-small effect" model in genetics, which states that frequent genetic polymorphisms have small effects but collectively account for a large portion of the variance. Similarly, each sulcal polymorphism has a small but additive effect: IFC and ACC sulcal patterns, respectively, explained 3% and 14% of the variance of the Stroop interference scores.
Origin and Consequences of the Relationship between Protein Mean and Variance
Vallania, Francesco Luigi Massimo; Sherman, Marc; Goodwin, Zane; Mogno, Ilaria; Cohen, Barak Alon; Mitra, Robi David
2014-01-01
Cell-to-cell variance in protein levels (noise) is a ubiquitous phenomenon that can increase fitness by generating phenotypic differences within clonal populations of cells. An important challenge is to identify the specific molecular events that control noise. This task is complicated by the strong dependence of a protein's cell-to-cell variance on its mean expression level through a power-law like relationship (σ² ∝ μ^1.69). Here, we dissect the nature of this relationship using a stochastic model parameterized with experimentally measured values. This framework naturally recapitulates the power-law like relationship (σ² ∝ μ^1.6) and accurately predicts protein variance across the yeast proteome (r² = 0.935). Using this model we identified two distinct mechanisms by which protein variance can be increased. Variables that affect promoter activation, such as nucleosome positioning, increase protein variance by changing the exponent of the power-law relationship. In contrast, variables that affect processes downstream of promoter activation, such as mRNA and protein synthesis, increase protein variance in a mean-dependent manner following the power-law. We verified our findings experimentally using an inducible gene expression system in yeast. We conclude that the power-law-like relationship between noise and protein mean is due to the kinetics of promoter activation. Our results provide a framework for understanding how molecular processes shape stochastic variation across the genome. PMID:25062021
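The power-law relationship is what a straight-line fit on log-log axes measures. A synthetic sketch (not the authors' stochastic model): generate mean-variance pairs obeying σ² ∝ μ^1.69 with scatter, and recover the exponent as the regression slope.

import numpy as np

rng = np.random.default_rng(3)
mu = 10 ** rng.uniform(0.0, 4.0, 500)                  # mean protein levels
var = mu ** 1.69 * 10 ** rng.normal(0.0, 0.1, 500)     # power law plus scatter

slope, intercept = np.polyfit(np.log10(mu), np.log10(var), 1)
print(slope)   # ~1.69: the exponent of the mean-variance power law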
Observed spatiotemporal variability of boundary-layer turbulence over flat, heterogeneous terrain
NASA Astrophysics Data System (ADS)
Maurer, V.; Kalthoff, N.; Wieser, A.; Kohler, M.; Mauder, M.; Gantner, L.
2016-02-01
In the spring of 2013, extensive measurements with multiple Doppler lidar systems were performed. The instruments were arranged in a triangle with edge lengths of about 3 km in a moderately flat, agriculturally used terrain in northwestern Germany. For 6 mostly cloud-free convective days, vertical velocity variance profiles were calculated. Weighted-averaged surface fluxes proved to be more appropriate than data from individual sites for scaling the variance profiles; but even then, the scatter of profiles was mostly larger than the statistical error. The scatter could not be explained by mean wind speed or stability, whereas time periods with significantly increased variance contained broader thermals. Periods with an elevated maximum of the variance profiles could also be related to broad thermals. Moreover, statistically significant spatial differences of variance were found. They were not influenced by the existing surface heterogeneity. Instead, thermals were preserved between two sites when the travel time was shorter than the large-eddy turnover time. At the same time, no thermals passed for more than 2 h at a third site that was located perpendicular to the mean wind direction in relation to the first two sites. Organized structures of turbulence with subsidence prevailing in the surroundings of thermals can thus partly explain significant spatial variance differences existing for several hours. Therefore, the representativeness of individual variance profiles derived from measurements at a single site cannot be assumed.
Oregon ground-water quality and its relation to hydrogeological factors; a statistical approach
Miller, T.L.; Gonthier, J.B.
1984-01-01
An appraisal of Oregon ground-water quality was made using existing data accessible through the U.S. Geological Survey computer system. The data available for about 1,000 sites were separated by aquifer units and hydrologic units. Selected statistical moments were described for 19 constituents including major ions. About 96 percent of all sites in the data base were sampled only once. The sample data were classified by aquifer unit and hydrologic unit and analysis of variance was run to determine if significant differences exist between the units within each of these two classifications for the same 19 constituents on which statistical moments were determined. Results of the analysis of variance indicated both classification variables performed about the same, but aquifer unit did provide more separation for some constituents. Samples from the Rogue River basin were classified by location within the flow system and type of flow system. The samples were then analyzed using analysis of variance on 14 constituents to determine if there were significant differences between subsets classified by flow path. Results of this analysis were not definitive, but classification as to the type of flow system did indicate potential for segregating water-quality data into distinct subsets. (USGS)
NASA Astrophysics Data System (ADS)
Balch, W. M.; Poulton, A. J.; Drapeau, D. T.; Bowler, B. C.; Windecker, L. A.; Booth, E. S.
2011-03-01
Primary production (P_prim) and calcification (C_calc) were measured in the eastern and central Equatorial Pacific during December 2004 and September 2005, between 110°W and 140°W. The design of the field sampling allowed partitioning of P_prim and total chlorophyll a (B) between large (>3 μm) and small (0.45-3 μm) phytoplankton cells. The station locations allowed discrimination of meridional and zonal patterns. The cruises coincided with a warm El Niño Southern Oscillation (ENSO) phase and an ENSO-neutral phase, respectively, which proved to be the major factors relating to the patterns of productivity. Production and biomass of large phytoplankton generally covaried with those of small cells; large cells typically accounted for 20-30% of B and 20% of P_prim. Elevated biomass and primary production of all size fractions were highest along the equator as well as at the convergence zone between the North Equatorial Counter Current and the South Equatorial Current. C_calc by >0.4 μm cells was 2-3% of P_prim by the same size fraction, for both cruises. Biomass-normalized P_prim values were, on average, slightly higher during the warm-phase ENSO period, inconsistent with a "bottom-up" control mechanism (such as nutrient supply). Another source of variability along the equator was Tropical Instability Waves (TIWs). Zonal variance in integrated phytoplankton biomass (along the equator, between 110°W and 140°W) was almost the same as the meridional variance across it (between 4°N and 4°S). However, the zonal variance in integrated P_prim was half the variance observed meridionally. The variance in integrated C_calc along the equator was half that seen meridionally during the warm ENSO phase cruise, whereas during the ENSO-neutral period it was identical. No relation could be observed between the patterns of integrated carbon fixation (P_prim or C_calc) and integrated nutrients (nitrate, ammonium, silicate or dissolved iron). This suggests that the factors controlling integrated P_prim or C_calc are more complex than a simple bottom-up supply model and likely also involve a top-down grazer-control component. The carbon fixation within the Equatorial Pacific is well balanced, with diatom and coccolithophore production contributing a relatively steady proportion of the total primary production. This maintains a steady balance between organic and inorganic production, relevant to the ballasting of organic matter and the export flux of carbon from this important upwelling region.
Attitude toward Christianity and paranormal belief among 13- to 16-yr.-old students.
Williams, Emyr; Francis, Leslie J; Robbins, Mandy
2006-08-01
A small but statistically significant positive correlation (r = .17) was found in a sample of 279 13- to 16-yr.-old students in Wales between scores on the Francis Scale of Attitude toward Christianity and on a new Index of Paranormal Belief. These data suggest that there is little common variance between attitude toward Christianity and belief in the paranormal.
An approach to the analysis of performance of quasi-optimum digital phase-locked loops.
NASA Technical Reports Server (NTRS)
Polk, D. R.; Gupta, S. C.
1973-01-01
An approach to the analysis of performance of quasi-optimum digital phase-locked loops (DPLL's) is presented. An expression for the characteristic function of the prior error in the state estimate is derived, and from this expression an infinite dimensional equation for the prior error variance is obtained. The prior error-variance equation is a function of the communication system model and the DPLL gain and is independent of the method used to derive the DPLL gain. Two approximations are discussed for reducing the prior error-variance equation to finite dimension. The effectiveness of one approximation in analyzing DPLL performance is studied.
Wang, Tao; He, Fuhong; Zhang, Anding; Gu, Lijuan; Wen, Yangmao; Jiang, Weiguo; Shao, Hongbo
2014-01-01
This paper took a subregion of a small watershed gully system in the Beiyanzikou catchment of Qixia, China, as its study area and, using object-oriented image analysis (OBIA), extracted the shoulder line of gullies from high-spatial-resolution digital orthophoto map (DOM) aerial photographs. Next, it proposed an accuracy assessment method based on the adjacent distance between the boundary classified by remote sensing and points measured by RTK-GPS along the shoulder line of gullies. Finally, the original surface was fitted using linear regression in accordance with the elevation of the two extracted edges of the experimental gullies, named Gully 1 and Gully 2, and the erosion volume was calculated. The results indicate that OBIA can effectively extract information on gullies; the average range difference between points field-measured along the edge of gullies and the classified boundary is 0.3166 m, with a variance of 0.2116 m². The erosion areas and volumes of the two gullies are 2141.6250 m², 5074.1790 m³ and 1316.1250 m², 1591.5784 m³, respectively. The results of the study provide a new method for the quantitative study of small gully erosion.
Fenley, Andrew T.; Muddana, Hari S.; Gilson, Michael K.
2012-01-01
Molecular dynamics simulations of unprecedented duration now can provide new insights into biomolecular mechanisms. Analysis of a 1-ms molecular dynamics simulation of the small protein bovine pancreatic trypsin inhibitor reveals that its main conformations have different thermodynamic profiles and that perturbation of a single geometric variable, such as a torsion angle or interresidue distance, can select for occupancy of one or another conformational state. These results establish the basis for a mechanism that we term entropy–enthalpy transduction (EET), in which the thermodynamic character of a local perturbation, such as enthalpic binding of a small molecule, is camouflaged by the thermodynamics of a global conformational change induced by the perturbation, such as a switch into a high-entropy conformational state. It is noted that EET could occur in many systems, making measured entropies and enthalpies of folding and binding unreliable indicators of actual thermodynamic driving forces. The same mechanism might also account for the high experimental variance of measured enthalpies and entropies relative to free energies in some calorimetric studies. Finally, EET may be the physical mechanism underlying many cases of entropy–enthalpy compensation. PMID:23150595
Miura, Naoki; Kucho, Ken-Ichi; Noguchi, Michiko; Miyoshi, Noriaki; Uchiumi, Toshiki; Kawaguchi, Hiroaki; Tanimoto, Akihide
2014-01-01
The microminipig, which weighs less than 10 kg at an early stage of maturity, has been reported as a potential experimental model animal. Its extremely small size and other distinct characteristics suggest the possibility of a number of differences between the genome of the microminipig and that of conventional pigs. In this study, we analyzed the genomes of two healthy microminipigs using the next-generation sequencer SOLiD™ system. We then compared the obtained genomic sequences with a genomic database for the domestic pig (Sus scrofa). The mapping coverage of sequenced tags from the microminipig to conventional pig genomic sequences was greater than 96%, and we detected no clear, substantial genomic variance from these data. The results may indicate that the distinct characteristics of the microminipig derive from small-scale alterations in the genome, such as single nucleotide polymorphisms or translational modifications, rather than large-scale deletion or insertion polymorphisms. Further investigation of the entire genomic sequence of the microminipig with methods enabling deeper coverage is required to elucidate the genetic basis of its distinct phenotypic traits.
Rare Event Simulation in Radiation Transport
NASA Astrophysics Data System (ADS)
Kollman, Craig
This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep our estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities are chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution. In the final chapter, an attempt to generalize this algorithm to a continuous state space is made. This involves partitioning the space into a finite number of cells. There is a tradeoff between additional computation per iteration and variance reduction per iteration that arises in determining the optimal grid size. All versions of this algorithm can be thought of as a compromise between deterministic and Monte Carlo methods, capturing advantages of both techniques.
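The likelihood-ratio mechanics described above reduce, in the simplest case, to estimating a Gaussian tail probability by sampling from a shifted density and reweighting. A toy sketch (a textbook exponential-tilting example, not a transport calculation):

import numpy as np

rng = np.random.default_rng(0)
a, n = 4.0, 100_000                      # estimate p = P(X > 4), X ~ N(0, 1)

# naive Monte Carlo: nearly every sample misses the rare event
naive = (rng.normal(0.0, 1.0, n) > a).mean()

# importance sampling from N(a, 1); unbiasedness is preserved by the
# likelihood ratio phi(y)/phi(y - a) = exp(a^2/2 - a*y)
y = rng.normal(a, 1.0, n)
weighted = (y > a) * np.exp(0.5 * a * a - a * y)
est = weighted.mean()
se = weighted.std(ddof=1) / np.sqrt(n)

print(naive, est, se)                    # true value is about 3.17e-5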
Using variance structure to quantify responses to perturbation in fish catches
Vidal, Tiffany E.; Irwin, Brian J.; Wagner, Tyler; Rudstam, Lars G.; Jackson, James R.; Bence, James R.
2017-01-01
We present a case study evaluation of gill-net catches of Walleye Sander vitreus to assess potential effects of large-scale changes in Oneida Lake, New York, including the disruption of trophic interactions by double-crested cormorants Phalacrocorax auritus and invasive dreissenid mussels. We used the empirical long-term gill-net time series and a negative binomial linear mixed model to partition the variability in catches into spatial and coherent temporal variance components, hypothesizing that variance partitioning can help quantify spatiotemporal variability and determine whether variance structure differs before and after large-scale perturbations. We found that the mean catch and the total variability of catches decreased following perturbation but that not all sampling locations responded in a consistent manner. There was also evidence of some spatial homogenization concurrent with a restructuring of the relative productivity of individual sites. Specifically, offshore sites generally became more productive following the estimated break point in the gill-net time series. These results provide support for the idea that variance structure is responsive to large-scale perturbations; therefore, variance components have potential utility as statistical indicators of response to a changing environment more broadly. The modeling approach described herein is flexible and would be transferable to other systems and metrics. For example, variance partitioning could be used to examine responses to alternative management regimes, to compare variability across physiographic regions, and to describe differences among climate zones. Understanding how individual variance components respond to perturbation may yield finer-scale insights into ecological shifts than focusing on patterns in the mean responses or total variability alone.
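A stripped-down version of the variance-partitioning idea, as a numpy-only method-of-moments sketch on simulated log-scale catches (a simplified stand-in for the paper's negative binomial linear mixed model; all components below are synthetic):

import numpy as np

rng = np.random.default_rng(5)
n_sites, n_years = 12, 30
site = rng.normal(0.0, 0.8, n_sites)[:, None]     # spatial component
year = rng.normal(0.0, 0.5, n_years)[None, :]     # coherent temporal component
eps = rng.normal(0.0, 0.3, (n_sites, n_years))    # residual
log_catch = 2.0 + site + year + eps

# method-of-moments partition: the variance of site means (over years)
# estimates the spatial component plus a share of the residual; likewise
# for year means
site_means = log_catch.mean(axis=1)
year_means = log_catch.mean(axis=0)
resid = log_catch - site_means[:, None] - year_means[None, :] + log_catch.mean()
var_resid = resid.var(ddof=1)
var_site = site_means.var(ddof=1) - var_resid / n_years
var_year = year_means.var(ddof=1) - var_resid / n_sites
print(var_site, var_year, var_resid)   # should recover roughly 0.64, 0.25, 0.09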
Self-perception and value system as possible predictors of stress.
Sivberg, B
1998-03-01
This study was directed towards personality-related, value-system and sociodemographic variables of nursing students in a situation of change, using a longitudinal perspective to measure their improvement in principle-based moral judgement (Kohlberg; Rest) as possible predictors of stress. Three subgroups of students were included from the commencement of the first three-year academic nursing programme in 1993. The students came from the colleges of health at Jönköping, Växjö and Kristianstad in the south of Sweden. A principal component factor analysis (varimax) was performed using data obtained from the students in the spring of 1994 (n = 122) and in the spring of 1996 (n = 112). There were 23 variables, of which two were sociodemographic, eight represented self-image, six were self-values, six were interpersonal values, and one was principle-based moral judgement. The analysis of data from students in the first year of a three-year programme demonstrated eight factors that explained 68.8% of the variance. The most important factors were: (1) ascendant decisive disorderly sociability and nonpractical mindedness (18.1% of the variance); (2) original vigour person-related trust (13.3% of the variance); (3) orderly nonvigour achievement (8.9% of the variance); and (4) independent leadership (7.9% of the variance). (The term 'ascendancy' refers to self-confidence, and 'vigour' denotes responding well to challenges and coping with stress.) The analysis in 1996 demonstrated nine factors, of which the most important were: (1) ascendant original sociability with decisive nonconformist leadership (18.2% of the variance); (2) cautious person-related responsibility (12.6% of the variance); (3) orderly nonvariety achievement (8.4% of the variance); and (4) nonsupportive benevolent conformity (7.2% of the variance). A comparison of the two most prominent factors in 1994 and 1996 showed the process of change to be stronger for 18.2% and weaker for 30% of the variance. Principle-based moral judgement was measured in March 1994 and in May 1996, using the Swedish version of the Defining Issues Test and Index P. The result was that Index P for the students at Jönköping changed significantly (paired samples t-test) between 1994 and 1996 (p = 0.028), but that for the Växjö and Kristianstad students did not. The mean of Index P was 44.3% at Växjö, which was greater than the international average for college students (42.3%); it differed significantly in the spring of 1996 (independent samples t-test), but not in 1994, from the students at Jönköping (p = 0.032) and Kristianstad (p = 0.025). Index P was very heterogeneous for the group of students at Växjö, with the result that the paired samples t-test reached only a value close to significance. The conclusion of this study was that, if self-perception and value system are predictors of stress, only one-third of the students had improved their ability to cope with stress at the end of the programme. This article contains the author's application to the teaching process of reflecting on the structure of expectations in professional ethical relationships.
A Technique for Estimating the Surface Conductivity of Single Molecules
NASA Astrophysics Data System (ADS)
Bau, Haim; Arsenault, Mark; Zhao, Hui; Purohit, Prashant; Goldman, Yale
2007-11-01
When an AC electric field at 2 MHz was applied across a small gap between two metal electrodes elevated above a surface, rhodamine-phalloidin-labeled actin filaments were attracted to the gap and became suspended between the two electrodes. The variance of each filament's horizontal, lateral displacement was measured as a function of electric field intensity and position along the filament. The variance significantly decreased as the electric field intensity increased. Hypothesizing that the electric field induces electroosmotic flow around the filament that, in turn, induces drag on the filament, which appears as effective tension, we estimated the tension using a linear, Brownian dynamic model. Based on the tension, we estimated the filament's surface conductivity. Our experimental method provides a novel means for trapping and manipulating biological filaments and for probing the surface conductance and mechanical properties of single polymers.
Gao, Zan
2008-10-01
This study investigated the predictive strength of perceived competence and enjoyment on students' physical activity and cardiorespiratory fitness in physical education classes. Participants (N = 307; 101 in Grade 6, 96 in Grade 7, 110 in Grade 8; 149 boys, 158 girls) responded to questionnaires assessing perceived competence and enjoyment of physical education, then their cardiorespiratory fitness was assessed with the Progressive Aerobic Cardiovascular Endurance Run (PACER) test. Physical activity in one class was estimated via pedometers. Regression analyses showed that enjoyment (R² = 16.5%) and perceived competence (R² = 4.2%) together accounted for a significant but modest 20.7% of the variance in physical activity, and perceived competence was the only significant contributor to cardiorespiratory fitness performance (R² = 19.3%). This small amount of explained variance leaves roughly 80% unaccounted for. Some educational implications and areas for research are mentioned.
On the reliability of Shewhart-type control charts for multivariate process variability
NASA Astrophysics Data System (ADS)
Djauhari, Maman A.; Salleh, Rohayu Mohd; Zolkeply, Zunnaaim; Li, Lee Siaw
2017-05-01
We show that, in the current practice of multivariate process variability monitoring, the reliability of Shewhart-type control charts cannot be measured except when the sub-group size n tends to infinity. However, the requirement of large n is meaningless not only in manufacturing industry, where n is small, but also in service industry, where n is moderate. In this paper, we introduce a new definition of control limits for the two most appreciated control charts in the literature, i.e., the improved generalized variance chart (IGV-chart) and the vector variance chart (VV-chart). With the new definition of control limits, the reliability of the control charts can be determined. Some important properties of the new control limits are derived, and a computational technique for the probability of false alarm is presented.
Consequences of Base Time for Redundant Signals Experiments
Townsend, James T.; Honey, Christopher
2007-01-01
We report analytical and computational investigations into the effects of base time on the diagnosticity of two popular theoretical tools in the redundant signals literature: (1) the race model inequality and (2) the capacity coefficient. We show analytically and without distributional assumptions that the presence of base time decreases the sensitivity of both of these measures to model violations. We further use simulations to investigate the statistical power of model-selection tools based on the race model inequality, both with and without base time. Base time decreases statistical power, and biases the race model test toward conservatism. The magnitude of this biasing effect increases as we increase the proportion of total reaction time variance contributed by base time. We marshal empirical evidence to suggest that the proportion of reaction time variance contributed by base time is relatively small, and that the effects of base time on the diagnosticity of our model-selection tools are therefore likely to be minor. However, uncertainty remains concerning the magnitude and even the definition of base time. Experimentalists should continue to be alert to situations in which base time may contribute a large proportion of the total reaction time variance. PMID:18670591
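The dilution effect is straightforward to demonstrate. A synthetic sketch (illustrative distributions, not an empirical claim): build single-channel and coactivation-like redundant reaction times, evaluate the race model inequality F_red(t) ≤ F_A(t) + F_B(t), then add a base-time component to every condition and watch the maximum violation shrink.

import numpy as np

def ecdf(sample, t):
    # empirical CDF of sample evaluated at each point of t
    return (sample[:, None] <= t).mean(axis=0)

rng = np.random.default_rng(2)
n = 20_000
A = rng.exponential(0.3, n) + 0.2     # channel-A decision times (s)
B = rng.exponential(0.3, n) + 0.2     # channel-B decision times (s)
red = 0.6 * np.minimum(A, B)          # coactivation-like redundant condition

t = np.linspace(0.0, 1.0, 201)
for base_sd, label in ((0.0, "no base time"), (0.15, "base time added")):
    T0 = np.abs(rng.normal(0.0, base_sd, n)) if base_sd else 0.0
    viol = ecdf(red + T0, t) - (ecdf(A + T0, t) + ecdf(B + T0, t))
    print(label, viol.max())          # positive = race-inequality violation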
Gini estimation under infinite variance
NASA Astrophysics Data System (ADS)
Fontanari, Andrea; Taleb, Nassim Nicholas; Cirillo, Pasquale
2018-07-01
We study the problems related to the estimation of the Gini index in the presence of a fat-tailed data generating process, i.e., one in the stable distribution class with finite mean but infinite variance (i.e., with tail index α ∈ (1, 2)). We show that, in such a case, the Gini coefficient cannot be reliably estimated using conventional nonparametric methods, because of a downward bias that emerges under fat tails. This has important implications for the ongoing discussion about economic inequality. We start by discussing how the nonparametric estimator of the Gini index undergoes a phase transition in the symmetry structure of its asymptotic distribution, as the data distribution shifts from the domain of attraction of a light-tailed distribution to that of a fat-tailed one, especially in the case of infinite variance. We also show how the nonparametric Gini bias increases with lower values of α. We then prove that maximum likelihood estimation outperforms nonparametric methods, requiring a much smaller sample size to reach efficiency. Finally, for fat-tailed data, we provide a simple correction mechanism to the small sample bias of the nonparametric estimator based on the distance between the mode and the mean of its asymptotic distribution.
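A small illustration of the downward bias discussed here, comparing the standard nonparametric Gini estimator against the closed-form Gini of a Pareto distribution, G = 1/(2α - 1); sample sizes and seeds are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 1.5                        # tail index in (1, 2): finite mean, infinite variance
true_gini = 1.0 / (2 * alpha - 1)  # Gini of a Pareto(alpha) distribution = 0.5

def gini_nonparametric(x):
    """Mean-absolute-difference form of the nonparametric Gini estimator."""
    x = np.sort(x)
    n = len(x)
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (n * n * x.mean())

# Average the estimator over many small fat-tailed samples: the estimates
# sit below the true value, illustrating the downward bias.
est = [gini_nonparametric(rng.pareto(alpha, 200) + 1.0) for _ in range(500)]
print(f"true Gini {true_gini:.3f}, mean estimate {np.mean(est):.3f}")
```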
Urbán, Róbert; Szigeti, Réka; Kökönyei, Gyöngyi; Demetrovics, Zsolt
2014-06-01
The Rosenberg Self-Esteem Scale (RSES) is a widely used measure for assessing self-esteem, but its factor structure is debated. Our goals were to compare 10 alternative models for the RSES and to quantify and predict the method effects. The sample involves two waves (N = 2,513 ninth-grade and 2,370 tenth-grade students) from five waves of a school-based longitudinal study. The RSES was administered in each wave. The global self-esteem factor with two latent method factors yielded the best fit to the data. The global factor explained a large amount of the common variance (61% and 46%); however, a relatively large proportion of the common variance was attributed to the negative method factor (34% and 41%), and a small proportion of the common variance was explained by the positive method factor (5% and 13%). We conceptualized the method effect as a response style and found that being a girl and having a higher number of depressive symptoms were associated with both low self-esteem and a negative response style, as measured by the negative method factor. Our study supported the one global self-esteem construct and quantified the method effects in adolescents.
Lekking without a paradox in the buff-breasted sandpiper
Lanctot, Richard B.; Scribner, Kim T.; Kempenaers, Bart; Weatherhead, Patrick J.
1997-01-01
Females in lek-breeding species appear to copulate with a small subset of the available males. Such strong directional selection is predicted to decrease additive genetic variance in the preferred male traits, yet females continue to mate selectively, thus generating the lek paradox. In a study of buff-breasted sandpipers (Tryngites subruficollis), we combine detailed behavioral observations with paternity analyses using single-locus minisatellite DNA probes to provide the first evidence from a lek-breeding species that the variance in male reproductive success is much lower than expected. In 17 and 30 broods sampled in two consecutive years, a minimum of 20 and 39 males, respectively, sired offspring. This low variance in male reproductive success resulted from effective use of alternative reproductive tactics by males, females mating with solitary males off leks, and multiple mating by females. Thus, the results of this study suggest that sexual selection through female choice is weak in buff-breasted sandpipers. The behavior of other lek-breeding birds is sufficiently similar to that of buff-breasted sandpipers that paternity studies of those species should be conducted to determine whether leks generally are less paradoxical than they appear.
Urbán, Róbert; Szigeti, Réka; Kökönyei, Gyöngyi; Demetrovics, Zsolt
2013-01-01
The Rosenberg Self-Esteem Scale (RSES) is a widely used measure for assessing self-esteem, but its factor structure is debated. Our goals were to compare 10 alternative models for the RSES and to quantify and predict the method effects. The sample involves two waves (N = 2,513 ninth-grade and 2,370 tenth-grade students) from five waves of a school-based longitudinal study. The RSES was administered in each wave. The global self-esteem factor with two latent method factors yielded the best fit to the data. The global factor explained a large amount of the common variance (61% and 46%); however, a relatively large proportion of the common variance was attributed to the negative method factor (34% and 41%), and a small proportion of the common variance was explained by the positive method factor (5% and 13%). We conceptualized the method effect as a response style and found that being a girl and having a higher number of depressive symptoms were associated with both low self-esteem and a negative response style, as measured by the negative method factor. Our study supported the one global self-esteem construct and quantified the method effects in adolescents. PMID:24061931
NASA Technical Reports Server (NTRS)
Murphy, M. R.; Awe, C. A.
1986-01-01
Six professionally active, retired captains rated the coordination and decision-making performances of sixteen aircrews while viewing videotapes of a simulated commercial air transport operation. The scenario featured a required diversion and a probable minimum fuel situation. Seven-point Likert-type scales were used to rate variables based on a model of crew coordination and decision making. The variables were based on concepts of, for example, decision difficulty, efficiency, and outcome quality, and on leader-subordinate concepts such as person- and task-oriented leader behavior and competency motivation of subordinate crewmembers. Five front-end variables of the model served in turn as dependent variables in a hierarchical regression procedure. Decision efficiency, command reversal, and decision quality together explained 46% of the variance in safety performance. Decision efficiency and the quality of the captain's within-crew communications explained 60% of the variance in decision quality, an alternative substantive dependent variable to safety performance. The variances of decision efficiency, crew coordination, and command reversal were in turn explained (78%, 80%, and 60%, respectively) by small numbers of preceding independent variables. A principal-component varimax factor analysis supported the model structure suggested by the regression analyses.
How Many Environmental Impact Indicators Are Needed in the Evaluation of Product Life Cycles?
Steinmann, Zoran J N; Schipper, Aafke M; Hauck, Mara; Huijbregts, Mark A J
2016-04-05
Numerous indicators are currently available for environmental impact assessments, especially in the field of Life Cycle Impact Assessment (LCIA). Because decision-making on the basis of hundreds of indicators simultaneously is unfeasible, a nonredundant key set of indicators representative of the overall environmental impact is needed. We aimed to find such a nonredundant set of indicators based on their mutual correlations. We have used Principal Component Analysis (PCA) in combination with an optimization algorithm to find an optimal set of indicators out of 135 impact indicators calculated for 976 products from the ecoinvent database. The first four principal components covered 92% of the variance in product rankings, showing the potential for indicator reduction. The same amount of variance (92%) could be covered by a minimal set of six indicators, related to climate change, ozone depletion, the combined effects of acidification and eutrophication, terrestrial ecotoxicity, marine ecotoxicity, and land use. In comparison, four commonly used resource footprints (energy, water, land, materials) together accounted for 84% of the variance in product rankings. We conclude that the plethora of environmental indicators can be reduced to a small key set, representing the major part of the variation in environmental impacts between product life cycles.
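A sketch of the PCA step on a synthetic stand-in for the products-by-indicators matrix; the real analysis used 976 ecoinvent products, 135 indicators, and an optimization over indicator subsets, which is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# Stand-in for the 976 products x 135 impact indicators matrix: correlated
# synthetic scores driven by a handful of latent environmental dimensions.
latent = rng.normal(size=(976, 6))
scores = latent @ rng.normal(size=(6, 135)) + 0.3 * rng.normal(size=(976, 135))

X = StandardScaler().fit_transform(scores)
pca = PCA().fit(X)
cum = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(cum, 0.92)) + 1  # smallest k covering 92% of variance
print(f"components needed for 92% of the variance: {k}")
```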
Williams, Larry J; O'Boyle, Ernest H
2015-09-01
A persistent concern in the management and applied psychology literature is the effect of common method variance on observed relations among variables. Recent work (i.e., Richardson, Simmering, & Sturman, 2009) evaluated 3 analytical approaches to controlling for common method variance, including the confirmatory factor analysis (CFA) marker technique. Their findings indicated significant problems with this technique, especially with nonideal marker variables (those with theoretical relations with substantive variables). Based on their simulation results, Richardson et al. concluded that not correcting for method variance provides more accurate estimates than using the CFA marker technique. We reexamined the effects of using marker variables in a simulation study and found the degree of error in estimates of a substantive factor correlation was relatively small in most cases, and much smaller than the error associated with making no correction. Further, in instances in which the error was large, the correlations between the marker and substantive scales were higher than those found in organizational research with marker variables. We conclude that in most practical settings, the CFA marker technique yields parameter estimates close to their true values, and the criticisms made by Richardson et al. are overstated. (c) 2015 APA, all rights reserved.
Würschum, Tobias; Maurer, Hans Peter; Dreyer, Felix; Reif, Jochen C
2013-02-01
The loci detected by association mapping that are involved in the expression of important agronomic traits in crops often explain only a small proportion of the total genotypic variance. Here, 17 SNPs derived from 9 candidate genes from the triacylglycerol biosynthetic pathway were studied in an association analysis in a population of 685 diverse elite rapeseed inbred lines. The 685 lines were evaluated for oil content, as well as for glucosinolates, yield, and thousand-kernel weight, in field trials at 4 locations. We detected main effects for most of the studied genes, illustrating that genetic diversity for oil content can be exploited by the selection of favorable alleles. In addition to main effects, both intergenic and intragenic epistasis was detected, which contributes a considerable amount to the genotypic variance observed for oil content. The proportion of explained genotypic variance was doubled when epistasis was considered in addition to main effects. Therefore, a knowledge-based improvement of oil content in rapeseed should also take such favorable epistatic interactions into account. Our results suggest that the observed high contribution of epistasis may to some extent explain the missing heritability in genome-wide association studies.
A Study on Multi-Scale Background Error Covariances in 3D-Var Data Assimilation
NASA Astrophysics Data System (ADS)
Zhang, Xubin; Tan, Zhe-Min
2017-04-01
The construction of background error covariances is a key component of three-dimensional variational data assimilation. Background errors exist at different scales, and interactions among them occur in numerical weather prediction; however, the influence of these errors and their interactions cannot be represented in background error covariance statistics estimated by the leading methods. It is therefore necessary to construct background error covariances that account for multi-scale interactions among errors. Using the NMC method, this article first estimates the background error covariances at given model-resolution scales. Information on errors at scales larger and smaller than the given ones is then introduced, using different nesting techniques, to estimate the corresponding covariances. Comparison of the three background error covariance statistics, each influenced by error information at different scales, reveals that the background error variances increase, particularly at large scales and higher levels, when information on larger-scale errors is introduced through the lateral boundary condition provided by a lower-resolution model. On the other hand, the variances decrease at medium scales at the higher levels, while they show a slight improvement at lower levels in the nested domain, especially at medium and small scales, when information on smaller-scale errors is introduced by nesting a higher-resolution model. In addition, the introduction of information on larger- (smaller-) scale errors leads to larger (smaller) horizontal and vertical correlation scales of background errors. Considering the multivariate correlations, the Ekman coupling increases (decreases) when information on larger- (smaller-) scale errors is included, whereas the geostrophic coupling in the free atmosphere weakens in both situations. The three covariances obtained above are then used in a data assimilation and model forecast system, and analysis-forecast cycles for a period of 1 month are conducted. Comparison of the resulting analyses and forecasts shows that the variations in analysis increments with the different scale error information introduced are consistent with the variations in the variances and correlations of the background errors. In particular, the introduction of smaller-scale errors leads to larger-amplitude analysis increments for winds at medium scales at the heights of both the high- and low-level jets, and analysis increments for both temperature and humidity are greater at the corresponding scales at middle and upper levels. These analysis increments improve the intensity of the jet-convection system, which includes jets at different levels and the coupling between them associated with latent heat release; these changes in the analyses contribute to better forecasts for winds and temperature in the corresponding areas. When smaller-scale errors are included, analysis increments for humidity are significantly enhanced at large scales at lower levels, moistening the southern part of the analyses. This humidification helps correct the dry bias there and eventually improves the forecast skill for humidity. Moreover, the inclusion of larger- (smaller-) scale errors is beneficial for the forecast quality of heavy (light) precipitation at large (small) scales, due to the amplification (diminution) of intensity and area in precipitation forecasts, but tends to overestimate (underestimate) light (heavy) precipitation.
Wu, Rongli; Watanabe, Yoshiyuki; Satoh, Kazuhiko; Liao, Yen-Peng; Takahashi, Hiroto; Tanaka, Hisashi; Tomiyama, Noriyuki
2018-05-21
The aim of this study was to quantitatively compare the reduction in beam hardening artifact (BHA) and variance in computed tomography (CT) numbers of virtual monochromatic energy (VME) images obtained with 3 dual-energy computed tomography (DECT) systems at a given radiation dose. Five different iodine concentrations were scanned using dual-energy and single-energy (120 kVp) modes. The BHA and CT number variance were evaluated. For higher iodine concentrations, 40 and 80 mgI/mL, BHA on VME imaging was significantly decreased when the energy was higher than 50 keV (P = 0.003) and 60 keV (P < 0.001) for GE, higher than 80 keV (P < 0.001) and 70 keV (P = 0.002) for Siemens, and higher than 40 keV (P < 0.001) and 60 keV (P < 0.001) for Toshiba, compared with single-energy CT imaging. Virtual monochromatic energy imaging can decrease BHA and improve CT number accuracy in different dual-energy computed tomography systems, depending on energy levels and iodine concentrations.
Can Family Planning Service Statistics Be Used to Track Population-Level Outcomes?
Magnani, Robert J; Ross, John; Williamson, Jessica; Weinberger, Michelle
2018-03-21
The need for annual family planning program tracking data under the Family Planning 2020 (FP2020) initiative has contributed to renewed interest in family planning service statistics as a potential data source for annual estimates of the modern contraceptive prevalence rate (mCPR). We sought to assess (1) how well a set of commonly recorded data elements in routine service statistics systems could, with some fairly simple adjustments, track key population-level outcome indicators, and (2) whether some data elements performed better than others. We used data from 22 countries in Africa and Asia to analyze 3 data elements collected from service statistics: (1) number of contraceptive commodities distributed to clients, (2) number of family planning service visits, and (3) number of current contraceptive users. Data quality was assessed via analysis of mean square errors, using the United Nations Population Division World Contraceptive Use annual mCPR estimates as the "gold standard." We also examined the magnitude of several components of measurement error: (1) variance, (2) level bias, and (3) slope (or trend) bias. Our results indicate modest levels of tracking error for data on commodities to clients (7%) and service visits (10%), and somewhat higher error rates for data on current users (19%). Variance and slope bias were relatively small for all data elements. Level bias was by far the largest contributor to tracking error. Paired comparisons of data elements in countries that collected at least 2 of the 3 data elements indicated a modest advantage of data on commodities to clients. None of the data elements considered was sufficiently accurate to be used to produce reliable stand-alone annual estimates of mCPR. However, the relatively low levels of variance and slope bias indicate that trends calculated from these 3 data elements can be productively used in conjunction with the Family Planning Estimation Tool (FPET) currently used to produce annual mCPR tracking estimates for FP2020. © Magnani et al.
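A toy decomposition in the spirit of the paper's error analysis, splitting the difference between a service-statistics series and a gold standard into level bias, slope (trend) bias, and residual variance; the formulas and numbers below are illustrative, not the paper's.

```python
import numpy as np

def tracking_error_components(estimate, gold):
    """Illustrative decomposition of tracking error into level bias,
    slope (trend) bias, and residual variance, by fitting a linear trend
    to the estimate-minus-gold differences."""
    diff = np.asarray(estimate) - np.asarray(gold)
    t = np.arange(len(diff))
    slope, intercept = np.polyfit(t, diff, 1)
    resid = diff - (slope * t + intercept)
    return {"level_bias": diff.mean(),
            "slope_bias": slope,
            "variance": resid.var(ddof=1)}

# Hypothetical mCPR series (%): estimates run ~2 points high with a drift.
gold = np.array([20.0, 20.5, 21.1, 21.6, 22.0, 22.6])
est = gold + 2.0 + 0.15 * np.arange(6) + np.random.default_rng(5).normal(0, 0.2, 6)
print(tracking_error_components(est, gold))
```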
Andridge, Rebecca R.
2011-01-01
In cluster randomized trials (CRTs), identifiable clusters rather than individuals are randomized to study groups. Resulting data often consist of a small number of clusters with correlated observations within a treatment group. Missing data often present a problem in the analysis of such trials, and multiple imputation (MI) has been used to create complete data sets, enabling subsequent analysis with well-established analysis methods for CRTs. We discuss strategies for accounting for clustering when multiply imputing a missing continuous outcome, focusing on estimation of the variance of group means as used in an adjusted t-test or ANOVA. These analysis procedures are congenial to (can be derived from) a mixed effects imputation model; however, this imputation procedure is not yet available in commercial statistical software. An alternative approach that is readily available and has been used in recent studies is to include fixed effects for cluster, but the impact of using this convenient method has not been studied. We show that under this imputation model the MI variance estimator is positively biased and that smaller ICCs lead to larger overestimation of the MI variance. Analytical expressions for the bias of the variance estimator are derived in the case of data missing completely at random (MCAR), and cases in which data are missing at random (MAR) are illustrated through simulation. Finally, various imputation methods are applied to data from the Detroit Middle School Asthma Project, a recent school-based CRT, and differences in inference are compared. PMID:21259309
A note on the kappa statistic for clustered dichotomous data.
Zhou, Ming; Yang, Zhao
2014-06-30
The kappa statistic is widely used to assess the agreement between two raters. Motivated by a simulation-based cluster bootstrap method to calculate the variance of the kappa statistic for clustered physician-patient dichotomous data, we investigate its special correlation structure and develop a new, simple, and efficient data generation algorithm. For clustered physician-patient dichotomous data, based on the delta method and its special covariance structure, we propose a semi-parametric variance estimator for the kappa statistic. An extensive Monte Carlo simulation study is performed to evaluate the performance of the new proposal and five existing methods with respect to the empirical coverage probability, root-mean-square error, and average width of the 95% confidence interval for the kappa statistic. The variance estimator ignoring the dependence within a cluster is generally inappropriate, and the variance estimators from the new proposal, bootstrap-based methods, and the sampling-based delta method perform reasonably well for at least a moderately large number of clusters (e.g., K ⩾ 50). The new proposal and the sampling-based delta method provide convenient tools for efficient computation and non-simulation-based alternatives to the existing bootstrap-based methods. Moreover, the new proposal has acceptable performance even when the number of clusters is as small as K = 25. To illustrate the practical application of all the methods, one psychiatric research data set and two simulated clustered physician-patient dichotomous data sets are analyzed. Copyright © 2014 John Wiley & Sons, Ltd.
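For reference, a minimal computation of the kappa statistic itself on synthetic clustered ratings; the paper's semi-parametric variance estimator is not reproduced here, and the cluster sizes are invented.

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' dichotomous ratings (0/1 arrays)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)                   # observed agreement
    p1, p2 = r1.mean(), r2.mean()
    pe = p1 * p2 + (1 - p1) * (1 - p2)       # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical clustered data: 25 physicians (clusters) x 8 patients each.
rng = np.random.default_rng(6)
r1 = rng.integers(0, 2, size=(25, 8))
r2 = np.where(rng.random((25, 8)) < 0.8, r1, 1 - r1)  # ~80% raw agreement
print(f"kappa = {cohens_kappa(r1.ravel(), r2.ravel()):.3f}")
# A naive variance estimator would ignore the within-physician correlation;
# the paper's point is that the cluster structure must enter the variance.
```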
Qu, Long; Guennel, Tobias; Marshall, Scott L
2013-12-01
Following the rapid development of genome-scale genotyping technologies, genetic association mapping has become a popular tool to detect genomic regions responsible for certain (disease) phenotypes, especially in early-phase pharmacogenomic studies with limited sample size. In response to such applications, a good association test needs to be (1) applicable to a wide range of possible genetic models, including, but not limited to, the presence of gene-by-environment or gene-by-gene interactions and non-linearity of a group of marker effects, (2) accurate in small samples, fast to compute on the genomic scale, and amenable to large scale multiple testing corrections, and (3) reasonably powerful to locate causal genomic regions. The kernel machine method represented in linear mixed models provides a viable solution by transforming the problem into testing the nullity of variance components. In this study, we consider score-based tests by choosing a statistic linear in the score function. When the model under the null hypothesis has only one error variance parameter, our test is exact in finite samples. When the null model has more than one variance parameter, we develop a new moment-based approximation that performs well in simulations. Through simulations and analysis of real data, we demonstrate that the new test possesses most of the aforementioned characteristics, especially when compared to existing quadratic score tests or restricted likelihood ratio tests. © 2013, The International Biometric Society.
Impact of multicollinearity on small sample hydrologic regression models
NASA Astrophysics Data System (ADS)
Kroll, Charles N.; Song, Peter
2013-06-01
Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how to best address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely in model predictions, it is recommended that OLS be employed since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
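A compact sketch of the techniques being compared, applied to two deliberately collinear regressors: a VIF check, then OLS, PCR, and PLS fits. The data are synthetic, not the study's Monte Carlo design.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
n = 30                                      # small hydrologic-style sample
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.1 * rng.normal(size=n)   # highly correlated regressor
X = np.column_stack([x1, x2])
y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)

# Variance inflation factor of x1: 1 / (1 - R^2 of x1 regressed on x2).
r2 = LinearRegression().fit(X[:, [1]], X[:, 0]).score(X[:, [1]], X[:, 0])
print(f"VIF(x1) = {1 / (1 - r2):.1f}")      # >10 signals severe collinearity

ols = LinearRegression().fit(X, y)
pcr = make_pipeline(PCA(n_components=1), LinearRegression()).fit(X, y)
pls = PLSRegression(n_components=1).fit(X, y)
print("OLS coefs (unstable under collinearity):", np.round(ols.coef_, 2))
for name, model in [("OLS", ols), ("PCR", pcr), ("PLS", pls)]:
    print(f"{name}: in-sample R2 = {model.score(X, y):.3f}")
```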
Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty
Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.
2016-09-12
Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.
Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.
Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.
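The DSA index function itself is not reproduced here, but the classic variance-based (first-order Sobol) indices it generalizes can be sketched with a standard pick-freeze Monte Carlo estimator; the test function and sample sizes are arbitrary.

```python
import numpy as np

def sobol_first_order(f, d, n=100_000, seed=8):
    """Monte Carlo first-order Sobol indices via the pick-freeze scheme:
    S_i = Var(E[Y | X_i]) / Var(Y), with independent uniform(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    yA, yB = f(A), f(B)
    var_y = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # freeze all inputs except X_i
        S[i] = np.mean(yB * (f(ABi) - yA)) / var_y
    return S

# Test function with known unequal sensitivities and an interaction term.
f = lambda X: X[:, 0] + 2.0 * X[:, 1] + 0.5 * X[:, 2] * X[:, 1]
print(np.round(sobol_first_order(f, 3), 3))
```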
ERIC Educational Resources Information Center
Salvucci, Sameena; And Others
This technical report provides the results of a study on the calculation and use of generalized variance functions (GVFs) and design effects for the 1990-91 Schools and Staffing Survey (SASS). The SASS is a periodic integrated system of sample surveys conducted by the National Center for Education Statistics (NCES) that produces sampling variances…
Dimensionality and noise in energy selective x-ray imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alvarez, Robert E.
Purpose: To develop and test a method to quantify the effect of dimensionality on the noise in energy selective x-ray imaging. Methods: The Cramér-Rao lower bound (CRLB), a universal lower limit of the covariance of any unbiased estimator, is used to quantify the noise. It is shown that increasing dimensionality always increases, or at best leaves the same, the variance. An analytic formula for the increase in variance in an energy selective x-ray system is derived. The formula is used to gain insight into the dependence of the increase in variance on the properties of the additional basis functions, the measurement noise covariance, and the source spectrum. The formula is also used with computer simulations to quantify the dependence of the additional variance on these factors. Simulated images of an object with three materials are used to demonstrate the trade-off of increased information with dimensionality and noise. The images are computed from energy selective data with a maximum likelihood estimator. Results: The increase in variance depends most importantly on the dimension and on the properties of the additional basis functions. With the attenuation coefficients of cortical bone, soft tissue, and adipose tissue as the basis functions, the increase in variance of the bone component from two to three dimensions is 1.4 × 10^3. With the soft tissue component, it is 2.7 × 10^4. If the attenuation coefficient of a high atomic number contrast agent is used as the third basis function, there is only a slight increase in the variance from two to three basis functions, 1.03 and 7.4 for the bone and soft tissue components, respectively. The changes in spectrum shape with beam hardening also have a substantial effect. They increase the variance by a factor of approximately 200 for the bone component and 220 for the soft tissue component as the soft tissue object thickness increases from 1 to 30 cm. Decreasing the energy resolution of the detectors increases the variance of the bone component markedly with three dimension processing, approximately a factor of 25 as the resolution decreases from 100 to 3 bins. The increase with two dimension processing for adipose tissue is a factor of two and with the contrast agent as the third material for two or three dimensions is also a factor of two for both components. The simulated images show that a maximum likelihood estimator can be used to process energy selective x-ray data to produce images with noise close to the CRLB. Conclusions: The method presented can be used to compute the effects of the object attenuation coefficients and the x-ray system properties on the relationship of dimensionality and noise in energy selective x-ray imaging systems.
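A small numeric illustration of the core CRLB argument, assuming a simplified linear Gaussian measurement model rather than the paper's x-ray physics: adding a nearly collinear third basis function inflates the lower bound on the component variances.

```python
import numpy as np

def crlb(A, noise_cov):
    """Cramer-Rao lower bound for an unbiased estimator in the linear
    Gaussian model y = A x + e: cov(x_hat) >= (A^T C^-1 A)^-1."""
    fisher = A.T @ np.linalg.inv(noise_cov) @ A
    return np.linalg.inv(fisher)

rng = np.random.default_rng(9)
m = 5                                   # number of energy measurements
C = np.diag(rng.uniform(0.5, 1.5, m))   # measurement noise covariance
A2 = rng.normal(size=(m, 2))            # two basis functions
b3 = A2 @ [0.7, 0.3] + 0.05 * rng.normal(size=m)  # nearly collinear third

var2 = np.diag(crlb(A2, C))
var3 = np.diag(crlb(np.column_stack([A2, b3]), C))[:2]
print("2-basis variances:", np.round(var2, 3))
print("3-basis variances:", np.round(var3, 3))  # never smaller than var2
```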
Water movement through an experimental soil liner
Krapac, I.G.; Cartwright, K.; Panno, S.V.; Hensel, B.R.; Rehfeldt, K.R.; Herzog, B.L.
1991-01-01
A field-scale soil liner was constructed to test whether compacted soil barriers in cover and liner systems could be built to meet the U.S. EPA saturated hydraulic conductivity requirement (≤1 x 10-7 cm s-1). The 8 x 15 x 0.9 m liner was constructed in 15 cm compacted lifts using a 20,037 kg pad-foot compactor and standard engineering practices. Water infiltration into the liner has been monitored for one year. Monitoring will continue until water breakthrough at the base of the liner occurs. Estimated saturated hydraulic conductivities were 2.5 x 10-9, 4.0 x 10-8, and 5.0 x 10-8 cm s-1 based on measurements of water infiltration into the liner by large- and small-ring infiltrometers and a water balance analysis, respectively. Also investigated in this research were the variability of the liner's hydraulic properties and estimates of the transit times for water and tracers. Small variances exhibited by small-ring flux data suggested that the liner was homogeneous with respect to infiltration fluxes. The predictions of water and tracer breakthrough at the base of the liner ranged from 2.4-12.6 y, depending on the method of calculation and assumptions made. The liner appeared to be saturated to a depth between 18 and 33 cm at the end of the first year of monitoring. Transit time calculations cannot be verified yet, since breakthrough has not occurred. The work conducted so far indicates that compacted soil barriers can be constructed to meet the saturated hydraulic conductivity requirement established by the U.S. EPA.
Empirical Bayes estimation of undercount in the decennial census.
Cressie, N
1989-12-01
Empirical Bayes methods are used to estimate the extent of the undercount at the local level in the 1980 U.S. census. "Grouping of like subareas from areas such as states, counties, and so on into strata is a useful way of reducing the variance of undercount estimators. By modeling the subareas within a stratum to have a common mean and variances inversely proportional to their census counts, and by taking into account sampling of the areas (e.g., by dual-system estimation), empirical Bayes estimators that compromise between the (weighted) stratum average and the sample value can be constructed. The amount of compromise is shown to depend on the relative importance of stratum variance to sampling variance. These estimators are evaluated at the state level (51 states, including Washington, D.C.) and stratified on race/ethnicity (3 strata) using data from the 1980 postenumeration survey (PEP 3-8, for the noninstitutional population)." excerpt
Thermal noise variance of a receive radiofrequency coil as a respiratory motion sensor.
Andreychenko, A; Raaijmakers, A J E; Sbrizzi, A; Crijns, S P M; Lagendijk, J J W; Luijten, P R; van den Berg, C A T
2017-01-01
Development of a passive respiratory motion sensor based on the noise variance of the receive coil array. Respiratory motion alters the body resistance. The noise variance of an RF coil depends on the body resistance and, thus, is also modulated by respiration. For the noise variance monitoring, noise samples were acquired without and with MR signal excitation on clinical 1.5/3 T MR scanners. The performance of the noise sensor was compared with the respiratory bellow and with the diaphragm displacement visible on MR images. Several breathing patterns were tested. The noise variance demonstrated a periodic, temporal modulation that was synchronized with the respiratory bellow signal. The modulation depth of the noise variance resulting from respiration varied between the channels of the array and depended on the channel's location with respect to the body. The noise sensor combined with MR acquisition was able to detect the respiratory motion for every k-space read-out line. Within clinical MR systems, the respiratory motion can be detected by the noise in the receive array. The noise sensor does not require careful positioning (unlike the bellow), any additional hardware, or MR acquisition. Magn Reson Med 77:221-228, 2017. © 2016 Wiley Periodicals, Inc.
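A minimal sketch of the sensing principle: track the variance of receive-channel noise samples in short windows and watch it oscillate at the breathing frequency. All signal parameters below are invented for illustration.

```python
import numpy as np

def sliding_noise_variance(noise, window):
    """Variance of receive-coil noise samples in consecutive windows;
    the slow modulation of this trace follows respiration."""
    n = len(noise) // window
    return noise[: n * window].reshape(n, window).var(axis=1)

# Synthetic noise stream: respiration modulates the noise amplitude by a
# few percent around a 0.25 Hz breathing cycle (hypothetical numbers).
fs, dur = 1000, 60                       # samples per second, seconds
t = np.arange(fs * dur) / fs
sigma = 1.0 + 0.03 * np.sin(2 * np.pi * 0.25 * t)
noise = np.random.default_rng(10).normal(0.0, sigma)
trace = sliding_noise_variance(noise, window=fs // 4)  # 4 estimates/second
print(f"variance trace: min {trace.min():.3f}, max {trace.max():.3f}")
```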
Technical and biological variance structure in mRNA-Seq data: life in the real world
2012-01-01
Background mRNA expression data from next generation sequencing platforms is obtained in the form of counts per gene or exon. Counts have classically been assumed to follow a Poisson distribution in which the variance is equal to the mean. The Negative Binomial distribution which allows for over-dispersion, i.e., for the variance to be greater than the mean, is commonly used to model count data as well. Results In mRNA-Seq data from 25 subjects, we found technical variation to generally follow a Poisson distribution as has been reported previously and biological variability was over-dispersed relative to the Poisson model. The mean-variance relationship across all genes was quadratic, in keeping with a Negative Binomial (NB) distribution. Over-dispersed Poisson and NB distributional assumptions demonstrated marked improvements in goodness-of-fit (GOF) over the standard Poisson model assumptions, but with evidence of over-fitting in some genes. Modeling of experimental effects improved GOF for high variance genes but increased the over-fitting problem. Conclusions These conclusions will guide development of analytical strategies for accurate modeling of variance structure in these data and sample size determination which in turn will aid in the identification of true biological signals that inform our understanding of biological systems. PMID:22769017
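A sketch of the quadratic mean-variance relationship under a Negative Binomial model, variance = mu + phi*mu^2, with a moment-based recovery of the dispersion phi from synthetic counts; this is illustrative, not the paper's fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(11)
# Synthetic counts for 2000 genes x 25 subjects from a gamma-Poisson
# (Negative Binomial) model, in which variance = mu + phi * mu^2.
phi = 0.15                                   # biological over-dispersion
mu = rng.lognormal(mean=4.0, sigma=1.0, size=2000)
lam = rng.gamma(shape=1.0 / phi, scale=phi * mu[:, None], size=(2000, 25))
counts = rng.poisson(lam)

m = counts.mean(axis=1)
v = counts.var(axis=1, ddof=1)
# Regress (v - m) on m^2 through the origin to recover the dispersion.
phi_hat = np.sum((v - m) * m**2) / np.sum(m**4)
print(f"true phi = {phi}, estimated phi = {phi_hat:.3f}")
```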
The evaluation of phasemeter prototype performance for the space gravitational waves detection.
Liu, He-Shan; Dong, Yu-Hui; Li, Yu-Qiong; Luo, Zi-Ren; Jin, Gang
2014-02-01
Heterodyne laser interferometry is considered the most promising readout scheme for future space gravitational wave detection missions, in which the gravitational wave signals appear as small phase variations within the heterodyne beat note. This makes the phasemeter, which extracts the phase information from the beat note, the key device in this system. In this paper, a prototype phasemeter based on digital phase-locked loop technology is developed, and the major noise sources that may contribute to the noise spectral density are analyzed in detail. Two experiments are carried out to evaluate the performance of the phasemeter prototype. The results show that a sensitivity of 2π μrad/√Hz is achieved in the frequency range of 0.04 Hz-10 Hz. Due to the effect of thermal drift, the noise increases markedly as the frequency decreases toward 0.1 mHz.
The evaluation of phasemeter prototype performance for the space gravitational waves detection
NASA Astrophysics Data System (ADS)
Liu, He-Shan; Dong, Yu-Hui; Li, Yu-Qiong; Luo, Zi-Ren; Jin, Gang
2014-02-01
Heterodyne laser interferometry is considered the most promising readout scheme for future space gravitational wave detection missions, in which the gravitational wave signals appear as small phase variations within the heterodyne beat note. This makes the phasemeter, which extracts the phase information from the beat note, the key device in this system. In this paper, a prototype phasemeter based on digital phase-locked loop technology is developed, and the major noise sources that may contribute to the noise spectral density are analyzed in detail. Two experiments are carried out to evaluate the performance of the phasemeter prototype. The results show that a sensitivity of 2π μrad/√Hz is achieved in the frequency range of 0.04 Hz-10 Hz. Due to the effect of thermal drift, the noise increases markedly as the frequency decreases toward 0.1 mHz.
Harris, J. I.; Strom, Thad Q.; Ferrier-Auerbach, Amanda G.; Kaler, Matthew E.; Erbes, Christopher R.
2017-01-01
For Veterans managing PTSD symptoms, returning to vocational functioning is often challenging; identifying modifiable variables that can contribute to positive vocational adjustment is critical to improved vocational rehabilitation services. Workplace social support has proven to be important in vocational adjustment in both general population and vocational rehabilitation samples, but this area of inquiry has received little attention among Veterans with PTSD symptoms. In this small correlational study, employed Veterans (N = 63) presenting for outpatient PTSD treatment at a VA Health Care System completed surveys assessing demographic variables, PTSD symptoms, workplace social support, and job satisfaction. Workplace social support contributed to the prediction of job satisfaction. It is of note that workplace social support predicted a larger proportion of the variance in employment satisfaction than PTSD symptoms. Further research on workplace social support as a vocational rehabilitation resource for Veterans with PTSD is indicated. PMID:28777812
Harris, J I; Strom, Thad Q; Ferrier-Auerbach, Amanda G; Kaler, Matthew E; Hansen, Lucas P; Erbes, Christopher R
2017-01-01
For Veterans managing PTSD symptoms, returning to vocational functioning is often challenging; identifying modifiable variables that can contribute to positive vocational adjustment is critical to improved vocational rehabilitation services. Workplace social support has proven to be important in vocational adjustment in both general population and vocational rehabilitation samples, but this area of inquiry has received little attention among Veterans with PTSD symptoms. In this small correlational study, employed Veterans (N = 63) presenting for outpatient PTSD treatment at a VA Health Care System completed surveys assessing demographic variables, PTSD symptoms, workplace social support, and job satisfaction. Workplace social support contributed to the prediction of job satisfaction. It is of note that workplace social support predicted a larger proportion of the variance in employment satisfaction than PTSD symptoms. Further research on workplace social support as a vocational rehabilitation resource for Veterans with PTSD is indicated.
Signal, noise, and variation in neural and sensory-motor latency
Lee, Joonyeol; Joshua, Mati; Medina, Javier F.; Lisberger, Stephen G.
2016-01-01
Analysis of the neural code for sensory-motor latency in smooth pursuit eye movements reveals general principles of neural variation and the specific origin of motor latency. The trial-by-trial variation in neural latency in MT comprises a shared component, expressed as neuron-neuron latency correlations, and an independent component that is local to each neuron. The independent component arises heavily from fluctuations in the underlying probability of spiking, with an unexpectedly small contribution from the stochastic nature of spiking itself. The shared component causes the latency of single neuron responses in MT to be weakly predictive of the behavioral latency of pursuit. Neural latency deeper in the motor system is more strongly predictive of behavioral latency. A model reproduces both the variance of behavioral latency and the neuron-behavior latency correlations in MT if it includes realistic neural latency variation, neuron-neuron latency correlations in MT, and noisy gain control downstream from MT. PMID:26971946
Statistical analysis of effective singular values in matrix rank determination
NASA Technical Reports Server (NTRS)
Konstantinides, Konstantinos; Yao, Kung
1988-01-01
A major problem in using SVD (singular-value decomposition) as a tool for determining the effective rank of a perturbed matrix is that of distinguishing between significantly small and significantly large singular values. To this end, confidence regions are derived for the perturbed singular values of matrices with noisy observation data. The analysis is based on the theories of perturbations of singular values and statistical significance testing. Threshold bounds for perturbation due to finite-precision and i.i.d. random models are evaluated. In random models, the threshold bounds depend on the dimension of the matrix, the noise variance, and a predefined statistical level of significance. The results are applied to the problem of determining the effective order of a linear autoregressive system from the approximate rank of a sample autocorrelation matrix. Various numerical examples illustrating the usefulness of these bounds and comparisons to other previously known approaches are given.
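A simplified version of the thresholding idea, assuming i.i.d. Gaussian noise: compare singular values against a bound proportional to sqrt(max(m, n)) times the noise standard deviation. The constant factor here is a placeholder standing in for the paper's significance-level-dependent bound.

```python
import numpy as np

def effective_rank(A, noise_std, prob_factor=2.0):
    """Count singular values above a noise-dependent threshold. For an
    m x n matrix with i.i.d. noise of standard deviation s, perturbations
    of the singular values are on the order of sqrt(max(m, n)) * s; the
    prob_factor plays the role of a chosen significance level."""
    s = np.linalg.svd(A, compute_uv=False)
    threshold = prob_factor * np.sqrt(max(A.shape)) * noise_std
    return int(np.sum(s > threshold)), s, threshold

rng = np.random.default_rng(12)
true = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 30))  # rank-3 matrix
noisy = true + 0.05 * rng.normal(size=true.shape)
r, s, thr = effective_rank(noisy, noise_std=0.05)
print(f"threshold {thr:.3f}; effective rank {r}; "
      f"top singular values {np.round(s[:5], 2)}")
```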
Machine Learning Estimates of Natural Product Conformational Energies
Rupp, Matthias; Bauer, Matthias R.; Wilcken, Rainer; Lange, Andreas; Reutlinger, Michael; Boeckler, Frank M.; Schneider, Gisbert
2014-01-01
Machine learning has been used for estimation of potential energy surfaces to speed up molecular dynamics simulations of small systems. We demonstrate that this approach is feasible for significantly larger, structurally complex molecules, taking the natural product Archazolid A, a potent inhibitor of vacuolar-type ATPase, from the myxobacterium Archangium gephyra as an example. Our model estimates energies of new conformations by exploiting information from previous calculations via Gaussian process regression. Predictive variance is used to assess whether a conformation is in the interpolation region, allowing a controlled trade-off between prediction accuracy and computational speed-up. For energies of relaxed conformations at the density functional level of theory (implicit solvent, DFT/BLYP-disp3/def2-TZVP), mean absolute errors of less than 1 kcal/mol were achieved. The study demonstrates that predictive machine learning models can be developed for structurally complex, pharmaceutically relevant compounds, potentially enabling considerable speed-ups in simulations of larger molecular structures. PMID:24453952
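A minimal sketch of the underlying pattern with scikit-learn's Gaussian process regressor: the predictive standard deviation flags queries outside the interpolation region, where one would fall back to an explicit calculation. This uses a 1-D toy landscape, not molecular descriptors.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(13)
# Stand-in for conformational descriptors -> computed energies: a synthetic
# 1-D "conformation coordinate" with a smooth energy landscape plus noise.
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X).ravel() + 0.05 * rng.normal(size=40)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, y)

Xq = np.array([[0.5], [5.0]])    # inside vs. outside the training region
mean, std = gpr.predict(Xq, return_std=True)
# High predictive std signals extrapolation: trust the surrogate less there.
for q, mval, s in zip(Xq.ravel(), mean, std):
    print(f"x = {q:4.1f}: predicted {mval:+.2f} +/- {s:.2f}")
```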
Bueeler, Michael; Mrochen, Michael
2005-01-01
The aim of this theoretical work was to investigate the robustness of scanning spot laser treatments with different laser spot diameters and peak ablation depths in case of incomplete compensation of eye movements due to eye-tracker latency. Scanning spot corrections of 3rd to 5th Zernike order wavefront errors were numerically simulated. Measured eye-movement data were used to calculate the positioning error of each laser shot, assuming eye-tracker latencies of 0, 5, 30, and 100 ms, and for the case of no eye tracking. The single spot ablation depth ranged from 0.25 to 1.0 microm and the spot diameter from 250 to 1000 microm. The quality of the ablation was rated by the postoperative surface variance and the Strehl intensity ratio, which was calculated after a low-pass filter was applied to simulate epithelial surface smoothing. Treatments performed with nearly ideal eye tracking (latency approximately 0) provide the best results with a small laser spot (250 microm) and a small ablation depth (0.25 microm). However, combinations of a large spot diameter (1000 microm) and a small ablation depth per pulse (0.25 microm) yield better results for latencies above a certain threshold to be determined specifically. Treatments performed with tracker latencies on the order of 100 ms yield similar results to treatments done completely without eye-movement compensation. CONCLUSIONS: Reduction of spot diameter was shown to make the correction more susceptible to eye-movement-induced error. A smaller spot size is only beneficial when eye movement is neutralized by a tracking system with a latency <5 ms.
Vanderick, S; Harris, B L; Pryce, J E; Gengler, N
2009-03-01
In New Zealand, a large proportion of cows are currently crossbreds, mostly Holstein-Friesian (HF) x Jersey (JE). The genetic evaluation system for milk yields currently assumes the same additive genetic effects for all breeds. The objective was to model different additive effects according to parental breeds, to obtain first estimates of correlations among breed-specific effects, and to study the usefulness of this type of random regression test-day model. Estimates of (co)variance components for purebred HF and JE cattle in purebred herds were computed by using a single-breed model. This analysis showed differences between the 2 breeds, with greater variability in the HF breed. (Co)variance components for purebred HF and JE and crossbred HF x JE cattle were then estimated by using a complete multibreed model in which computations of complete across-breed (co)variances were simplified by correlating only eigenvectors for HF and JE random regressions of the same order as obtained from the single-breed analysis. Parameter estimates differed more strongly than expected between the single-breed and multibreed analyses, especially for JE. This could be due to differences between animals and management in purebred and non-purebred herds. In addition, the model used only partially accounted for heterosis. The multibreed analysis showed additive genetic differences between the HF and JE breeds, expressed as genetic correlations of additive effects in both breeds, especially in the linear and quadratic Legendre polynomials (0.807 and 0.604, respectively). The differences were small for overall milk production (0.926). Results showed that permanent environmental lactation curves were highly correlated across breeds; however, intraherd lactation curves were also affected by the breed-environment interaction. This result may indicate the existence of breed-specific competition effects that vary through the different lactation stages. In conclusion, a multibreed model similar to the one presented could optimally use the environmental and genetic parameters and provide breed-dependent additive breeding values. This model could also be a useful tool to evaluate crossbred dairy cattle populations like those in New Zealand. However, a routine evaluation would still require the development of an improved methodology. It would also be computationally very challenging because of the simultaneous presence of a large number of breeds.
Hypothesis exploration with visualization of variance
2014-01-01
Background The Consortium for Neuropsychiatric Phenomics (CNP) at UCLA was an investigation into the biological bases of traits such as memory and response inhibition phenotypes—to explore whether they are linked to syndromes including ADHD, Bipolar disorder, and Schizophrenia. An aim of the consortium was to move from traditional categorical approaches for psychiatric syndromes towards more quantitative approaches based on large-scale analysis of the space of human variation. It represented an application of phenomics—wide-scale, systematic study of phenotypes—to neuropsychiatry research. Results This paper reports on a system for exploration of hypotheses in data obtained from the LA2K, LA3C, and LA5C studies in CNP. ViVA is a system for exploratory data analysis using novel mathematical models and methods for visualization of variance. An example of these methods is called VISOVA, a combination of visualization and analysis of variance, with the flavor of exploration associated with ANOVA in biomedical hypothesis generation. It permits visual identification of phenotype profiles—patterns of values across phenotypes—that characterize groups. Visualization enables screening and refinement of hypotheses about the variance structure of sets of phenotypes. Conclusions The ViVA system was designed for exploration of neuropsychiatric hypotheses by interdisciplinary teams. Automated visualization in ViVA supports 'natural selection' on a pool of hypotheses and permits deeper understanding of the statistical architecture of the data. Large-scale perspective of this kind could lead to better neuropsychiatric diagnostics. PMID:25097666
Herbison, N; Cobb, S; Gregson, R; Ash, I; Eastgate, R; Purdy, J; Hepburn, T; MacKeith, D; Foss, A
2013-09-01
A computer-based interactive binocular treatment system (I-BiT) for amblyopia has been developed, which utilises commercially available 3D 'shutter glasses'. The purpose of this pilot study was to report the effect of treatment on visual acuity (VA) in children with amblyopia. Thirty minutes of I-BiT treatment was given once weekly for 6 weeks. Treatment sessions consisted of playing a computer game and watching a DVD through the I-BiT system. VA was assessed at baseline, mid-treatment, at the end of treatment, and at 4 weeks post treatment. Standard summary statistics and an exploratory one-way analysis of variance (ANOVA) were performed. Ten patients were enrolled with strabismic, anisometropic, or mixed amblyopia. The mean age was 5.4 years. Nine patients (90%) completed the full course of I-BiT treatment, with a mean improvement of 0.18 LogMAR units (SD = 0.143). Six of the nine patients (67%) who completed the treatment showed a clinically significant improvement of 0.125 LogMAR units or more at follow-up. The exploratory one-way ANOVA showed an overall effect over time (F = 7.95, P = 0.01). No adverse effects were reported. This small, uncontrolled study has shown VA gains with 3 hours of I-BiT treatment. Although it is recognised that this pilot study had significant limitations (it was unblinded, uncontrolled, and too small to permit formal statistical analysis), these results suggest that further investigation of I-BiT treatment is worthwhile.
Blair, Stephanie; Duthie, Grant; Robertson, Sam; Hopkins, William; Ball, Kevin
2018-05-17
Wearable inertial measurement systems (IMS) allow for three-dimensional analysis of human movements in a sport-specific setting. This study examined the concurrent validity of an IMS (Xsens MVN system) for measuring lower extremity and pelvis kinematics in comparison to a Vicon motion analysis system (MAS) during kicking. Thirty footballers from Australian football (n = 10), soccer (n = 10), and rugby league and rugby union (n = 10) clubs completed 20 kicks across four conditions. Concurrent validity was assessed using a linear mixed-modelling approach, which allowed the partition of between- and within-subject variance from the device measurement error. Results were expressed in raw and standardised units for assessments of differences in means and measurement error, and interpreted via non-clinical magnitude-based inferences. Trivial to small differences were found in linear velocities (foot and pelvis), angular velocities (knee, shank and thigh), sagittal joint (knee and hip) and segment angle (shank and pelvis) means (mean difference: 0.2-5.8%) between the IMS and MAS in Australian football, soccer and the rugby codes. Trivial to small measurement errors (from 0.1 to 5.8%) were found between the IMS and MAS in all kinematic parameters. The IMS demonstrated acceptable levels of concurrent validity compared to the MAS when measuring kicking biomechanics across the four football codes. Wearable IMS offer various benefits over MAS, such as out-of-laboratory testing, a larger measurement range and quick data output, to help improve the ecological validity of biomechanical testing and the timing of feedback. The results advocate the use of IMS to quantify the biomechanics of high-velocity movements in sport-specific settings. Copyright © 2018 Elsevier Ltd. All rights reserved.
Measuring kinetics of complex single ion channel data using mean-variance histograms.
Patlak, J B
1993-01-01
The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state; open channel noise and fast flickers to other states were present, as were a substantial number of subconductance states. "Standard" half-amplitude threshold analysis of these data produced open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance histogram technique provided a more credible analysis of the open, closed, and subconductance times for the patch. I also show that the method produces accurate results on simulated data in a wide variety of conditions, whereas the half-amplitude method, when applied to complex simulated data, shows the same errors as were apparent in the real data. The utility and the limitations of this new method are discussed. PMID:7690261
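A minimal construction of the mean-variance pairs on a synthetic single-channel trace; the curve-fitting of the low-variance regions as a function of window width, which yields the dwell-time constants, is omitted. All levels and noise amplitudes are invented.

```python
import numpy as np

def mean_variance_pairs(trace, window):
    """Slide a window of N consecutive samples over the raw current trace
    and return the (mean, variance) pair for each window position."""
    kernel = np.ones(window) / window
    mean = np.convolve(trace, kernel, mode="valid")
    mean_sq = np.convolve(trace**2, kernel, mode="valid")
    return mean, mean_sq - mean**2

# Synthetic single-channel record: closed (0 pA), sublevel (-0.5 pA), and
# open (-1 pA) segments plus Gaussian noise.
rng = np.random.default_rng(14)
levels = np.repeat([0.0, -0.5, -1.0, 0.0], [400, 150, 300, 400])
trace = levels + rng.normal(0.0, 0.12, size=levels.size)

m, v = mean_variance_pairs(trace, window=20)
hist, _, _ = np.histogram2d(m, v, bins=(60, 40))
# Low-variance rows of `hist` concentrate at the defined current levels;
# repeating this for increasing `window` recovers dwell-time information.
print("peak bin count:", hist.max())
```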
Genetic variations in the serotonergic system contribute to amygdala volume in humans
Li, Jin; Chen, Chunhui; Wu, Karen; Zhang, Mingxia; Zhu, Bi; Chen, Chuansheng; Moyzis, Robert K.; Dong, Qi
2015-01-01
The amygdala plays a critical role in emotion processing and psychiatric disorders associated with emotion dysfunction. Accumulating evidence suggests that amygdala structure is modulated by serotonin-related genes. However, there is a gap between the small contributions of single loci (less than 1%) and the reported 63–65% heritability of amygdala structure. To understand the “missing heritability,” we systematically explored the contribution of serotonin genes on amygdala structure at the gene set level. The present study of 417 healthy Chinese volunteers examined 129 representative polymorphisms in genes from multiple biological mechanisms in the regulation of serotonin neurotransmission. A system-level approach using multiple regression analyses identified that nine SNPs collectively accounted for approximately 8% of the variance in amygdala volume. Permutation analyses showed that the probability of obtaining these findings by chance was low (p = 0.043, permuted for 1000 times). Findings showed that serotonin genes contribute moderately to individual differences in amygdala volume in a healthy Chinese sample. These results indicate that the system-level approach can help us to understand the genetic basis of a complex trait such as amygdala structure. PMID:26500508
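The gene-set-level regression-plus-permutation analysis can be sketched generically. The data below are simulated; only the dimensions (417 subjects, 9 associated SNPs, 1000 permutations) are taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_snps = 417, 9
geno = rng.integers(0, 3, size=(n_subj, n_snps)).astype(float)   # 0/1/2 allele counts
volume = geno @ rng.normal(0, 0.1, n_snps) + rng.normal(0, 1, n_subj)

def r_squared(X, y):
    """Variance in y explained by a multiple regression on X."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

observed = r_squared(geno, volume)
# Permutation null: shuffling the phenotype breaks any genotype link.
null = np.array([r_squared(geno, rng.permutation(volume)) for _ in range(1000)])
p_perm = (1 + (null >= observed).sum()) / (1 + null.size)
print(observed, p_perm)
```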
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, R. M.; Tai, K.-S.
2013-01-01
The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations, with magnitudes of applied observation error varying from zero to twice the estimated realistic error, are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120-hour forecast, increased observation error yields only a slight decline in forecast skill in the extratropics, and no discernible degradation of forecast skill in the tropics.
Enhanced hyperuniformity from random reorganization.
Hexner, Daniel; Chaikin, Paul M; Levine, Dov
2017-04-25
Diffusion relaxes density fluctuations toward a uniform random state whose variance in regions of volume $\ell^d$ scales as $\ell^{-d}$. Systems whose fluctuations decay faster, $\ell^{-(d+\lambda)}$ with $0 < \lambda \le 1$, are called hyperuniform. The larger $\lambda$, the more uniform, with systems like crystals achieving the maximum value $\lambda = 1$. Although finite temperature equilibrium dynamics will not yield hyperuniform states, driven, nonequilibrium dynamics may. Such is the case, for example, in a simple model where overlapping particles are each given a small random displacement. Above a critical particle density $\rho_c$, the system evolves forever, never finding a configuration where no particles overlap. Below $\rho_c$, however, it eventually finds such a state, and stops evolving. This "absorbing state" is hyperuniform up to a length scale $\xi$, which diverges at $\rho_c$. An important question is whether hyperuniformity survives noise and thermal fluctuations. We find that hyperuniformity of the absorbing state is not only robust against noise, diffusion, or activity, but that such perturbations reduce fluctuations toward their limiting behavior, $\lambda \to 1$, a uniformity similar to random close packing and early universe fluctuations, but with arbitrary controllable density.
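The driven model this abstract describes is simple to simulate. Below is a minimal, hypothetical 2D implementation with unit-diameter disks in a periodic box; the parameter values are illustrative, not from the paper.

```python
import numpy as np

def random_organization(n=500, density=0.35, kick=0.1, max_steps=50000, seed=0):
    """Minimal 2D random-organization model: any disk overlapping a
    neighbour receives a small random kick; the dynamics halt if an
    overlap-free (absorbing) configuration is reached."""
    rng = np.random.default_rng(seed)
    L = np.sqrt(n / density)                # box size for unit-diameter disks
    pos = rng.uniform(0.0, L, size=(n, 2))
    for step in range(max_steps):
        d = pos[:, None, :] - pos[None, :, :]
        d -= L * np.round(d / L)            # periodic boundary conditions
        dist = np.hypot(d[..., 0], d[..., 1])
        np.fill_diagonal(dist, np.inf)
        active = (dist < 1.0).any(axis=1)   # disks overlapping a neighbour
        if not active.any():
            return pos, step                # absorbing state found
        pos[active] = (pos[active]
                       + rng.uniform(-kick, kick, size=(active.sum(), 2))) % L
    return pos, max_steps                   # still active: likely above criticality

pos, steps = random_organization()
print("absorbing state after", steps, "sweeps")
```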
Bertocci, Iacopo; Arenas, Francisco; Cacabelos, Eva; Martins, Gustavo M; Seabra, Maria I; Álvaro, Nuno V; Fernandes, Joana N; Gaião, Raquel; Mamede, Nuno; Mulas, Martina; Neto, Ana I
2017-01-30
Differences in the structure and functioning of intensively urbanized vs. less human-affected systems are reported, but such evidence is available to a much larger extent in terrestrial than in marine systems. We examined the hypotheses that (i) urbanization was associated with different patterns of variation of intertidal assemblages between urban and extra-urban environments; (ii) such patterns were consistent across mainland and insular systems, across spatial scales from 10s of cm to 100s of km, and over a three-month period. Several trends emerged: (i) a more homogeneous distribution of most algal groups in the urban compared to the extra-urban condition, and the opposite pattern for most invertebrates; (ii) smaller/larger variances of most organisms where these were, respectively, less/more abundant; (iii) largest variability of most response variables at small scale; (iv) no facilitation of invasive species by urbanization, and larger cover of canopy-forming algae in the insular extra-urban condition. Present findings confirm the acknowledged notion that future management strategies will need to include representative assemblages and their relevant scales of variation associated with urbanization gradients on both the mainland and the islands. Copyright © 2016 Elsevier Ltd. All rights reserved.
Genetic variations associated with six-white-point coat pigmentation in Diannan small-ear pigs
Lü, Meng-Die; Han, Xu-Man; Ma, Yun-Fei; Irwin, David M.; Gao, Yun; Deng, Jia-Kun; Adeola, Adeniyi C.; Xie, Hai-Bing; Zhang, Ya-Ping
2016-01-01
A common phenotypic difference among domestic animals is variation in coat color. Six-white-point is a pigmentation pattern observed in various pig breeds, and it seems to have evolved through several different mechanistic pathways. Herein, we re-sequenced whole genomes of 31 Diannan small-ear pigs from China and found that the six-white-point coat color in Diannan small-ear pigs is likely regulated by polygenic loci, rather than by the MC1R locus. Strong associations were observed at three loci (EDNRB, CNTLN, and PINK1), which together explain about 20 percent of the total coat color variance in Diannan small-ear pigs. We found a mutation that is highly differentiated between six-white-point and black Diannan small-ear pigs, located in a conserved noncoding sequence upstream of the EDNRB gene and within a putative binding site of the CEBPB protein. This study advances our understanding of coat color evolution in Diannan small-ear pigs and extends the traditional view of coat color as a monogenic trait. PMID:27270507
Business owners' optimism and business performance after a natural disaster.
Bronson, James W; Faircloth, James B; Valentine, Sean R
2006-12-01
Previous work indicates that individuals' optimism is related to superior performance in adverse situations. This study examined correlations between business owners' optimism scores and measures of business recovery after flooding, but found only weak support (very small common variance) for a link with sales recovery. Using traditional measures of recovery, there was little empirical evidence in this study that optimism would be of value in identifying businesses at risk after a natural disaster.
Small-scale grassland assembly patterns differ above and below the soil surface.
Price, Jodi N; Hiiesalu, Inga; Gerhold, Pille; Pärtel, Meelis
2012-06-01
The existence of deterministic assembly rules for plant communities remains an important and unresolved topic in ecology. Most studies examining community assembly have sampled aboveground species diversity and composition. However, plants also coexist belowground, and many coexistence theories invoke belowground competition as an explanation for aboveground patterns. We used next-generation sequencing that enables the identification of roots and rhizomes from mixed-species samples to measure coexisting species at small scales in temperate grasslands. We used comparable data from above (conventional methods) and below (molecular techniques) the soil surface (0.1 × 0.1 × 0.1 m volume). To detect evidence for nonrandom patterns in the direction of biotic or abiotic assembly processes, we used three assembly rules tests (richness variance, guild proportionality, and species co-occurrence indices) as well as pairwise association tests. We found support for biotic assembly rules aboveground, with lower variance in species richness than expected and more negative species associations. Belowground plant communities were structured more by abiotic processes, with greater variability in richness and guild proportionality than expected. Belowground assembly is largely driven by abiotic processes, with little evidence for competition-driven assembly, and this has implications for plant coexistence theories that are based on competition for soil resources.
Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.
Sztepanacz, Jacqueline L; Blows, Mark W
2017-07-01
The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward, and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine whether the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
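scipy has no built-in Tracy-Widom distribution, so the sketch below illustrates the overdispersion claim by simulation instead of the TW scaling itself: eigenvalues of sample covariance matrices are drawn under a spherical truth in which every population eigenvalue equals 1, and the leading sample eigenvalues come out biased upward while the trailing ones are biased downward.

```python
import numpy as np

rng = np.random.default_rng(2)
n_traits, n_obs, n_sims = 8, 100, 2000

# Spherical truth: all population eigenvalues equal 1 (no real structure).
cov_true = np.eye(n_traits)

sim_eigs = np.empty((n_sims, n_traits))
for i in range(n_sims):
    X = rng.multivariate_normal(np.zeros(n_traits), cov_true, size=n_obs)
    # Sorted eigenvalues of the sample covariance matrix, largest first.
    sim_eigs[i] = np.sort(np.linalg.eigvalsh(np.cov(X.T)))[::-1]

# Overdispersion by sampling error alone: means well above 1 at the top
# of the spectrum and well below 1 at the bottom.
print(sim_eigs.mean(axis=0))
```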
Optical Pattern Recognition for Missile Guidance.
1979-10-01
...to the voltage dependent sensitometry noted earlier, to the low intensity available, and to the broadband nature of the Xe source used...(Fig. 6) that was used to control the modulator...measures, namely, carrier period variance, carrier phase modulation or phase variance, and instantaneous frequency...This equalizing correlator system is another method by which the flexibility and repertoire of...
A Sociotechnical Systems Approach To Coastal Marine Spatial Planning
2016-12-01
...the authors followed the MEAD step of identifying variances and creating a matrix of these variances. Then the authors were able to propose methods...potential politics involved, and the risks involved in proposing and attempting to start up a new marine aquaculture operation...
NASA Astrophysics Data System (ADS)
Sharma, P.; Kumawat, J.; Kumar, S.; Sahu, K.; Verma, Y.; Gupta, P. K.; Rao, K. D.
2018-02-01
We report on a study to assess the feasibility of a swept source-based speckle variance optical coherence tomography setup for monitoring cutaneous microvasculature. Punch wounds created in the ear pinnae of diabetic mice were monitored at different times post wounding to assess the structural and vascular changes. It was observed that the epithelium thickness increases post wounding and continues to be thick even after healing. Also, the wound size assessed by vascular images is larger than the physical wound size. The results show that the developed speckle variance optical coherence tomography system can be used to monitor vascular regeneration during wound healing in diabetic mice.
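Speckle-variance contrast of the kind used here is, at its core, an inter-frame intensity variance. The sketch below is a minimal numpy illustration under hypothetical acquisition parameters (8 repeated B-scans at the same position); it is not the authors' processing pipeline.

```python
import numpy as np

def speckle_variance(frames):
    """Inter-frame speckle variance: `frames` is an (N, Z, X) stack of
    B-scans repeated at the same position. Static tissue decorrelates
    little (low variance); flowing blood decorrelates strongly (high
    variance), producing vascular contrast."""
    return frames.var(axis=0)

# Hypothetical stack: mostly stable speckle plus a decorrelating "vessel".
rng = np.random.default_rng(3)
frames = rng.normal(1.0, 0.02, size=(8, 512, 256))
frames[:, 200:220, 100:120] += rng.normal(0, 0.5, size=(8, 20, 20))

sv = speckle_variance(frames)   # bright where intensity decorrelates
```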
Lah, Suncica; Smith, Mary Lou
2014-01-01
Children with temporal lobe epilepsy are at risk for deficits in new learning (episodic memory) and literacy skills. Semantic memory deficits and double dissociations between episodic and semantic memory have recently been found in this patient population. In the current study we investigate whether impairments of these two distinct memory systems relate to literacy skills. Fifty-seven children with unilateral temporal lobe epilepsy completed tests of verbal memory (episodic and semantic) and literacy skills (reading and spelling accuracy, and reading comprehension). For the entire group, semantic memory explained over 30% of variance in each of the literacy domains. Episodic memory explained a significant, but rather small proportion (< 10%) of variance in reading and spelling accuracy, but not in reading comprehension. Moreover, when children with opposite patterns of specific memory impairments (intact semantic/impaired episodic, intact episodic/impaired semantic) were compared, significant reductions in literacy skills were evident only in children with semantic memory impairments, but not in children with episodic memory impairments, relative to the norms and to children with temporal lobe epilepsy who had intact memory. Our study provides the first evidence for differential relations between episodic and semantic memory impairments and literacy skills in children with temporal lobe epilepsy. As such, it highlights the urgent need to consider semantic memory deficits in the management of children with temporal lobe epilepsy and to undertake further research into the nature of reading difficulties of children with semantic memory impairments.
Trends in Classroom Observation Scores.
Casabianca, Jodi M; Lockwood, J R; McCaffrey, Daniel F
2015-04-01
Observations and ratings of classroom teaching and interactions collected over time are susceptible to trends in both the quality of instruction and rater behavior. These trends have potential implications for inferences about teaching and for study design. We use scores on the Classroom Assessment Scoring System-Secondary (CLASS-S) protocol from 458 middle school teachers over a 2-year period to study changes over time in (a) the average quality of teaching for the population of teachers, (b) the average severity of the population of raters, and (c) the severity of individual raters. To obtain these estimates and assess them in the context of other factors that contribute to the variability in scores, we develop an augmented G study model that is broadly applicable for modeling sources of variability in classroom observation ratings data collected over time. In our data, we found that trends in teaching quality were small. Rater drift was very large during raters' initial days of observation and persisted throughout nearly 2 years of scoring. Raters did not converge to a common level of severity; using our model we estimate that variability among raters actually increases over the course of the study. Variance decompositions based on the model find that trends are a modest source of variance relative to overall rater effects, rater errors on specific lessons, and residual error. The discussion provides possible explanations for trends and rater divergence as well as implications for designs collecting ratings over time.
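As an illustration of the kind of variance decomposition described above, the sketch below fits a crossed teacher/rater variance-components model with a fixed time trend using statsmodels; the column names and simulated data are hypothetical, and the paper's augmented G study model is richer than this.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format ratings, one row per scored lesson.
rng = np.random.default_rng(4)
n = 400
df = pd.DataFrame({
    "teacher": rng.integers(0, 30, n),
    "rater": rng.integers(0, 10, n),
    "day": rng.integers(0, 600, n).astype(float),
})
teacher_eff = 0.30 * rng.standard_normal(30)
rater_eff = 0.20 * rng.standard_normal(10)
df["score"] = (4.0 + teacher_eff[df.teacher] + rater_eff[df.rater]
               + 0.0005 * df.day                 # slow drift over time
               + 0.40 * rng.standard_normal(n))  # residual error

# Single dummy group so teacher and rater enter as crossed variance
# components; the fixed effect of `day` captures the time trend.
df["all"] = 1
model = smf.mixedlm(
    "score ~ day", df, groups="all", re_formula="0",
    vc_formula={"teacher": "0 + C(teacher)", "rater": "0 + C(rater)"})
print(model.fit().summary())   # fixed trend plus variance components
```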
Lifelong haematopoiesis is established by hundreds of precursors throughout mammalian ontogeny.
Ganuza, Miguel; Hall, Trent; Finkelstein, David; Chabot, Ashley; Kang, Guolian; McKinney-Freeman, Shannon
2017-10-01
Current dogma asserts that mammalian lifelong blood production is established by a small number of blood progenitors. However, this model is based on assays that require the disruption, transplantation and/or culture of embryonic tissues. Here, we used the sample-to-sample variance of a multicoloured lineage trace reporter to assess the frequency of emerging lifelong blood progenitors while avoiding the disruption, culture or transplantation of embryos. We find that approximately 719 Flk1+ mesodermal precursors, 633 VE-cadherin+ endothelial precursors and 545 Vav1+ nascent blood stem and progenitor cells emerge to establish the haematopoietic system at embryonic days (E)7-E8.5, E8.5-E11.5 and E11.5-E14.5, respectively. We also determined that the spatio-temporal recruitment of endothelial blood precursors begins at E8.5 and ends by E10.5, and that many c-Kit+ clusters of newly specified blood progenitors in the aorta are polyclonal in origin. Our work illuminates the dynamics of the developing mammalian blood system during homeostasis.
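The variance-based counting can be made concrete under a simple binomial model, which is an assumption here rather than necessarily the authors' exact estimator: if each of N precursors independently adopts a given reporter colour with probability p, the colour fraction across animals has variance p(1 - p)/N, so N is estimable from the sample-to-sample variance.

```python
import numpy as np

def estimate_founders(fractions, p):
    """Binomial founder-number estimate: across animals,
    Var(colour fraction) = p(1 - p)/N, so N = p(1 - p)/Var."""
    return p * (1 - p) / np.var(fractions, ddof=1)

# Simulated check: 40 mice, 600 true precursors, colour probability 0.25.
rng = np.random.default_rng(5)
N_true, p = 600, 0.25
fractions = rng.binomial(N_true, p, size=40) / N_true
print(estimate_founders(fractions, p))   # ~600, up to sampling noise
```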
Analysis of Application Power and Schedule Composition in a High Performance Computing Environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elmore, Ryan; Gruchalla, Kenny; Phillips, Caleb
As the capacity of high performance computing (HPC) systems continues to grow, small changes in energy management have the potential to produce significant energy savings. In this paper, we employ an extensive informatics system for aggregating and analyzing real-time performance and power use data to evaluate energy footprints of jobs running in an HPC data center. We look at the effects of algorithmic choices for a given job on the resulting energy footprints, analyze application-specific power consumption, and summarize average power use in the aggregate. All of these views reveal meaningful power variance between classes of applications as well as chosen methods for a given job. Using these data, we discuss energy-aware cost-saving strategies based on reordering the HPC job schedule. Using historical job and power data, we present a hypothetical job schedule reordering that: (1) reduces the facility's peak power draw and (2) manages power in conjunction with a large-scale photovoltaic array. Lastly, we leverage these data to understand the practical limits on predicting key power use metrics at the time of submission.
Diversity-optimal power loading for intensity modulated MIMO optical wireless communications.
Zhang, Yan-Yu; Yu, Hong-Yi; Zhang, Jian-Kang; Zhu, Yi-Jun
2016-04-18
In this paper, we consider the design of a space code for an intensity-modulated direct-detection multi-input multi-output optical wireless communication (IM/DD MIMO-OWC) system, in which channel coefficients are independent and non-identically log-normal distributed, with variances and means known at the transmitter and channel state information available at the receiver. Utilizing the existing space code design criterion for IM/DD MIMO-OWC with a maximum likelihood (ML) detector, we design a diversity-optimal space code (DOSC) that maximizes both large-scale and small-scale diversity gains, and prove that the spatial repetition code (RC) with a diversity-optimized power allocation is diversity-optimal among all high-dimensional nonnegative space code schemes under a commonly used optical power constraint. In addition, we show that one of the significant advantages of the DOSC is that it allows low-complexity ML detection. Simulation results indicate that in high signal-to-noise ratio (SNR) regimes, our proposed DOSC significantly outperforms RC, which is the best space code currently available for such systems.
Radiometric errors in complex Fourier transform spectrometry.
Sromovsky, Lawrence A
2003-04-01
A complex spectrum arises from the Fourier transform of an asymmetric interferogram. A rigorous derivation shows that the rms noise in the real part of that spectrum is indeed given by the commonly used relation $\sigma_R = 2X\,\mathrm{NEP}/(\eta A\Omega\sqrt{\tau N})$, where NEP is the delay-independent and uncorrelated detector noise-equivalent power per unit bandwidth, $\pm X$ is the delay range measured with $N$ samples averaging for a time $\tau$ per sample, $\eta$ is the system optical efficiency, and $A\Omega$ is the system throughput. A real spectrum produced by complex calibration with two complex reference spectra [Appl. Opt. 27, 3210 (1988)] has a variance $\sigma_L^2 = \sigma_R^2 + \sigma_c^2 (L_h - L_s)^2/(L_h - L_c)^2 + \sigma_h^2 (L_s - L_c)^2/(L_h - L_c)^2$, valid for $\sigma_R$, $\sigma_c$, and $\sigma_h$ small compared with $L_h - L_c$, where $L_s$, $L_h$, and $L_c$ are scene, hot reference, and cold reference spectra, respectively, and $\sigma_c$ and $\sigma_h$ are the respective combined uncertainties in knowledge and measurement of the hot and cold reference spectra.
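A small numeric sketch of the quoted variance propagation, with made-up radiance values in arbitrary units:

```python
import numpy as np

def calibrated_variance(Ls, Lh, Lc, sig_R, sig_h, sig_c):
    """Variance of a scene radiance calibrated against hot and cold
    references, per the expression quoted above (valid when the sigmas
    are small compared with Lh - Lc)."""
    span = Lh - Lc
    return (sig_R**2
            + sig_c**2 * (Lh - Ls)**2 / span**2
            + sig_h**2 * (Ls - Lc)**2 / span**2)

# Hypothetical values: scene radiance midway between the two references.
print(calibrated_variance(Ls=5.0, Lh=9.0, Lc=1.0,
                          sig_R=0.05, sig_h=0.02, sig_c=0.02))
```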
Evaluating Security Controls Based on Key Performance Indicators and Stakeholder Mission
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheldon, Frederick T; Abercrombie, Robert K; Mili, Ali
2008-01-01
Good security metrics are required to make good decisions about how to design security countermeasures, to choose between alternative security architectures, and to improve security during operations. Therefore, in essence, measurement can be viewed as a decision aid. The lack of sound practical security metrics is severely hampering progress in the development of secure systems. The Cyberspace Security Econometrics System (CSES) offers the following advantages over traditional measurement systems: (1) CSES reflects the variances that exist amongst different stakeholders of the same system. Different stakeholders will typically attach different stakes to the same requirement or service (e.g., a service may be provided by an information technology system or process control system, etc.). (2) For a given stakeholder, CSES reflects the variance that may exist among the stakes he or she attaches to meeting each requirement. The same stakeholder may attach different stakes to satisfying different requirements within the overall system specification. (3) For a given compound specification (e.g., combinations of commercial off-the-shelf software and/or hardware), CSES reflects the variance that may exist amongst the levels of verification and validation (i.e., certification) performed on components of the specification. The certification activity may produce higher levels of assurance across different components of the specification than others. Consequently, this paper introduces the basis, objectives and capabilities for the CSES, including inputs/outputs and the basic structural and mathematical underpinnings.
Schilling, Oleg; Mueschke, Nicholas J.
2010-10-18
Data from a 1152 × 760 × 1280 direct numerical simulation (DNS) of a transitional Rayleigh-Taylor mixing layer, modeled after a small Atwood number water channel experiment, is used to comprehensively investigate the structure of mean and turbulent transport and mixing. The simulation had physical parameters and initial conditions approximating those in the experiment. The budgets of the mean vertical momentum, heavy-fluid mass fraction, turbulent kinetic energy, turbulent kinetic energy dissipation rate, heavy-fluid mass fraction variance, and heavy-fluid mass fraction variance dissipation rate equations are constructed using Reynolds averaging applied to the DNS data. The relative importance of mean and turbulent production, turbulent dissipation and destruction, and turbulent transport are investigated as a function of Reynolds number and across the mixing layer to provide insight into the flow dynamics not presently available from experiments. The analysis of the budgets supports the assumption for small Atwood number, Rayleigh-Taylor driven flows that the principal transport mechanisms are buoyancy production, turbulent production, turbulent dissipation, and turbulent diffusion (shear and mean field production are negligible). As the Reynolds number increases, the turbulent production in the turbulent kinetic energy dissipation rate equation becomes the dominant production term, while the buoyancy production plateaus. Distinctions between momentum and scalar transport are also noted, where the turbulent kinetic energy and its dissipation rate both grow in time and are peaked near the center plane of the mixing layer, while the heavy-fluid mass fraction variance and its dissipation rate initially grow and then begin to decrease as mixing progresses and reduces density fluctuations. All terms in the transport equations generally grow or decay, with no qualitative change in their profile, except for the pressure flux contribution to the total turbulent kinetic energy flux, which changes sign early in time (a countergradient effect). The production-to-dissipation ratios corresponding to the turbulent kinetic energy and heavy-fluid mass fraction variance are large and vary strongly at small evolution times, decrease with time, and nearly asymptote as the flow enters a self-similar regime. The late-time turbulent kinetic energy production-to-dissipation ratio is larger than observed in shear-driven turbulent flows. The order-of-magnitude estimates of the terms in the transport equations are shown to be consistent with the DNS at late time, and also confirm both the dominant terms and their evolutionary behavior. Thus, these results are useful for identifying the dynamically important terms requiring closure, and assessing the accuracy of the predictions of Reynolds-averaged Navier-Stokes and large-eddy simulation models of turbulent transport and mixing in transitional Rayleigh-Taylor instability-generated flow.
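Two of the budgeted quantities above, the turbulent kinetic energy and the heavy-fluid mass fraction variance, reduce to plane averages over the homogeneous directions of the gridded fields. The sketch below shows that Reynolds-averaging step on toy arrays standing in for DNS output; the grid, field names, and axis conventions are assumptions.

```python
import numpy as np

def plane_stats(u, v, w, f):
    """Reynolds-average 3D fields over the two homogeneous directions
    (axes 0 and 1), leaving profiles along the inhomogeneous mixing
    direction (axis 2)."""
    mean = lambda q: q.mean(axis=(0, 1), keepdims=True)
    up, vp, wp, fp = u - mean(u), v - mean(v), w - mean(w), f - mean(f)
    tke = 0.5 * (up**2 + vp**2 + wp**2).mean(axis=(0, 1))   # k(z)
    fvar = (fp**2).mean(axis=(0, 1))               # mass fraction variance
    return tke, fvar

# Toy fields on a small grid in place of real DNS output.
rng = np.random.default_rng(6)
shape = (32, 32, 64)
u, v, w = (rng.standard_normal(shape) for _ in range(3))
f = np.clip(0.5 + 0.1 * rng.standard_normal(shape), 0, 1)  # heavy-fluid fraction
k_profile, fvar_profile = plane_stats(u, v, w, f)
```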
Chakraborty, Sudip; Fu, Rong; Massie, Steven T; Stephens, Graeme
2016-07-05
Using collocated measurements from geostationary and polar-orbital satellites over tropical continents, we provide a large-scale statistical assessment of the relative influence of aerosols and meteorological conditions on the lifetime of mesoscale convective systems (MCSs). Our results show that MCSs' lifetime increases by 3-24 h when vertical wind shear (VWS) and convective available potential energy (CAPE) are moderate to high and ambient aerosol optical depth (AOD) increases by 1 SD (1σ). However, this influence is not as strong as that of CAPE, relative humidity, and VWS, which increase MCSs' lifetime by 3-30 h, 3-27 h, and 3-30 h per 1σ of these variables and explain up to 36%, 45%, and 34%, respectively, of the variance of the MCSs' lifetime. AOD explains up to 24% of the total variance of MCSs' lifetime during the decay phase. This result is physically consistent with that of the variation of the MCSs' ice water content (IWC) with aerosols, which accounts for 35% and 27% of the total variance of the IWC in convective cores and anvil, respectively, during the decay phase. The effect of aerosols on MCSs' lifetime varies between different continents. AOD appears to explain up to 20-22% of the total variance of MCSs' lifetime over equatorial South America compared with 8% over equatorial Africa. Aerosols over the Indian Ocean can explain 20% of total variance of MCSs' lifetime over South Asia because such MCSs form and develop over the ocean. These regional differences of aerosol impacts may be linked to different meteorological conditions.
An improved method for bivariate meta-analysis when within-study correlations are unknown.
Hong, Chuan; D Riley, Richard; Chen, Yong
2018-03-01
Multivariate meta-analysis, which jointly analyzes multiple and possibly correlated outcomes in a single analysis, is becoming increasingly popular in recent years. An attractive feature of the multivariate meta-analysis is its ability to account for the dependence between multiple estimates from the same study. However, standard inference procedures for multivariate meta-analysis require the knowledge of within-study correlations, which are usually unavailable. This limits standard inference approaches in practice. Riley et al. proposed a working model and an overall synthesis correlation parameter to account for the marginal correlation between outcomes, where the only data needed are those required for a separate univariate random-effects meta-analysis. As within-study correlations are not required, the Riley method is applicable to a wide variety of evidence synthesis situations. However, the standard variance estimator of the Riley method is not entirely correct under many important settings. As a consequence, the coverage of a function of pooled estimates may not reach the nominal level even when the number of studies in the multivariate meta-analysis is large. In this paper, we improve the Riley method by proposing a robust variance estimator, which is asymptotically correct even when the model is misspecified (i.e., when the likelihood function is incorrect). Simulation studies of a bivariate meta-analysis, in a variety of settings, show a function of pooled estimates has improved performance when using the proposed robust variance estimator. In terms of individual pooled estimates themselves, the standard variance estimator and robust variance estimator give similar results to the original method, with appropriate coverage. The proposed robust variance estimator performs well when the number of studies is relatively large. Therefore, we recommend the use of the robust method for meta-analyses with a relatively large number of studies (e.g., m ≥ 50). When the sample size is relatively small, we recommend the use of the robust method under the working independence assumption. We illustrate the proposed method through 2 meta-analyses. Copyright © 2017 John Wiley & Sons, Ltd.
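The paper's estimator is not reproduced here; as a hedged, generic illustration of the robust-vs-model-based distinction it draws, the sketch below computes both standard errors for an ordinary inverse-variance pooled estimate (the univariate building block of the Riley working model). All numbers are hypothetical.

```python
import numpy as np

def pooled_with_robust_se(y, v, tau2=0.0):
    """Inverse-variance pooled estimate with (i) the model-based
    standard error and (ii) a sandwich-type robust standard error
    that stays valid if the assumed variances are wrong."""
    w = 1.0 / (v + tau2)
    pooled = np.sum(w * y) / np.sum(w)
    se_model = np.sqrt(1.0 / np.sum(w))
    se_robust = np.sqrt(np.sum(w**2 * (y - pooled)**2) / np.sum(w)**2)
    return pooled, se_model, se_robust

# Hypothetical study estimates and within-study variances.
y = np.array([0.30, 0.10, 0.45, 0.22, 0.05, 0.38])
v = np.array([0.02, 0.03, 0.05, 0.01, 0.04, 0.02])
print(pooled_with_robust_se(y, v, tau2=0.01))
```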
Measurement System Characterization in the Presence of Measurement Errors
NASA Technical Reports Server (NTRS)
Commo, Sean A.
2012-01-01
In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio: the ratio of response error variance to measurement error variance.
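The modified least squares method itself is not specified in this abstract. As a stand-in illustration of how a variance ratio enters an errors-in-variables fit, the sketch below uses classical Deming regression, the textbook estimator that consumes exactly this ratio; it is named here explicitly because it is not the paper's method.

```python
import numpy as np

def deming_slope(x, y, delta):
    """Deming regression slope, where delta = (response error variance) /
    (measurement error variance). As delta -> infinity (error-free x),
    this recovers the ordinary least squares slope of y on x."""
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    num = syy - delta * sxx + np.sqrt((syy - delta * sxx)**2
                                      + 4 * delta * sxy**2)
    return num / (2 * sxy)

# Simulated calibration: true slope 2, errors on both axes.
rng = np.random.default_rng(7)
truth = np.linspace(0, 10, 200)
x = truth + rng.normal(0, 0.5, truth.size)        # factor measured with error
y = 2.0 * truth + rng.normal(0, 1.0, truth.size)  # noisy response
print(deming_slope(x, y, delta=(1.0 / 0.5)**2))   # close to the true slope 2
```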
Atmospheric turbulence effects measured along horizontal-path optical retro-reflector links.
Mahon, Rita; Moore, Christopher I; Ferraro, Mike; Rabinovich, William S; Suite, Michele R
2012-09-01
The scintillation measured over close-to-ground retro-reflector links can be substantially enhanced due to the correlations experienced by both the direct and reflected echo beams. Experiments were carried out at China Lake, California, over a variety of ranges. The emphasis in this paper is on presenting the data from the 1.1 km retro-reflecting link that was operated for four consecutive days. The dependence of the measured irradiance flux variance on the solar fluence and on the temperature gradient above the ground is presented. The data are consistent with scintillation minima near sunrise and sunset, rising rapidly during the day and saturating at irradiance flux variances of ~10. Measured irradiance probability distributions of the retro-reflected beam are compared with standard probability density functions. The ratio of the irradiance flux variances on the retro-reflected to the direct, single-pass case is investigated with two data sets, one from a monostatic system and the other using an off-axis receiver system.
Imaging shear wave propagation for elastic measurement using OCT Doppler variance method
NASA Astrophysics Data System (ADS)
Zhu, Jiang; Miao, Yusi; Qu, Yueqiao; Ma, Teng; Li, Rui; Du, Yongzhao; Huang, Shenghai; Shung, K. Kirk; Zhou, Qifa; Chen, Zhongping
2016-03-01
In this study, we have developed an acoustic radiation force orthogonal excitation optical coherence elastography (ARFOE-OCE) method for visualizing the shear wave and calculating the shear modulus based on the OCT Doppler variance method. Vibration perpendicular to the OCT detection direction is induced by a remote acoustic radiation force (ARF), and the shear wave propagating along the OCT beam is visualized by the OCT M-scan. A homogeneous agar phantom and a two-layer agar phantom are measured using the ARFOE-OCE system. The results show that the ARFOE-OCE system is able to measure the shear modulus beyond the OCT imaging depth. The OCT Doppler variance method, instead of the OCT Doppler phase method, is used for vibration detection without the need for high phase stability or phase-wrapping correction. Using an M-scan instead of a B-scan to visualize the shear wave also simplifies the data processing.
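The elastic calculation implied above rests on the standard relation G = ρc² between shear modulus, density, and shear-wave speed. The sketch below, with hypothetical arrival-time data, estimates the speed from arrival times tracked along the propagation direction and converts it to a modulus; it is not the authors' processing code.

```python
import numpy as np

def shear_modulus(depths_m, arrival_times_s, density=1000.0):
    """Estimate shear-wave speed from arrival time vs. propagation
    distance (slope of a linear fit), then apply G = rho * c**2."""
    slope = np.polyfit(depths_m, arrival_times_s, 1)[0]   # dt/dz = 1/c
    speed = 1.0 / slope
    return density * speed**2

# Hypothetical arrivals: a 2.2 m/s wave tracked over 1 mm, plus jitter.
z = np.linspace(0, 1e-3, 20)
t = z / 2.2 + 1e-6 * np.random.default_rng(8).standard_normal(20)
print(shear_modulus(z, t))   # ~4.8 kPa for a tissue-like density
```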
Self-monitoring of driving speed.
Etzioni, Shelly; Erev, Ido; Ishaq, Robert; Elias, Wafa; Shiftan, Yoram
2017-09-01
In-vehicle data recorders (IVDR) have been found to facilitate safe driving and are highly valuable in accident analysis. Nevertheless, it is not easy to convince drivers to use them. Part of the difficulty is related to the "Big Brother" concern: installing IVDR impairs the drivers' privacy. The "Big Brother" concern can be mitigated by adding a turn-off switch to the IVDR. However, this addition comes at the expense of increasing speed variability between drivers, which is known to impair safety. The current experimental study examines the significance of this negative effect of a turn-off switch under two experimental settings representing different incentive structures: small and large fines for speeding. 199 students were asked to participate in a computerized speeding dilemma task, where they could control the speed of their "car" using "brake" and "speed" buttons, corresponding to automatic car foot pedals. The participants in two experimental conditions had IVDR installed in their "cars", and were told that they could turn it off at any time. Driving with active IVDR implied some probability of "fines" for speeding, and the two experimental groups differed with respect to the fine's magnitude, small or large. The results indicate that the option to use IVDR reduced speeding and speed variance. In addition, the results indicate that the reduction of speed variability was maximal in the small fine group. These results suggest that using IVDR with gentle fines and with a turn-off option maintains the positive effect of IVDR, addresses the "Big Brother" concern, and does not increase speed variance. Copyright © 2017 Elsevier Ltd. All rights reserved.
Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B
2017-04-01
Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo, and it is logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models that (1) the population-average parameters have an important interpretation for public health applications and (2) they avoid untestable assumptions on latent variable distributions and avoid parametric assumptions about error distributions, therefore, providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equation for stepped wedge cluster randomized trials and for parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.
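A minimal sketch of the marginal-mean GEE fit this abstract studies, on simulated stepped wedge data (cluster counts, periods, and effect sizes are all hypothetical). The specific small-sample corrections evaluated in the paper are not reproduced; the bias-reduced covariance shown in the last line is assumed available in recent statsmodels releases.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical stepped wedge data: clusters cross over to treatment at
# staggered periods; binary individual-level outcome.
rng = np.random.default_rng(9)
rows = []
for cluster in range(12):
    crossover = 1 + cluster % 4            # period at which cluster switches
    for period in range(5):
        for _ in range(25):
            treat = int(period >= crossover)
            p = 0.3 + 0.1 * treat + 0.02 * period
            rows.append((cluster, period, treat, rng.binomial(1, p)))
df = pd.DataFrame(rows, columns=["cluster", "period", "treat", "y"])

# Population-average model with an exchangeable working correlation.
model = smf.gee("y ~ treat + C(period)", "cluster", df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()
print(res.summary())                        # default sandwich (robust) SEs
# With few clusters the default sandwich SEs are biased downward; a
# bias-reduced covariance may be available depending on the version:
print(res.standard_errors(cov_type="bias_reduced"))
```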
Population is the main driver of war group size and conflict casualties.
Oka, Rahul C; Kissel, Marc; Golitko, Mark; Sheridan, Susan Guise; Kim, Nam C; Fuentes, Agustín
2017-12-26
The proportions of individuals involved in intergroup coalitional conflict, measured by war group size (W), conflict casualties (C), and overall group conflict deaths (G), have declined with respect to growing populations, implying that states are less violent than small-scale societies. We argue that these trends are better explained by scaling laws shared by both past and contemporary societies regardless of social organization, where group population (P) directly determines W and indirectly determines C and G. W is shown to be a power law function of P with scaling exponent X [demographic conflict investment (DCI)]. C is shown to be a power law function of W with scaling exponent Y [conflict lethality (CL)]. G is shown to be a power law function of P with scaling exponent Z [group conflict mortality (GCM)]. Results show that, while W/P and G/P decrease as expected with increasing P, C/W increases with growing W. Small-scale societies show higher, but also more variable, DCI and CL than contemporary states. We find no significant differences in DCI or CL between small-scale societies and contemporary states undergoing drafts or conflict, after accounting for variance and scale. We calculate relative measures of DCI and CL applicable to all societies that can be tracked over time for one or multiple actors. In light of the recent global emergence of populist, nationalist, and sectarian violence, our comparison-focused approach to DCI and CL will enable better models and analysis of the landscapes of violence in the 21st century. Copyright © 2017 the Author(s). Published by PNAS.
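Power-law scaling exponents like X, Y, and Z above are conventionally estimated as slopes on the log-log scale. A minimal sketch with simulated population and war group size data (the true exponent 0.8 and prefactor 0.5 are invented for the demonstration):

```python
import numpy as np

def scaling_exponent(P, W):
    """Estimate the exponent X and prefactor c in W = c * P**X by
    ordinary least squares on the log-log scale."""
    slope, intercept = np.polyfit(np.log(P), np.log(W), 1)
    return slope, np.exp(intercept)

# Hypothetical societies spanning five orders of magnitude in population.
rng = np.random.default_rng(10)
P = np.exp(rng.uniform(np.log(1e2), np.log(1e7), 60))
W = 0.5 * P**0.8 * np.exp(rng.normal(0, 0.3, P.size))  # W ~ P^0.8 with scatter

X, c = scaling_exponent(P, W)
print(X, c)   # close to 0.8 and 0.5
```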